diff --git a/website/route-lockfile.txt b/website/route-lockfile.txt index 216b53cf5d60..3375c692ab0c 100644 --- a/website/route-lockfile.txt +++ b/website/route-lockfile.txt @@ -62,6 +62,7 @@ /ar/subgraphs/guides/near/ /ar/subgraphs/guides/polymarket/ /ar/subgraphs/guides/secure-api-keys-nextjs/ +/ar/subgraphs/guides/subgraph-composition/ /ar/subgraphs/guides/subgraph-debug-forking/ /ar/subgraphs/guides/subgraph-uncrashable/ /ar/subgraphs/guides/transfer-to-the-graph/ @@ -104,6 +105,7 @@ /ar/supported-networks/blast-mainnet/ /ar/supported-networks/blast-testnet/ /ar/supported-networks/bnb-op/ +/ar/supported-networks/bnb-svm/ /ar/supported-networks/boba-bnb-testnet/ /ar/supported-networks/boba-bnb/ /ar/supported-networks/boba-testnet/ @@ -158,6 +160,7 @@ /ar/supported-networks/kaia/ /ar/supported-networks/kylin/ /ar/supported-networks/lens-testnet/ +/ar/supported-networks/lens/ /ar/supported-networks/linea-sepolia/ /ar/supported-networks/linea/ /ar/supported-networks/litecoin/ @@ -188,6 +191,7 @@ /ar/supported-networks/polygon-amoy/ /ar/supported-networks/polygon-zkevm-cardona/ /ar/supported-networks/polygon-zkevm/ +/ar/supported-networks/ronin/ /ar/supported-networks/rootstock-testnet/ /ar/supported-networks/rootstock/ /ar/supported-networks/scroll-sepolia/ @@ -204,10 +208,13 @@ /ar/supported-networks/sonic/ /ar/supported-networks/starknet-mainnet/ /ar/supported-networks/starknet-testnet/ +/ar/supported-networks/stellar-testnet/ +/ar/supported-networks/stellar/ /ar/supported-networks/swellchain-sepolia/ /ar/supported-networks/swellchain/ /ar/supported-networks/telos-testnet/ /ar/supported-networks/telos/ +/ar/supported-networks/ultra/ /ar/supported-networks/unichain-testnet/ /ar/supported-networks/unichain/ /ar/supported-networks/vana-moksha/ @@ -228,6 +235,7 @@ /ar/token-api/evm/get-ohlc-prices-evm-by-contract/ /ar/token-api/evm/get-tokens-evm-by-contract/ /ar/token-api/evm/get-transfers-evm-by-address/ +/ar/token-api/faq/ /ar/token-api/mcp/claude/ 
/ar/token-api/mcp/cline/ /ar/token-api/mcp/cursor/ @@ -298,6 +306,7 @@ /cs/subgraphs/guides/near/ /cs/subgraphs/guides/polymarket/ /cs/subgraphs/guides/secure-api-keys-nextjs/ +/cs/subgraphs/guides/subgraph-composition/ /cs/subgraphs/guides/subgraph-debug-forking/ /cs/subgraphs/guides/subgraph-uncrashable/ /cs/subgraphs/guides/transfer-to-the-graph/ @@ -340,6 +349,7 @@ /cs/supported-networks/blast-mainnet/ /cs/supported-networks/blast-testnet/ /cs/supported-networks/bnb-op/ +/cs/supported-networks/bnb-svm/ /cs/supported-networks/boba-bnb-testnet/ /cs/supported-networks/boba-bnb/ /cs/supported-networks/boba-testnet/ @@ -394,6 +404,7 @@ /cs/supported-networks/kaia/ /cs/supported-networks/kylin/ /cs/supported-networks/lens-testnet/ +/cs/supported-networks/lens/ /cs/supported-networks/linea-sepolia/ /cs/supported-networks/linea/ /cs/supported-networks/litecoin/ @@ -424,6 +435,7 @@ /cs/supported-networks/polygon-amoy/ /cs/supported-networks/polygon-zkevm-cardona/ /cs/supported-networks/polygon-zkevm/ +/cs/supported-networks/ronin/ /cs/supported-networks/rootstock-testnet/ /cs/supported-networks/rootstock/ /cs/supported-networks/scroll-sepolia/ @@ -440,10 +452,13 @@ /cs/supported-networks/sonic/ /cs/supported-networks/starknet-mainnet/ /cs/supported-networks/starknet-testnet/ +/cs/supported-networks/stellar-testnet/ +/cs/supported-networks/stellar/ /cs/supported-networks/swellchain-sepolia/ /cs/supported-networks/swellchain/ /cs/supported-networks/telos-testnet/ /cs/supported-networks/telos/ +/cs/supported-networks/ultra/ /cs/supported-networks/unichain-testnet/ /cs/supported-networks/unichain/ /cs/supported-networks/vana-moksha/ @@ -464,6 +479,7 @@ /cs/token-api/evm/get-ohlc-prices-evm-by-contract/ /cs/token-api/evm/get-tokens-evm-by-contract/ /cs/token-api/evm/get-transfers-evm-by-address/ +/cs/token-api/faq/ /cs/token-api/mcp/claude/ /cs/token-api/mcp/cline/ /cs/token-api/mcp/cursor/ @@ -534,6 +550,7 @@ /de/subgraphs/guides/near/ /de/subgraphs/guides/polymarket/ 
/de/subgraphs/guides/secure-api-keys-nextjs/ +/de/subgraphs/guides/subgraph-composition/ /de/subgraphs/guides/subgraph-debug-forking/ /de/subgraphs/guides/subgraph-uncrashable/ /de/subgraphs/guides/transfer-to-the-graph/ @@ -576,6 +593,7 @@ /de/supported-networks/blast-mainnet/ /de/supported-networks/blast-testnet/ /de/supported-networks/bnb-op/ +/de/supported-networks/bnb-svm/ /de/supported-networks/boba-bnb-testnet/ /de/supported-networks/boba-bnb/ /de/supported-networks/boba-testnet/ @@ -630,6 +648,7 @@ /de/supported-networks/kaia/ /de/supported-networks/kylin/ /de/supported-networks/lens-testnet/ +/de/supported-networks/lens/ /de/supported-networks/linea-sepolia/ /de/supported-networks/linea/ /de/supported-networks/litecoin/ @@ -660,6 +679,7 @@ /de/supported-networks/polygon-amoy/ /de/supported-networks/polygon-zkevm-cardona/ /de/supported-networks/polygon-zkevm/ +/de/supported-networks/ronin/ /de/supported-networks/rootstock-testnet/ /de/supported-networks/rootstock/ /de/supported-networks/scroll-sepolia/ @@ -676,10 +696,13 @@ /de/supported-networks/sonic/ /de/supported-networks/starknet-mainnet/ /de/supported-networks/starknet-testnet/ +/de/supported-networks/stellar-testnet/ +/de/supported-networks/stellar/ /de/supported-networks/swellchain-sepolia/ /de/supported-networks/swellchain/ /de/supported-networks/telos-testnet/ /de/supported-networks/telos/ +/de/supported-networks/ultra/ /de/supported-networks/unichain-testnet/ /de/supported-networks/unichain/ /de/supported-networks/vana-moksha/ @@ -700,6 +723,7 @@ /de/token-api/evm/get-ohlc-prices-evm-by-contract/ /de/token-api/evm/get-tokens-evm-by-contract/ /de/token-api/evm/get-transfers-evm-by-address/ +/de/token-api/faq/ /de/token-api/mcp/claude/ /de/token-api/mcp/cline/ /de/token-api/mcp/cursor/ @@ -770,6 +794,7 @@ /en/subgraphs/guides/near/ /en/subgraphs/guides/polymarket/ /en/subgraphs/guides/secure-api-keys-nextjs/ +/en/subgraphs/guides/subgraph-composition/ /en/subgraphs/guides/subgraph-debug-forking/ 
/en/subgraphs/guides/subgraph-uncrashable/ /en/subgraphs/guides/transfer-to-the-graph/ @@ -812,6 +837,7 @@ /en/supported-networks/blast-mainnet/ /en/supported-networks/blast-testnet/ /en/supported-networks/bnb-op/ +/en/supported-networks/bnb-svm/ /en/supported-networks/boba-bnb-testnet/ /en/supported-networks/boba-bnb/ /en/supported-networks/boba-testnet/ @@ -866,6 +892,7 @@ /en/supported-networks/kaia/ /en/supported-networks/kylin/ /en/supported-networks/lens-testnet/ +/en/supported-networks/lens/ /en/supported-networks/linea-sepolia/ /en/supported-networks/linea/ /en/supported-networks/litecoin/ @@ -896,6 +923,7 @@ /en/supported-networks/polygon-amoy/ /en/supported-networks/polygon-zkevm-cardona/ /en/supported-networks/polygon-zkevm/ +/en/supported-networks/ronin/ /en/supported-networks/rootstock-testnet/ /en/supported-networks/rootstock/ /en/supported-networks/scroll-sepolia/ @@ -912,10 +940,13 @@ /en/supported-networks/sonic/ /en/supported-networks/starknet-mainnet/ /en/supported-networks/starknet-testnet/ +/en/supported-networks/stellar-testnet/ +/en/supported-networks/stellar/ /en/supported-networks/swellchain-sepolia/ /en/supported-networks/swellchain/ /en/supported-networks/telos-testnet/ /en/supported-networks/telos/ +/en/supported-networks/ultra/ /en/supported-networks/unichain-testnet/ /en/supported-networks/unichain/ /en/supported-networks/vana-moksha/ @@ -936,6 +967,7 @@ /en/token-api/evm/get-ohlc-prices-evm-by-contract/ /en/token-api/evm/get-tokens-evm-by-contract/ /en/token-api/evm/get-transfers-evm-by-address/ +/en/token-api/faq/ /en/token-api/mcp/claude/ /en/token-api/mcp/cline/ /en/token-api/mcp/cursor/ @@ -943,7 +975,6 @@ /en/token-api/monitoring/get-networks/ /en/token-api/monitoring/get-version/ /en/token-api/quick-start/ -/en/token-api/token-api-faq/ /es/ /es/404/ /es/about/ @@ -1007,6 +1038,7 @@ /es/subgraphs/guides/near/ /es/subgraphs/guides/polymarket/ /es/subgraphs/guides/secure-api-keys-nextjs/ +/es/subgraphs/guides/subgraph-composition/ 
/es/subgraphs/guides/subgraph-debug-forking/ /es/subgraphs/guides/subgraph-uncrashable/ /es/subgraphs/guides/transfer-to-the-graph/ @@ -1049,6 +1081,7 @@ /es/supported-networks/blast-mainnet/ /es/supported-networks/blast-testnet/ /es/supported-networks/bnb-op/ +/es/supported-networks/bnb-svm/ /es/supported-networks/boba-bnb-testnet/ /es/supported-networks/boba-bnb/ /es/supported-networks/boba-testnet/ @@ -1103,6 +1136,7 @@ /es/supported-networks/kaia/ /es/supported-networks/kylin/ /es/supported-networks/lens-testnet/ +/es/supported-networks/lens/ /es/supported-networks/linea-sepolia/ /es/supported-networks/linea/ /es/supported-networks/litecoin/ @@ -1133,6 +1167,7 @@ /es/supported-networks/polygon-amoy/ /es/supported-networks/polygon-zkevm-cardona/ /es/supported-networks/polygon-zkevm/ +/es/supported-networks/ronin/ /es/supported-networks/rootstock-testnet/ /es/supported-networks/rootstock/ /es/supported-networks/scroll-sepolia/ @@ -1149,10 +1184,13 @@ /es/supported-networks/sonic/ /es/supported-networks/starknet-mainnet/ /es/supported-networks/starknet-testnet/ +/es/supported-networks/stellar-testnet/ +/es/supported-networks/stellar/ /es/supported-networks/swellchain-sepolia/ /es/supported-networks/swellchain/ /es/supported-networks/telos-testnet/ /es/supported-networks/telos/ +/es/supported-networks/ultra/ /es/supported-networks/unichain-testnet/ /es/supported-networks/unichain/ /es/supported-networks/vana-moksha/ @@ -1173,6 +1211,7 @@ /es/token-api/evm/get-ohlc-prices-evm-by-contract/ /es/token-api/evm/get-tokens-evm-by-contract/ /es/token-api/evm/get-transfers-evm-by-address/ +/es/token-api/faq/ /es/token-api/mcp/claude/ /es/token-api/mcp/cline/ /es/token-api/mcp/cursor/ @@ -1243,6 +1282,7 @@ /fr/subgraphs/guides/near/ /fr/subgraphs/guides/polymarket/ /fr/subgraphs/guides/secure-api-keys-nextjs/ +/fr/subgraphs/guides/subgraph-composition/ /fr/subgraphs/guides/subgraph-debug-forking/ /fr/subgraphs/guides/subgraph-uncrashable/ 
/fr/subgraphs/guides/transfer-to-the-graph/ @@ -1285,6 +1325,7 @@ /fr/supported-networks/blast-mainnet/ /fr/supported-networks/blast-testnet/ /fr/supported-networks/bnb-op/ +/fr/supported-networks/bnb-svm/ /fr/supported-networks/boba-bnb-testnet/ /fr/supported-networks/boba-bnb/ /fr/supported-networks/boba-testnet/ @@ -1339,6 +1380,7 @@ /fr/supported-networks/kaia/ /fr/supported-networks/kylin/ /fr/supported-networks/lens-testnet/ +/fr/supported-networks/lens/ /fr/supported-networks/linea-sepolia/ /fr/supported-networks/linea/ /fr/supported-networks/litecoin/ @@ -1369,6 +1411,7 @@ /fr/supported-networks/polygon-amoy/ /fr/supported-networks/polygon-zkevm-cardona/ /fr/supported-networks/polygon-zkevm/ +/fr/supported-networks/ronin/ /fr/supported-networks/rootstock-testnet/ /fr/supported-networks/rootstock/ /fr/supported-networks/scroll-sepolia/ @@ -1385,10 +1428,13 @@ /fr/supported-networks/sonic/ /fr/supported-networks/starknet-mainnet/ /fr/supported-networks/starknet-testnet/ +/fr/supported-networks/stellar-testnet/ +/fr/supported-networks/stellar/ /fr/supported-networks/swellchain-sepolia/ /fr/supported-networks/swellchain/ /fr/supported-networks/telos-testnet/ /fr/supported-networks/telos/ +/fr/supported-networks/ultra/ /fr/supported-networks/unichain-testnet/ /fr/supported-networks/unichain/ /fr/supported-networks/vana-moksha/ @@ -1409,6 +1455,7 @@ /fr/token-api/evm/get-ohlc-prices-evm-by-contract/ /fr/token-api/evm/get-tokens-evm-by-contract/ /fr/token-api/evm/get-transfers-evm-by-address/ +/fr/token-api/faq/ /fr/token-api/mcp/claude/ /fr/token-api/mcp/cline/ /fr/token-api/mcp/cursor/ @@ -1479,6 +1526,7 @@ /hi/subgraphs/guides/near/ /hi/subgraphs/guides/polymarket/ /hi/subgraphs/guides/secure-api-keys-nextjs/ +/hi/subgraphs/guides/subgraph-composition/ /hi/subgraphs/guides/subgraph-debug-forking/ /hi/subgraphs/guides/subgraph-uncrashable/ /hi/subgraphs/guides/transfer-to-the-graph/ @@ -1521,6 +1569,7 @@ /hi/supported-networks/blast-mainnet/ 
/hi/supported-networks/blast-testnet/ /hi/supported-networks/bnb-op/ +/hi/supported-networks/bnb-svm/ /hi/supported-networks/boba-bnb-testnet/ /hi/supported-networks/boba-bnb/ /hi/supported-networks/boba-testnet/ @@ -1575,6 +1624,7 @@ /hi/supported-networks/kaia/ /hi/supported-networks/kylin/ /hi/supported-networks/lens-testnet/ +/hi/supported-networks/lens/ /hi/supported-networks/linea-sepolia/ /hi/supported-networks/linea/ /hi/supported-networks/litecoin/ @@ -1605,6 +1655,7 @@ /hi/supported-networks/polygon-amoy/ /hi/supported-networks/polygon-zkevm-cardona/ /hi/supported-networks/polygon-zkevm/ +/hi/supported-networks/ronin/ /hi/supported-networks/rootstock-testnet/ /hi/supported-networks/rootstock/ /hi/supported-networks/scroll-sepolia/ @@ -1621,10 +1672,13 @@ /hi/supported-networks/sonic/ /hi/supported-networks/starknet-mainnet/ /hi/supported-networks/starknet-testnet/ +/hi/supported-networks/stellar-testnet/ +/hi/supported-networks/stellar/ /hi/supported-networks/swellchain-sepolia/ /hi/supported-networks/swellchain/ /hi/supported-networks/telos-testnet/ /hi/supported-networks/telos/ +/hi/supported-networks/ultra/ /hi/supported-networks/unichain-testnet/ /hi/supported-networks/unichain/ /hi/supported-networks/vana-moksha/ @@ -1645,6 +1699,7 @@ /hi/token-api/evm/get-ohlc-prices-evm-by-contract/ /hi/token-api/evm/get-tokens-evm-by-contract/ /hi/token-api/evm/get-transfers-evm-by-address/ +/hi/token-api/faq/ /hi/token-api/mcp/claude/ /hi/token-api/mcp/cline/ /hi/token-api/mcp/cursor/ @@ -1715,6 +1770,7 @@ /it/subgraphs/guides/near/ /it/subgraphs/guides/polymarket/ /it/subgraphs/guides/secure-api-keys-nextjs/ +/it/subgraphs/guides/subgraph-composition/ /it/subgraphs/guides/subgraph-debug-forking/ /it/subgraphs/guides/subgraph-uncrashable/ /it/subgraphs/guides/transfer-to-the-graph/ @@ -1757,6 +1813,7 @@ /it/supported-networks/blast-mainnet/ /it/supported-networks/blast-testnet/ /it/supported-networks/bnb-op/ +/it/supported-networks/bnb-svm/ 
/it/supported-networks/boba-bnb-testnet/ /it/supported-networks/boba-bnb/ /it/supported-networks/boba-testnet/ @@ -1811,6 +1868,7 @@ /it/supported-networks/kaia/ /it/supported-networks/kylin/ /it/supported-networks/lens-testnet/ +/it/supported-networks/lens/ /it/supported-networks/linea-sepolia/ /it/supported-networks/linea/ /it/supported-networks/litecoin/ @@ -1841,6 +1899,7 @@ /it/supported-networks/polygon-amoy/ /it/supported-networks/polygon-zkevm-cardona/ /it/supported-networks/polygon-zkevm/ +/it/supported-networks/ronin/ /it/supported-networks/rootstock-testnet/ /it/supported-networks/rootstock/ /it/supported-networks/scroll-sepolia/ @@ -1857,10 +1916,13 @@ /it/supported-networks/sonic/ /it/supported-networks/starknet-mainnet/ /it/supported-networks/starknet-testnet/ +/it/supported-networks/stellar-testnet/ +/it/supported-networks/stellar/ /it/supported-networks/swellchain-sepolia/ /it/supported-networks/swellchain/ /it/supported-networks/telos-testnet/ /it/supported-networks/telos/ +/it/supported-networks/ultra/ /it/supported-networks/unichain-testnet/ /it/supported-networks/unichain/ /it/supported-networks/vana-moksha/ @@ -1881,6 +1943,7 @@ /it/token-api/evm/get-ohlc-prices-evm-by-contract/ /it/token-api/evm/get-tokens-evm-by-contract/ /it/token-api/evm/get-transfers-evm-by-address/ +/it/token-api/faq/ /it/token-api/mcp/claude/ /it/token-api/mcp/cline/ /it/token-api/mcp/cursor/ @@ -1951,6 +2014,7 @@ /ja/subgraphs/guides/near/ /ja/subgraphs/guides/polymarket/ /ja/subgraphs/guides/secure-api-keys-nextjs/ +/ja/subgraphs/guides/subgraph-composition/ /ja/subgraphs/guides/subgraph-debug-forking/ /ja/subgraphs/guides/subgraph-uncrashable/ /ja/subgraphs/guides/transfer-to-the-graph/ @@ -1993,6 +2057,7 @@ /ja/supported-networks/blast-mainnet/ /ja/supported-networks/blast-testnet/ /ja/supported-networks/bnb-op/ +/ja/supported-networks/bnb-svm/ /ja/supported-networks/boba-bnb-testnet/ /ja/supported-networks/boba-bnb/ /ja/supported-networks/boba-testnet/ @@ -2047,6 
+2112,7 @@ /ja/supported-networks/kaia/ /ja/supported-networks/kylin/ /ja/supported-networks/lens-testnet/ +/ja/supported-networks/lens/ /ja/supported-networks/linea-sepolia/ /ja/supported-networks/linea/ /ja/supported-networks/litecoin/ @@ -2077,6 +2143,7 @@ /ja/supported-networks/polygon-amoy/ /ja/supported-networks/polygon-zkevm-cardona/ /ja/supported-networks/polygon-zkevm/ +/ja/supported-networks/ronin/ /ja/supported-networks/rootstock-testnet/ /ja/supported-networks/rootstock/ /ja/supported-networks/scroll-sepolia/ @@ -2093,10 +2160,13 @@ /ja/supported-networks/sonic/ /ja/supported-networks/starknet-mainnet/ /ja/supported-networks/starknet-testnet/ +/ja/supported-networks/stellar-testnet/ +/ja/supported-networks/stellar/ /ja/supported-networks/swellchain-sepolia/ /ja/supported-networks/swellchain/ /ja/supported-networks/telos-testnet/ /ja/supported-networks/telos/ +/ja/supported-networks/ultra/ /ja/supported-networks/unichain-testnet/ /ja/supported-networks/unichain/ /ja/supported-networks/vana-moksha/ @@ -2117,6 +2187,7 @@ /ja/token-api/evm/get-ohlc-prices-evm-by-contract/ /ja/token-api/evm/get-tokens-evm-by-contract/ /ja/token-api/evm/get-transfers-evm-by-address/ +/ja/token-api/faq/ /ja/token-api/mcp/claude/ /ja/token-api/mcp/cline/ /ja/token-api/mcp/cursor/ @@ -2185,6 +2256,7 @@ /ko/subgraphs/guides/near/ /ko/subgraphs/guides/polymarket/ /ko/subgraphs/guides/secure-api-keys-nextjs/ +/ko/subgraphs/guides/subgraph-composition/ /ko/subgraphs/guides/subgraph-debug-forking/ /ko/subgraphs/guides/subgraph-uncrashable/ /ko/subgraphs/guides/transfer-to-the-graph/ @@ -2213,6 +2285,7 @@ /ko/token-api/evm/get-ohlc-prices-evm-by-contract/ /ko/token-api/evm/get-tokens-evm-by-contract/ /ko/token-api/evm/get-transfers-evm-by-address/ +/ko/token-api/faq/ /ko/token-api/mcp/claude/ /ko/token-api/mcp/cline/ /ko/token-api/mcp/cursor/ @@ -2283,6 +2356,7 @@ /mr/subgraphs/guides/near/ /mr/subgraphs/guides/polymarket/ /mr/subgraphs/guides/secure-api-keys-nextjs/ 
+/mr/subgraphs/guides/subgraph-composition/ /mr/subgraphs/guides/subgraph-debug-forking/ /mr/subgraphs/guides/subgraph-uncrashable/ /mr/subgraphs/guides/transfer-to-the-graph/ @@ -2325,6 +2399,7 @@ /mr/supported-networks/blast-mainnet/ /mr/supported-networks/blast-testnet/ /mr/supported-networks/bnb-op/ +/mr/supported-networks/bnb-svm/ /mr/supported-networks/boba-bnb-testnet/ /mr/supported-networks/boba-bnb/ /mr/supported-networks/boba-testnet/ @@ -2379,6 +2454,7 @@ /mr/supported-networks/kaia/ /mr/supported-networks/kylin/ /mr/supported-networks/lens-testnet/ +/mr/supported-networks/lens/ /mr/supported-networks/linea-sepolia/ /mr/supported-networks/linea/ /mr/supported-networks/litecoin/ @@ -2409,6 +2485,7 @@ /mr/supported-networks/polygon-amoy/ /mr/supported-networks/polygon-zkevm-cardona/ /mr/supported-networks/polygon-zkevm/ +/mr/supported-networks/ronin/ /mr/supported-networks/rootstock-testnet/ /mr/supported-networks/rootstock/ /mr/supported-networks/scroll-sepolia/ @@ -2425,10 +2502,13 @@ /mr/supported-networks/sonic/ /mr/supported-networks/starknet-mainnet/ /mr/supported-networks/starknet-testnet/ +/mr/supported-networks/stellar-testnet/ +/mr/supported-networks/stellar/ /mr/supported-networks/swellchain-sepolia/ /mr/supported-networks/swellchain/ /mr/supported-networks/telos-testnet/ /mr/supported-networks/telos/ +/mr/supported-networks/ultra/ /mr/supported-networks/unichain-testnet/ /mr/supported-networks/unichain/ /mr/supported-networks/vana-moksha/ @@ -2449,6 +2529,7 @@ /mr/token-api/evm/get-ohlc-prices-evm-by-contract/ /mr/token-api/evm/get-tokens-evm-by-contract/ /mr/token-api/evm/get-transfers-evm-by-address/ +/mr/token-api/faq/ /mr/token-api/mcp/claude/ /mr/token-api/mcp/cline/ /mr/token-api/mcp/cursor/ @@ -2517,6 +2598,7 @@ /nl/subgraphs/guides/near/ /nl/subgraphs/guides/polymarket/ /nl/subgraphs/guides/secure-api-keys-nextjs/ +/nl/subgraphs/guides/subgraph-composition/ /nl/subgraphs/guides/subgraph-debug-forking/ 
/nl/subgraphs/guides/subgraph-uncrashable/ /nl/subgraphs/guides/transfer-to-the-graph/ @@ -2545,6 +2627,7 @@ /nl/token-api/evm/get-ohlc-prices-evm-by-contract/ /nl/token-api/evm/get-tokens-evm-by-contract/ /nl/token-api/evm/get-transfers-evm-by-address/ +/nl/token-api/faq/ /nl/token-api/mcp/claude/ /nl/token-api/mcp/cline/ /nl/token-api/mcp/cursor/ @@ -2613,6 +2696,7 @@ /pl/subgraphs/guides/near/ /pl/subgraphs/guides/polymarket/ /pl/subgraphs/guides/secure-api-keys-nextjs/ +/pl/subgraphs/guides/subgraph-composition/ /pl/subgraphs/guides/subgraph-debug-forking/ /pl/subgraphs/guides/subgraph-uncrashable/ /pl/subgraphs/guides/transfer-to-the-graph/ @@ -2641,6 +2725,7 @@ /pl/token-api/evm/get-ohlc-prices-evm-by-contract/ /pl/token-api/evm/get-tokens-evm-by-contract/ /pl/token-api/evm/get-transfers-evm-by-address/ +/pl/token-api/faq/ /pl/token-api/mcp/claude/ /pl/token-api/mcp/cline/ /pl/token-api/mcp/cursor/ @@ -2711,6 +2796,7 @@ /pt/subgraphs/guides/near/ /pt/subgraphs/guides/polymarket/ /pt/subgraphs/guides/secure-api-keys-nextjs/ +/pt/subgraphs/guides/subgraph-composition/ /pt/subgraphs/guides/subgraph-debug-forking/ /pt/subgraphs/guides/subgraph-uncrashable/ /pt/subgraphs/guides/transfer-to-the-graph/ @@ -2753,6 +2839,7 @@ /pt/supported-networks/blast-mainnet/ /pt/supported-networks/blast-testnet/ /pt/supported-networks/bnb-op/ +/pt/supported-networks/bnb-svm/ /pt/supported-networks/boba-bnb-testnet/ /pt/supported-networks/boba-bnb/ /pt/supported-networks/boba-testnet/ @@ -2807,6 +2894,7 @@ /pt/supported-networks/kaia/ /pt/supported-networks/kylin/ /pt/supported-networks/lens-testnet/ +/pt/supported-networks/lens/ /pt/supported-networks/linea-sepolia/ /pt/supported-networks/linea/ /pt/supported-networks/litecoin/ @@ -2837,6 +2925,7 @@ /pt/supported-networks/polygon-amoy/ /pt/supported-networks/polygon-zkevm-cardona/ /pt/supported-networks/polygon-zkevm/ +/pt/supported-networks/ronin/ /pt/supported-networks/rootstock-testnet/ /pt/supported-networks/rootstock/ 
/pt/supported-networks/scroll-sepolia/ @@ -2853,10 +2942,13 @@ /pt/supported-networks/sonic/ /pt/supported-networks/starknet-mainnet/ /pt/supported-networks/starknet-testnet/ +/pt/supported-networks/stellar-testnet/ +/pt/supported-networks/stellar/ /pt/supported-networks/swellchain-sepolia/ /pt/supported-networks/swellchain/ /pt/supported-networks/telos-testnet/ /pt/supported-networks/telos/ +/pt/supported-networks/ultra/ /pt/supported-networks/unichain-testnet/ /pt/supported-networks/unichain/ /pt/supported-networks/vana-moksha/ @@ -2877,6 +2969,7 @@ /pt/token-api/evm/get-ohlc-prices-evm-by-contract/ /pt/token-api/evm/get-tokens-evm-by-contract/ /pt/token-api/evm/get-transfers-evm-by-address/ +/pt/token-api/faq/ /pt/token-api/mcp/claude/ /pt/token-api/mcp/cline/ /pt/token-api/mcp/cursor/ @@ -2945,6 +3038,7 @@ /ro/subgraphs/guides/near/ /ro/subgraphs/guides/polymarket/ /ro/subgraphs/guides/secure-api-keys-nextjs/ +/ro/subgraphs/guides/subgraph-composition/ /ro/subgraphs/guides/subgraph-debug-forking/ /ro/subgraphs/guides/subgraph-uncrashable/ /ro/subgraphs/guides/transfer-to-the-graph/ @@ -2973,6 +3067,7 @@ /ro/token-api/evm/get-ohlc-prices-evm-by-contract/ /ro/token-api/evm/get-tokens-evm-by-contract/ /ro/token-api/evm/get-transfers-evm-by-address/ +/ro/token-api/faq/ /ro/token-api/mcp/claude/ /ro/token-api/mcp/cline/ /ro/token-api/mcp/cursor/ @@ -3043,6 +3138,7 @@ /ru/subgraphs/guides/near/ /ru/subgraphs/guides/polymarket/ /ru/subgraphs/guides/secure-api-keys-nextjs/ +/ru/subgraphs/guides/subgraph-composition/ /ru/subgraphs/guides/subgraph-debug-forking/ /ru/subgraphs/guides/subgraph-uncrashable/ /ru/subgraphs/guides/transfer-to-the-graph/ @@ -3085,6 +3181,7 @@ /ru/supported-networks/blast-mainnet/ /ru/supported-networks/blast-testnet/ /ru/supported-networks/bnb-op/ +/ru/supported-networks/bnb-svm/ /ru/supported-networks/boba-bnb-testnet/ /ru/supported-networks/boba-bnb/ /ru/supported-networks/boba-testnet/ @@ -3139,6 +3236,7 @@ /ru/supported-networks/kaia/ 
/ru/supported-networks/kylin/ /ru/supported-networks/lens-testnet/ +/ru/supported-networks/lens/ /ru/supported-networks/linea-sepolia/ /ru/supported-networks/linea/ /ru/supported-networks/litecoin/ @@ -3169,6 +3267,7 @@ /ru/supported-networks/polygon-amoy/ /ru/supported-networks/polygon-zkevm-cardona/ /ru/supported-networks/polygon-zkevm/ +/ru/supported-networks/ronin/ /ru/supported-networks/rootstock-testnet/ /ru/supported-networks/rootstock/ /ru/supported-networks/scroll-sepolia/ @@ -3185,10 +3284,13 @@ /ru/supported-networks/sonic/ /ru/supported-networks/starknet-mainnet/ /ru/supported-networks/starknet-testnet/ +/ru/supported-networks/stellar-testnet/ +/ru/supported-networks/stellar/ /ru/supported-networks/swellchain-sepolia/ /ru/supported-networks/swellchain/ /ru/supported-networks/telos-testnet/ /ru/supported-networks/telos/ +/ru/supported-networks/ultra/ /ru/supported-networks/unichain-testnet/ /ru/supported-networks/unichain/ /ru/supported-networks/vana-moksha/ @@ -3209,6 +3311,7 @@ /ru/token-api/evm/get-ohlc-prices-evm-by-contract/ /ru/token-api/evm/get-tokens-evm-by-contract/ /ru/token-api/evm/get-transfers-evm-by-address/ +/ru/token-api/faq/ /ru/token-api/mcp/claude/ /ru/token-api/mcp/cline/ /ru/token-api/mcp/cursor/ @@ -3279,6 +3382,7 @@ /sv/subgraphs/guides/near/ /sv/subgraphs/guides/polymarket/ /sv/subgraphs/guides/secure-api-keys-nextjs/ +/sv/subgraphs/guides/subgraph-composition/ /sv/subgraphs/guides/subgraph-debug-forking/ /sv/subgraphs/guides/subgraph-uncrashable/ /sv/subgraphs/guides/transfer-to-the-graph/ @@ -3321,6 +3425,7 @@ /sv/supported-networks/blast-mainnet/ /sv/supported-networks/blast-testnet/ /sv/supported-networks/bnb-op/ +/sv/supported-networks/bnb-svm/ /sv/supported-networks/boba-bnb-testnet/ /sv/supported-networks/boba-bnb/ /sv/supported-networks/boba-testnet/ @@ -3375,6 +3480,7 @@ /sv/supported-networks/kaia/ /sv/supported-networks/kylin/ /sv/supported-networks/lens-testnet/ +/sv/supported-networks/lens/ 
/sv/supported-networks/linea-sepolia/ /sv/supported-networks/linea/ /sv/supported-networks/litecoin/ @@ -3405,6 +3511,7 @@ /sv/supported-networks/polygon-amoy/ /sv/supported-networks/polygon-zkevm-cardona/ /sv/supported-networks/polygon-zkevm/ +/sv/supported-networks/ronin/ /sv/supported-networks/rootstock-testnet/ /sv/supported-networks/rootstock/ /sv/supported-networks/scroll-sepolia/ @@ -3421,10 +3528,13 @@ /sv/supported-networks/sonic/ /sv/supported-networks/starknet-mainnet/ /sv/supported-networks/starknet-testnet/ +/sv/supported-networks/stellar-testnet/ +/sv/supported-networks/stellar/ /sv/supported-networks/swellchain-sepolia/ /sv/supported-networks/swellchain/ /sv/supported-networks/telos-testnet/ /sv/supported-networks/telos/ +/sv/supported-networks/ultra/ /sv/supported-networks/unichain-testnet/ /sv/supported-networks/unichain/ /sv/supported-networks/vana-moksha/ @@ -3445,6 +3555,7 @@ /sv/token-api/evm/get-ohlc-prices-evm-by-contract/ /sv/token-api/evm/get-tokens-evm-by-contract/ /sv/token-api/evm/get-transfers-evm-by-address/ +/sv/token-api/faq/ /sv/token-api/mcp/claude/ /sv/token-api/mcp/cline/ /sv/token-api/mcp/cursor/ @@ -3515,6 +3626,7 @@ /tr/subgraphs/guides/near/ /tr/subgraphs/guides/polymarket/ /tr/subgraphs/guides/secure-api-keys-nextjs/ +/tr/subgraphs/guides/subgraph-composition/ /tr/subgraphs/guides/subgraph-debug-forking/ /tr/subgraphs/guides/subgraph-uncrashable/ /tr/subgraphs/guides/transfer-to-the-graph/ @@ -3557,6 +3669,7 @@ /tr/supported-networks/blast-mainnet/ /tr/supported-networks/blast-testnet/ /tr/supported-networks/bnb-op/ +/tr/supported-networks/bnb-svm/ /tr/supported-networks/boba-bnb-testnet/ /tr/supported-networks/boba-bnb/ /tr/supported-networks/boba-testnet/ @@ -3611,6 +3724,7 @@ /tr/supported-networks/kaia/ /tr/supported-networks/kylin/ /tr/supported-networks/lens-testnet/ +/tr/supported-networks/lens/ /tr/supported-networks/linea-sepolia/ /tr/supported-networks/linea/ /tr/supported-networks/litecoin/ @@ -3641,6 +3755,7 @@ 
/tr/supported-networks/polygon-amoy/ /tr/supported-networks/polygon-zkevm-cardona/ /tr/supported-networks/polygon-zkevm/ +/tr/supported-networks/ronin/ /tr/supported-networks/rootstock-testnet/ /tr/supported-networks/rootstock/ /tr/supported-networks/scroll-sepolia/ @@ -3657,10 +3772,13 @@ /tr/supported-networks/sonic/ /tr/supported-networks/starknet-mainnet/ /tr/supported-networks/starknet-testnet/ +/tr/supported-networks/stellar-testnet/ +/tr/supported-networks/stellar/ /tr/supported-networks/swellchain-sepolia/ /tr/supported-networks/swellchain/ /tr/supported-networks/telos-testnet/ /tr/supported-networks/telos/ +/tr/supported-networks/ultra/ /tr/supported-networks/unichain-testnet/ /tr/supported-networks/unichain/ /tr/supported-networks/vana-moksha/ @@ -3681,6 +3799,7 @@ /tr/token-api/evm/get-ohlc-prices-evm-by-contract/ /tr/token-api/evm/get-tokens-evm-by-contract/ /tr/token-api/evm/get-transfers-evm-by-address/ +/tr/token-api/faq/ /tr/token-api/mcp/claude/ /tr/token-api/mcp/cline/ /tr/token-api/mcp/cursor/ @@ -3749,6 +3868,7 @@ /uk/subgraphs/guides/near/ /uk/subgraphs/guides/polymarket/ /uk/subgraphs/guides/secure-api-keys-nextjs/ +/uk/subgraphs/guides/subgraph-composition/ /uk/subgraphs/guides/subgraph-debug-forking/ /uk/subgraphs/guides/subgraph-uncrashable/ /uk/subgraphs/guides/transfer-to-the-graph/ @@ -3777,6 +3897,7 @@ /uk/token-api/evm/get-ohlc-prices-evm-by-contract/ /uk/token-api/evm/get-tokens-evm-by-contract/ /uk/token-api/evm/get-transfers-evm-by-address/ +/uk/token-api/faq/ /uk/token-api/mcp/claude/ /uk/token-api/mcp/cline/ /uk/token-api/mcp/cursor/ @@ -3847,6 +3968,7 @@ /ur/subgraphs/guides/near/ /ur/subgraphs/guides/polymarket/ /ur/subgraphs/guides/secure-api-keys-nextjs/ +/ur/subgraphs/guides/subgraph-composition/ /ur/subgraphs/guides/subgraph-debug-forking/ /ur/subgraphs/guides/subgraph-uncrashable/ /ur/subgraphs/guides/transfer-to-the-graph/ @@ -3889,6 +4011,7 @@ /ur/supported-networks/blast-mainnet/ /ur/supported-networks/blast-testnet/ 
/ur/supported-networks/bnb-op/ +/ur/supported-networks/bnb-svm/ /ur/supported-networks/boba-bnb-testnet/ /ur/supported-networks/boba-bnb/ /ur/supported-networks/boba-testnet/ @@ -3943,6 +4066,7 @@ /ur/supported-networks/kaia/ /ur/supported-networks/kylin/ /ur/supported-networks/lens-testnet/ +/ur/supported-networks/lens/ /ur/supported-networks/linea-sepolia/ /ur/supported-networks/linea/ /ur/supported-networks/litecoin/ @@ -3973,6 +4097,7 @@ /ur/supported-networks/polygon-amoy/ /ur/supported-networks/polygon-zkevm-cardona/ /ur/supported-networks/polygon-zkevm/ +/ur/supported-networks/ronin/ /ur/supported-networks/rootstock-testnet/ /ur/supported-networks/rootstock/ /ur/supported-networks/scroll-sepolia/ @@ -3989,10 +4114,13 @@ /ur/supported-networks/sonic/ /ur/supported-networks/starknet-mainnet/ /ur/supported-networks/starknet-testnet/ +/ur/supported-networks/stellar-testnet/ +/ur/supported-networks/stellar/ /ur/supported-networks/swellchain-sepolia/ /ur/supported-networks/swellchain/ /ur/supported-networks/telos-testnet/ /ur/supported-networks/telos/ +/ur/supported-networks/ultra/ /ur/supported-networks/unichain-testnet/ /ur/supported-networks/unichain/ /ur/supported-networks/vana-moksha/ @@ -4013,6 +4141,7 @@ /ur/token-api/evm/get-ohlc-prices-evm-by-contract/ /ur/token-api/evm/get-tokens-evm-by-contract/ /ur/token-api/evm/get-transfers-evm-by-address/ +/ur/token-api/faq/ /ur/token-api/mcp/claude/ /ur/token-api/mcp/cline/ /ur/token-api/mcp/cursor/ @@ -4081,6 +4210,7 @@ /vi/subgraphs/guides/near/ /vi/subgraphs/guides/polymarket/ /vi/subgraphs/guides/secure-api-keys-nextjs/ +/vi/subgraphs/guides/subgraph-composition/ /vi/subgraphs/guides/subgraph-debug-forking/ /vi/subgraphs/guides/subgraph-uncrashable/ /vi/subgraphs/guides/transfer-to-the-graph/ @@ -4109,6 +4239,7 @@ /vi/token-api/evm/get-ohlc-prices-evm-by-contract/ /vi/token-api/evm/get-tokens-evm-by-contract/ /vi/token-api/evm/get-transfers-evm-by-address/ +/vi/token-api/faq/ /vi/token-api/mcp/claude/ 
/vi/token-api/mcp/cline/ /vi/token-api/mcp/cursor/ @@ -4179,6 +4310,7 @@ /zh/subgraphs/guides/near/ /zh/subgraphs/guides/polymarket/ /zh/subgraphs/guides/secure-api-keys-nextjs/ +/zh/subgraphs/guides/subgraph-composition/ /zh/subgraphs/guides/subgraph-debug-forking/ /zh/subgraphs/guides/subgraph-uncrashable/ /zh/subgraphs/guides/transfer-to-the-graph/ @@ -4221,6 +4353,7 @@ /zh/supported-networks/blast-mainnet/ /zh/supported-networks/blast-testnet/ /zh/supported-networks/bnb-op/ +/zh/supported-networks/bnb-svm/ /zh/supported-networks/boba-bnb-testnet/ /zh/supported-networks/boba-bnb/ /zh/supported-networks/boba-testnet/ @@ -4275,6 +4408,7 @@ /zh/supported-networks/kaia/ /zh/supported-networks/kylin/ /zh/supported-networks/lens-testnet/ +/zh/supported-networks/lens/ /zh/supported-networks/linea-sepolia/ /zh/supported-networks/linea/ /zh/supported-networks/litecoin/ @@ -4305,6 +4439,7 @@ /zh/supported-networks/polygon-amoy/ /zh/supported-networks/polygon-zkevm-cardona/ /zh/supported-networks/polygon-zkevm/ +/zh/supported-networks/ronin/ /zh/supported-networks/rootstock-testnet/ /zh/supported-networks/rootstock/ /zh/supported-networks/scroll-sepolia/ @@ -4321,10 +4456,13 @@ /zh/supported-networks/sonic/ /zh/supported-networks/starknet-mainnet/ /zh/supported-networks/starknet-testnet/ +/zh/supported-networks/stellar-testnet/ +/zh/supported-networks/stellar/ /zh/supported-networks/swellchain-sepolia/ /zh/supported-networks/swellchain/ /zh/supported-networks/telos-testnet/ /zh/supported-networks/telos/ +/zh/supported-networks/ultra/ /zh/supported-networks/unichain-testnet/ /zh/supported-networks/unichain/ /zh/supported-networks/vana-moksha/ @@ -4345,6 +4483,7 @@ /zh/token-api/evm/get-ohlc-prices-evm-by-contract/ /zh/token-api/evm/get-tokens-evm-by-contract/ /zh/token-api/evm/get-transfers-evm-by-address/ +/zh/token-api/faq/ /zh/token-api/mcp/claude/ /zh/token-api/mcp/cline/ /zh/token-api/mcp/cursor/ diff --git a/website/src/pages/ar/about.mdx 
b/website/src/pages/ar/about.mdx index 8005f34aef5f..93dbeb51f658 100644 --- a/website/src/pages/ar/about.mdx +++ b/website/src/pages/ar/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
+- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte 1. A dapp adds data to Ethereum through a transaction on a smart contract. 2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء. -3. يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك. -4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. 
The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## الخطوات التالية -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx index 898175b05cad..e1dbbea03383 100644 --- a/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. 
Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -39,7 +39,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? 
-All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx index 9c949027b41f..965c96f7355a 100644 --- a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con The L2 Transfer Tools use Arbitrum’s native mechanism to send messages from L1 to L2. This mechanism is called a “retryable ticket” and is used by all native token bridges, including the Arbitrum GRT bridge. You can read more about retryable tickets in the [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -When you transfer your assets (subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. 
The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there help you. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. 
The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## نقل الـ Subgraph (الرسم البياني الفرعي) -### كيفكيف أقوم بتحويل الـ subgraph الخاص بي؟ +### How do I transfer my Subgraph? -لنقل الـ subgraph الخاص بك ، ستحتاج إلى إكمال الخطوات التالية: +To transfer your Subgraph, you will need to complete the following steps: 1. ابدأ التحويل على شبكة Ethereum mainnet 2. انتظر 20 دقيقة للتأكيد -3. قم بتأكيد نقل الـ subgraph على Arbitrum \ \* +3. Confirm Subgraph transfer on Arbitrum\* -4. قم بإنهاء نشر الـ subgraph على Arbitrum +4. Finish publishing the Subgraph on Arbitrum 5. جدث عنوان URL للاستعلام (مستحسن) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum.
If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### من أين يجب أن أبدأ التحويل ؟ -يمكنك بدء عملية النقل من [Subgraph Studio] (https://thegraph.com/studio/) ، [Explorer ،] (https://thegraph.com/explorer) أو من أي صفحة تفاصيل subgraph. انقر فوق الزر "Transfer Subgraph" في صفحة تفاصيل الرسم الـ subgraph لبدء النقل. +You can initiate your transfer from [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button on the Subgraph details page to start the transfer. -### كم من الوقت سأنتظر حتى يتم نقل الـ subgraph الخاص بي +### How long do I need to wait until my Subgraph is transferred? يستغرق وقت النقل حوالي 20 دقيقة. يعمل جسر Arbitrum في الخلفية لإكمال نقل الجسر تلقائيًا. في بعض الحالات ، قد ترتفع تكاليف الغاز وستحتاج إلى تأكيد المعاملة مرة أخرى. -### هل سيظل الـ subgraph قابلاً للاكتشاف بعد أن أنقله إلى L2؟ +### Will my Subgraph still be discoverable after I transfer it to L2? -سيكون الـ subgraph الخاص بك قابلاً للاكتشاف على الشبكة التي تم نشرها عليها فقط. على سبيل المثال ، إذا كان الـ subgraph الخاص بك موجودًا على Arbitrum One ، فيمكنك العثور عليه فقط في Explorer على Arbitrum One ولن تتمكن من العثور عليه على Ethereum. يرجى التأكد من تحديد Arbitrum One في مبدل الشبكة في أعلى الصفحة للتأكد من أنك على الشبكة الصحيحة. بعد النقل ، سيظهر الـ L1 subgraph على أنه مهمل. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.
-### هل يلزم نشر الـ subgraph الخاص بي لنقله؟ +### Does my Subgraph need to be published to transfer it? -للاستفادة من أداة نقل الـ subgraph ، يجب أن يكون الرسم البياني الفرعي الخاص بك قد تم نشره بالفعل على شبكة Ethereum الرئيسية ويجب أن يكون لديه إشارة تنسيق مملوكة للمحفظة التي تمتلك الرسم البياني الفرعي. إذا لم يتم نشر الرسم البياني الفرعي الخاص بك ، فمن المستحسن أن تقوم ببساطة بالنشر مباشرة على Arbitrum One - ستكون رسوم الغاز أقل بكثير. إذا كنت تريد نقل رسم بياني فرعي منشور ولكن حساب المالك لا يملك إشارة تنسيق عليه ، فيمكنك الإشارة بمبلغ صغير (على سبيل المثال 1 GRT) من ذلك الحساب ؛ تأكد من اختيار إشارة "auto-migrating". +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### ماذا يحدث لإصدار Ethereum mainnet للرسم البياني الفرعي الخاص بي بعد أن النقل إلى Arbitrum؟ +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -بعد نقل الرسم البياني الفرعي الخاص بك إلى Arbitrum ، سيتم إهمال إصدار Ethereum mainnet. نوصي بتحديث عنوان URL للاستعلام في غضون 48 ساعة. ومع ذلك ، هناك فترة سماح تحافظ على عمل عنوان URL للشبكة الرئيسية الخاصة بك بحيث يمكن تحديث أي دعم dapp لجهة خارجية. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. 
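The hunk above recommends updating a dapp's query URL within 48 hours of a transfer, relying only temporarily on the grace period. As a rough illustration of that swap, here is a small TypeScript sketch; the hostname pair mirrors the `arbitrum-gateway.thegraph.com` gateway named later in this patch, and the helper itself is hypothetical, not part of any official tooling:

```typescript
// Hypothetical helper: rewrite an L1 gateway query URL to its L2 equivalent.
// Hostnames are assumptions modeled on the gateways mentioned in these docs.
function toL2QueryUrl(l1Url: string): string {
  const url = new URL(l1Url);
  if (url.hostname === "gateway.thegraph.com") {
    // Only the host changes; API key and deployment ID stay in the path.
    url.hostname = "arbitrum-gateway.thegraph.com";
  }
  return url.toString();
}

// Example: point an existing dapp endpoint at the L2 gateway
// (API_KEY and SUBGRAPH_ID are placeholders).
const l1Endpoint = "https://gateway.thegraph.com/api/API_KEY/subgraphs/id/SUBGRAPH_ID";
const l2Endpoint = toL2QueryUrl(l1Endpoint);
```

Because the grace period keeps the mainnet URL responding for a while, a dapp can roll a change like this out gradually, but queries should land on the L2 URL well before the grace period ends.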
### بعد النقل ، هل أحتاج أيضًا إلى إعادة النشر على Arbitrum؟ @@ -80,21 +80,21 @@ If you have the L1 transaction hash (which you can find by looking at the recent ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### هل يتم نشر وتخطيط الإصدار بنفس الطريقة في الـ L2 كما هو الحال في شبكة Ethereum Ethereum mainnet؟ -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### هل سينتقل تنسيق الـ subgraph مع الـ subgraph ؟ +### Will my Subgraph's curation move with my Subgraph? -إذا اخترت إشارة الترحيل التلقائي auto-migrating ، فسيتم نقل 100٪ من التنسيق مع الرسم البياني الفرعي الخاص بك إلى Arbitrum One. سيتم تحويل كل إشارة التنسيق الخاصة بالرسم الفرعي إلى GRT في وقت النقل ، وسيتم استخدام GRT المقابل لإشارة التنسيق الخاصة بك لصك الإشارة على L2 subgraph. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -يمكن للمنسقين الآخرين اختيار ما إذا كانوا سيسحبون أجزاء من GRT ، أو ينقلونه أيضًا إلى L2 لإنتاج إشارة على نفس الرسم البياني الفرعي. 
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### هل يمكنني إعادة الرسم البياني الفرعي الخاص بي إلى Ethereum mainnet بعد أن أقوم بالنقل؟ +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -بمجرد النقل ، سيتم إهمال إصدار شبكة Ethereum mainnet للرسم البياني الفرعي الخاص بك. إذا كنت ترغب في العودة إلى mainnet ، فستحتاج إلى إعادة النشر (redeploy) والنشر مرة أخرى على mainnet. ومع ذلك ، لا يُنصح بشدة بالتحويل مرة أخرى إلى شبكة Ethereum mainnet حيث سيتم في النهاية توزيع مكافآت الفهرسة بالكامل على Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### لماذا أحتاج إلى Bridged ETH لإكمال النقل؟ @@ -206,19 +206,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans \ \* إذا لزم الأمر -أنت تستخدم عنوان عقد. -### كيف سأعرف ما إذا كان الرسم البياني الفرعي الذي قمت بعمل إشارة تنسيق عليه قد انتقل إلى L2؟ +### How will I know if the Subgraph I curated has moved to L2? -عند عرض صفحة تفاصيل الرسم البياني الفرعي ، ستعلمك لافتة بأنه تم نقل هذا الرسم البياني الفرعي. يمكنك اتباع التعليمات لنقل إشارة التنسيق الخاص بك. يمكنك أيضًا العثور على هذه المعلومات في صفحة تفاصيل الرسم البياني الفرعي لأي رسم بياني فرعي تم نقله. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### ماذا لو كنت لا أرغب في نقل إشارة التنسيق الخاص بي إلى L2؟ -عندما يتم إهمال الرسم البياني الفرعي ، يكون لديك خيار سحب الإشارة. 
وبالمثل ، إذا انتقل الرسم البياني الفرعي إلى L2 ، فيمكنك اختيار سحب الإشارة في شبكة Ethereum الرئيسية أو إرسال الإشارة إلى L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### كيف أعرف أنه تم نقل إشارة التنسيق بنجاح؟ يمكن الوصول إلى تفاصيل الإشارة عبر Explorer بعد حوالي 20 دقيقة من بدء أداة النقل للـ L2. -### هل يمكنني نقل إشاة التنسيق الخاص بي على أكثر من رسم بياني فرعي في وقت واحد؟ +### Can I transfer my curation on more than one Subgraph at a time? لا يوجد خيار كهذا حالياً. @@ -266,7 +266,7 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans ### هل يجب أن أقوم بالفهرسة على Arbitrum قبل أن أنقل حصتي؟ -يمكنك تحويل حصتك بشكل فعال أولاً قبل إعداد الفهرسة ، ولكن لن تتمكن من المطالبة بأي مكافآت على L2 حتى تقوم بتخصيصها لـ subgraphs على L2 وفهرستها وعرض POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### هل يستطيع المفوضون نقل تفويضهم قبل نقل indexing stake الخاص بي؟ diff --git a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx index af5a133538d6..5863ff2de0a2 100644 --- a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ title: L2 Transfer Tools Guide Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
-## كيف تنقل الغراف الفرعي الخاص بك إلى شبكة آربترم (الطبقة الثانية) +## How to transfer your Subgraph to Arbitrum (L2) -## فوائد نقل الغراف الفرعي الخاصة بك +## Benefits of transferring your Subgraphs مجتمع الغراف والمطورون الأساسيون كانوا [يستعدون] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) للإنتقال إلى آربترم على مدى العام الماضي. وتعتبر آربترم سلسلة كتل من الطبقة الثانية أو "L2"، حيث ترث الأمان من سلسلة الإيثيريوم ولكنها توفر رسوم غازٍ أقل بشكلٍ كبير. -عندما تقوم بنشر أو ترقية الغرافات الفرعية الخاصة بك إلى شبكة الغراف، فأنت تتفاعل مع عقودٍ ذكيةٍ في البروتوكول وهذا يتطلب دفع رسوم الغاز باستخدام عملة الايثيريوم. من خلال نقل غرافاتك الفرعية إلى آربترم، فإن أي ترقيات مستقبلية لغرافك الفرعي ستتطلب رسوم غازٍ أقل بكثير. الرسوم الأقل، وكذلك حقيقة أن منحنيات الترابط التنسيقي على الطبقة الثانية مستقيمة، تجعل من الأسهل على المنسِّقين الآخرين تنسيق غرافك الفرعي، ممّا يزيد من مكافآت المفهرِسين على غرافك الفرعي. هذه البيئة ذات التكلفة-الأقل كذلك تجعل من الأرخص على المفهرسين أن يقوموا بفهرسة وخدمة غرافك الفرعي. سوف تزداد مكافآت الفهرسة على آربترم وتتناقص على شبكة إيثيريوم الرئيسية على مدى الأشهر المقبلة، لذلك سيقوم المزيد والمزيد من المُفَهرِسين بنقل ودائعهم المربوطة وتثبيت عملياتهم على الطبقة الثانية. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. 
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## فهم ما يحدث مع الإشارة وغرافك الفرعي على الطبقة الأولى وعناوين مواقع الإستعلام +## Understanding what happens with signal, your L1 Subgraph and query URLs -عند نقل سبجراف إلى Arbitrum، يتم استخدام جسر Arbitrum GRT، الذي بدوره يستخدم جسر Arbitrum الأصلي، لإرسال السبجراف إلى L2. سيؤدي عملية "النقل" إلى إهمال السبجراف على شبكة الإيثيريوم الرئيسية وإرسال المعلومات لإعادة إنشاء السبجراف على L2 باستخدام الجسر. ستتضمن أيضًا رصيد GRT المرهون المرتبط بمالك السبجراف، والذي يجب أن يكون أكبر من الصفر حتى يقبل الجسر النقل. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -عندما تختار نقل الرسم البياني الفرعي ، سيؤدي ذلك إلى تحويل جميع إشارات التنسيق الخاصة بالرسم الفرعي إلى GRT. هذا يعادل "إهمال" الرسم البياني الفرعي على الشبكة الرئيسية. سيتم إرسال GRT المستخدمة لعملية التنسيق الخاصة بك إلى L2 جمباً إلى جمب مع الرسم البياني الفرعي ، حيث سيتم استخدامها لإنتاج الإشارة نيابة عنك. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -يمكن للمنسقين الآخرين اختيار ما إذا كانوا سيسحبون جزء من GRT الخاص بهم ، أو نقله أيضًا إلى L2 لصك إشارة على نفس الرسم البياني الفرعي. 
إذا لم يقم مالك الرسم البياني الفرعي بنقل الرسم البياني الفرعي الخاص به إلى L2 وقام بإيقافه يدويًا عبر استدعاء العقد ، فسيتم إخطار المنسقين وسيتمكنون من سحب تنسيقهم. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -بمجرد نقل الرسم البياني الفرعي ، لن يتلقى المفهرسون بعد الآن مكافآت لفهرسة الرسم البياني الفرعي، نظرًا لأنه يتم تحويل كل التنسيق لـ GRT. ومع ذلك ، سيكون هناك مفهرسون 1) سيستمرون في خدمة الرسوم البيانية الفرعية المنقولة لمدة 24 ساعة ، و 2) سيبدأون فورًا في فهرسة الرسم البياني الفرعي على L2. ونظرًا لأن هؤلاء المفهرسون لديهم بالفعل رسم بياني فرعي مفهرس ، فلا داعي لانتظار مزامنة الرسم البياني الفرعي ، وسيكون من الممكن الاستعلام عن الرسم البياني الفرعي على L2 مباشرة تقريبًا. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -يجب إجراء الاستعلامات على الرسم البياني الفرعي في L2 على عنوان URL مختلف (على \`` Arbitrum-gateway.thegraph.com`) ، لكن عنوان URL L1 سيستمر في العمل لمدة 48 ساعة على الأقل. بعد ذلك ، ستقوم بوابة L1 بإعادة توجيه الاستعلامات إلى بوابة L2 (لبعض الوقت) ، ولكن هذا سيضيف زمن تأخير لذلك يوصى تغيير جميع استعلاماتك إلى عنوان URL الجديد في أقرب وقت ممكن. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. 
After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## اختيار محفظة L2 الخاصة بك -عندما قمت بنشر subgraph الخاص بك على الشبكة الرئيسية ، فقد استخدمت محفظة متصلة لإنشاء subgraph ، وتمتلك هذه المحفظة NFT الذي يمثل هذا subgraph ويسمح لك بنشر التحديثات. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -عند نقل الرسم البياني الفرعي إلى Arbitrum ، يمكنك اختيار محفظة مختلفة والتي ستمتلك هذا الـ subgraph NFT على L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. إذا كنت تستخدم محفظة "عادية" مثل MetaMask (حساب مملوك خارجيًا EOA ، محفظة ليست بعقد ذكي) ، فهذا اختياري ويوصى بالاحتفاظ بعنوان المالك نفسه كما في L1. -إذا كنت تستخدم محفظة بعقد ذكي ، مثل multisig (على سبيل المثال Safe) ، فإن اختيار عنوان مختلف لمحفظة L2 أمر إلزامي ، حيث من المرجح أن هذا الحساب موجود فقط على mainnet ولن تكون قادرًا على إجراء المعاملات على Arbitrum باستخدام هذه المحفظة. إذا كنت ترغب في الاستمرار في استخدام محفظة عقد ذكية أو multisig ، فقم بإنشاء محفظة جديدة على Arbitrum واستخدم عنوانها كمالك للرسم البياني الفرعي الخاص بك على L2. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -** من المهم جدًا استخدام عنوان محفظة تتحكم فيه ، ويمكنه إجراء معاملات على Arbitrum. وإلا فسيتم فقد الرسم البياني الفرعي ولا يمكن استعادته. 
** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## التحضير لعملية النقل: إنشاء جسر لـبعض ETH -يتضمن نقل الغراف الفرعي إرسال معاملة عبر الجسر ، ثم تنفيذ معاملة أخرى على شبكة أربترم. تستخدم المعاملة الأولى الإيثيريوم على الشبكة الرئيسية ، وتتضمن بعضًا من إيثيريوم لدفع ثمن الغاز عند استلام الرسالة على الطبقة الثانية. ومع ذلك ، إذا كان هذا الغاز غير كافٍ ، فسيتعين عليك إعادة إجراء المعاملة ودفع ثمن الغاز مباشرةً على الطبقة الثانية (هذه هي "الخطوة 3: تأكيد التحويل" أدناه). يجب تنفيذ هذه الخطوة ** في غضون 7 أيام من بدء التحويل **. علاوة على ذلك ، سيتم إجراء المعاملة الثانية مباشرة على شبكة أربترم ("الخطوة 4: إنهاء التحويل على الطبقة الثانية"). لهذه الأسباب ، ستحتاج بعضًا من إيثيريوم في محفظة أربترم. إذا كنت تستخدم متعدد التواقيع أو عقداً ذكياً ، فيجب أن يكون هناك بعضًا من إيثيريوم في المحفظة العادية (حساب مملوك خارجيا) التي تستخدمها لتنفيذ المعاملات ، وليس على محفظة متعددة التواقيع. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. 
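The paragraph above ends by noting that the ETH must sit in the regular (EOA) wallet that executes the transactions, and the guide suggests starting with a small amount (around 0.01 ETH). A minimal pre-flight check sketch; the threshold is an assumption taken from that suggestion, not a protocol constant:

```typescript
// Hypothetical pre-flight check: does the executing (EOA) wallet hold
// enough bridged ETH on Arbitrum to cover the confirm/finish transactions?
// 0.01 ETH follows the small starting amount suggested in this guide;
// adjust as needed, since L2 gas prices vary.
const WEI_PER_ETH = 10n ** 18n;
const SUGGESTED_MIN_WEI = WEI_PER_ETH / 100n; // 0.01 ETH in wei

function needsMoreBridgedEth(balanceWei: bigint): boolean {
  return balanceWei < SUGGESTED_MIN_WEI;
}
```

Running a check like this before starting the transfer avoids discovering mid-flow, with the 7-day ticket clock already running, that the wallet cannot pay for the L2 transactions.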
يمكنك شراء إيثيريوم من بعض المنصات وسحبها مباشرة إلى أربترم، أو يمكنك استخدام جسر أربترم لإرسال إيثيريوم من محفظة الشبكة الرئيسية إلى الطبقة الثانية: [bridge.arbitrum.io](http://bridge.arbitrum.io). نظرًا لأن رسوم الغاز على أربترم أقل ، فستحتاج فقط إلى مبلغ صغير. من المستحسن أن تبدأ بمبلغ منخفض (على سبيل المثال ، 0.01 ETH) للموافقة على معاملتك. -## العثور على أداة نقل الغراف الفرعي +## Finding the Subgraph Transfer Tool -يمكنك العثور على أداة نقل L2 في صفحة الرسم البياني الفرعي الخاص بك على Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![أداة النقل](/img/L2-transfer-tool1.png) -إذا كنت متصلاً بالمحفظة التي تمتلك الغراف الفرعي، فيمكنك الوصول إليها عبر المستكشف، وذلك عن طريق الانتقال إلى صفحة الغراف الفرعي على المستكشف: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## الخطوة 1: بدء عملية النقل -قبل بدء عملية النقل، يجب أن تقرر أي عنوان سيكون مالكًا للغراف الفرعي على الطبقة الثانية (انظر "اختيار محفظة الطبقة الثانية" أعلاه)، ويُوصَى بشدة بأن يكون لديك بعضًا من الإيثيريوم لرسوم الغاز على أربترم. يمكنك الاطلاع على (التحضير لعملية النقل: تحويل بعضًا من إيثيريوم عبر الجسر." أعلاه). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -يرجى أيضًا ملاحظة أن نقل الرسم البياني الفرعي يتطلب وجود كمية غير صفرية من إشارة التنسيق عليه بنفس الحساب الذي يمتلك الرسم البياني الفرعي ؛ إذا لم تكن قد أشرت إلى الرسم البياني الفرعي ، فسيتعين عليك إضافة القليل من إشارة التنسيق (يكفي إضافة مبلغ صغير مثل 1 GRT).
+Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -بعد فتح أداة النقل، ستتمكن من إدخال عنوان المحفظة في الطبقة الثانية في حقل "عنوان محفظة الاستلام". تأكد من إدخال العنوان الصحيح هنا. بعد ذلك، انقر على "نقل الغراف الفرعي"، وسيتم طلب تنفيذ العملية في محفظتك. (يُرجى ملاحظة أنه يتم تضمين بعضًا من الإثيريوم لدفع رسوم الغاز في الطبقة الثانية). بعد تنفيذ العملية، سيتم بدء عملية النقل وإهمال الغراف الفرعي في الطبقة الأولى. (يمكنك الاطلاع على "فهم ما يحدث مع الإشارة والغراف الفرعي في الطبقة الأولى وعناوين الاستعلام" أعلاه لمزيد من التفاصيل حول ما يحدث خلف الكواليس). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -إذا قمت بتنفيذ هذه الخطوة، \*\*يجب عليك التأكد من أنك ستستكمل الخطوة 3 في غضون 7 أيام، وإلا فإنك ستفقد الغراف الفرعي والإشارة GRT الخاصة بك. يرجع ذلك إلى آلية التواصل بين الطبقة الأولى والطبقة الثانية في أربترم: الرسائل التي ترسل عبر الجسر هي "تذاكر قابلة لإعادة المحاولة" يجب تنفيذها في غضون 7 أيام، وقد يتطلب التنفيذ الأولي إعادة المحاولة إذا كان هناك زيادة في سعر الغاز على أربترم. 
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## الخطوة 2: الانتظار حتى يتم نقل الغراف الفرعي إلى الطبقة الثانية +## Step 2: Waiting for the Subgraph to get to L2 -بعد بدء عملية النقل، يتعين على الرسالة التي ترسل الـ subgraph من L1 إلى L2 أن يتم نشرها عبر جسر Arbitrum. يستغرق ذلك حوالي 20 دقيقة (ينتظر الجسر لكتلة الشبكة الرئيسية التي تحتوي على المعاملة حتى يتأكد أنها "آمنة" من إمكانية إعادة ترتيب السلسلة). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). بمجرد انتهاء وقت الانتظار ، ستحاول Arbitrum تنفيذ النقل تلقائيًا على عقود L2. @@ -80,7 +80,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## الخطوة الثالثة: تأكيد التحويل -في معظم الحالات ، سيتم تنفيذ هذه الخطوة تلقائيًا لأن غاز الطبقة الثانية المضمن في الخطوة 1 يجب أن يكون كافيًا لتنفيذ المعاملة التي تتلقى الغراف الفرعي في عقود أربترم. ومع ذلك ، في بعض الحالات ، من الممكن أن يؤدي ارتفاع أسعار الغاز على أربترم إلى فشل هذا التنفيذ التلقائي. وفي هذه الحالة ، ستكون "التذكرة" التي ترسل غرافك الفرعي إلى الطبقة الثانية معلقة وتتطلب إعادة المحاولة في غضون 7 أيام. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. 
In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. في هذا الحالة ، فستحتاج إلى الاتصال باستخدام محفظة الطبقة الثانية والتي تحتوي بعضاً من إيثيريوم على أربترم، قم بتغيير شبكة محفظتك إلى أربترم، والنقر فوق "تأكيد النقل" لإعادة محاولة المعاملة. @@ -88,33 +88,33 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## الخطوة 4: إنهاء عملية النقل على L2 -في هذه المرحلة، تم استلام الغراف الفرعي والـ GRT الخاص بك على أربترم، ولكن الغراف الفرعي لم يتم نشره بعد. ستحتاج إلى الربط باستخدام محفظة الطبقة الثانية التي اخترتها كمحفظة استلام، وتغيير شبكة محفظتك إلى أربترم، ثم النقر على "نشر الغراف الفرعي" +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![نشر الغراف الفرعي](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![انتظر حتى يتم نشر الغراف الفرعي](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -سيؤدي هذا إلى نشر الغراف الفرعي حتى يتمكن المفهرسون الذين يعملون في أربترم بالبدء في تقديم الخدمة. كما أنه سيعمل أيضًا على إصدار إشارة التنسيق باستخدام GRT التي تم نقلها من الطبقة الأولى. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Step 5: Updating the query URL -تم نقل غرافك الفرعي بنجاح إلى أربترم! للاستعلام عن الغراف الفرعي ، سيكون عنوان URL الجديد هو: +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -لاحظ أن ID الغراف الفرعي على أربترم سيكون مختلفًا عن الذي لديك في الشبكة الرئيسية، ولكن يمكنك العثور عليه في المستكشف أو استوديو. كما هو مذكور أعلاه (راجع "فهم ما يحدث للإشارة والغراف الفرعي في الطبقة الأولى وعناوين الاستعلام") سيتم دعم عنوان URL الطبقة الأولى القديم لفترة قصيرة ، ولكن يجب عليك تبديل استعلاماتك إلى العنوان الجديد بمجرد مزامنة الغراف الفرعي على الطبقة الثانية. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## كيفية نقل التنسيق الخاص بك إلى أربترم (الطبقة الثانية) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1.
When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## اختيار محفظة L2 الخاصة بك @@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g.
a Safe), then cho Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ In most cases, this step will auto-execute as the L2 gas included in step 1 shou ## Withdrawing your curation on L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
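The transfer guide in this file ends by giving the new L2 gateway URL format (`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`). As a minimal sketch of switching a client to that endpoint after the transfer, the helper below builds the post-transfer request; the API key and L2 Subgraph ID are placeholders, not real identifiers:

```python
# Hedged sketch: constructing the post-transfer query endpoint from
# "Step 5: Updating the query URL". YOUR_API_KEY and YOUR_L2_SUBGRAPH_ID
# are placeholders; take the real values from Subgraph Studio/Explorer.
import json

GATEWAY = "https://arbitrum-gateway.thegraph.com/api/{api_key}/subgraphs/id/{subgraph_id}"

def l2_query_request(api_key, l2_subgraph_id, query):
    """Return the (url, json_body) pair for a GraphQL POST to the L2 gateway."""
    url = GATEWAY.format(api_key=api_key, subgraph_id=l2_subgraph_id)
    return url, json.dumps({"query": query})

url, body = l2_query_request(
    "YOUR_API_KEY", "YOUR_L2_SUBGRAPH_ID", "{ _meta { block { number } } }"
)
print(url)
```

As the guide notes, the old L1 URL keeps working only for a short while, so the switch should happen as soon as the Subgraph has synced on L2.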
diff --git a/website/src/pages/ar/archived/sunrise.mdx b/website/src/pages/ar/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/ar/archived/sunrise.mdx +++ b/website/src/pages/ar/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? 
+### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). 
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. 
As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
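The sunrise FAQ above states two retirement conditions for the upgrade Indexer: it supports a Subgraph until at least 3 other Indexers serve it consistently, or until the Subgraph has not been queried for 30 days. A tiny illustrative predicate (not The Graph's actual implementation; the function and parameter names are invented here) captures that logic:

```python
# Illustrative only: the two conditions from the FAQ under which the
# upgrade Indexer can stop supporting a Subgraph. Not real indexer code.
MIN_OTHER_INDEXERS = 3   # "at least 3 other Indexers" serving queries
MAX_IDLE_DAYS = 30       # "not been queried in the last 30 days"

def upgrade_indexer_can_retire(reliable_other_indexers, days_since_last_query):
    return (reliable_other_indexers >= MIN_OTHER_INDEXERS
            or days_since_last_query > MAX_IDLE_DAYS)

print(upgrade_indexer_can_retire(3, 0))   # enough other Indexers
print(upgrade_indexer_can_retire(1, 45))  # idle for over 30 days
print(upgrade_indexer_can_retire(1, 5))   # still needed as a fallback
```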
diff --git a/website/src/pages/ar/global.json b/website/src/pages/ar/global.json index b543fd624f0e..d9110259f5cb 100644 --- a/website/src/pages/ar/global.json +++ b/website/src/pages/ar/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "متعدد-السلاسل", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "الوصف", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "الوصف", + "liveResponse": "Live Response", + "example": "Example" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/ar/index.json b/website/src/pages/ar/index.json index c53846a9d8fa..2443372843a8 100644 --- a/website/src/pages/ar/index.json +++ b/website/src/pages/ar/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -39,12 +39,12 @@ "title": "الشبكات المدعومة", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "النوع", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "التوثيق", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -68,7 +68,7 @@ "name": "Name", "id": "ID", "subgraphs": "Subgraphs", - "substreams": "Substreams", + "substreams": "متعدد-السلاسل", "firehose": "Firehose", "tokenapi": "Token API" } @@ -80,7 +80,7 @@ "description": "Kickstart your journey into subgraph development." }, "substreams": { - "title": "Substreams", + "title": "متعدد-السلاسل", "description": "Stream high-speed data for real-time indexing." }, "timeseries": { @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "الفوترة", "description": "Optimize costs and manage billing efficiently." 
} }, @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/ar/indexing/chain-integration-overview.mdx b/website/src/pages/ar/indexing/chain-integration-overview.mdx index e6b95ec0fc17..af9a582b58d3 100644 --- a/website/src/pages/ar/indexing/chain-integration-overview.mdx +++ b/website/src/pages/ar/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ Ready to shape the future of The Graph Network? [Start your proposal](https://gi ### 2. ماذا يحدث إذا تم دعم فايرهوز و سبستريمز بعد أن تم دعم الشبكة على الشبكة الرئيسية؟ -هذا سيؤثر فقط على دعم البروتوكول لمكافآت الفهرسة على الغرافات الفرعية المدعومة من سبستريمز. تنفيذ الفايرهوز الجديد سيحتاج إلى الفحص على شبكة الاختبار، وفقًا للمنهجية الموضحة للمرحلة الثانية في هذا المقترح لتحسين الغراف. 
وعلى نحو مماثل، وعلى افتراض أن التنفيذ فعال وموثوق به، سيتتطالب إنشاء طلب سحب على [مصفوفة دعم الميزات] (https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) ("مصادر بيانات سبستريمز" ميزة للغراف الفرعي)، بالإضافة إلى مقترح جديد لتحسين الغراف، لدعم البروتوكول لمكافآت الفهرسة. يمكن لأي شخص إنشاء طلب السحب ومقترح تحسين الغراف؛ وسوف تساعد المؤسسة في الحصول على موافقة المجلس. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/ar/indexing/new-chain-integration.mdx b/website/src/pages/ar/indexing/new-chain-integration.mdx index bff012725d9d..b204d002b25d 100644 --- a/website/src/pages/ar/indexing/new-chain-integration.mdx +++ b/website/src/pages/ar/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. 
Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. 
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## تكوين عقدة الغراف -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [استنسخ عقدة الغراف](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. 
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools make it possible to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.

diff --git a/website/src/pages/ar/indexing/overview.mdx b/website/src/pages/ar/indexing/overview.mdx
index 3bfd1cc210c3..200a3a6a64e5 100644
--- a/website/src/pages/ar/indexing/overview.mdx
+++ b/website/src/pages/ar/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i

GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network.

-يختار المفهرسون subgraphs للقيام بالفهرسة بناء على إشارة تنسيق subgraphs ، حيث أن المنسقون يقومون ب staking ل GRT وذلك للإشارة ل Subgraphs عالية الجودة. يمكن أيضا للعملاء (مثل التطبيقات) تعيين بارامترات حيث يقوم المفهرسون بمعالجة الاستعلامات ل Subgraphs وتسعير رسوم الاستعلام.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing.

## FAQ

@@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT.

**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.

-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network.

### How are indexing rewards distributed?

-Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
+Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**

Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack.

### What is a proof of indexing (POI)?
-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.
+POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block.

### When are indexing rewards distributed?

@@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap

Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps:

-1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
+1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:

```graphql
query indexerAllocations {
@@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that

- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%.

-### How do Indexers know which subgraphs to index?
+### How do Indexers know which Subgraphs to index?
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network:

-- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up.
+- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up.

-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand.

-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply.
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards.

### What are the hardware requirements?

-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
+- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded.
- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
-- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.

| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
| --- | :-: | :-: | :-: | :-: | :-: |
@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making

## Infrastructure

-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.

-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.

-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.

-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.

- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.

-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexer agent** - Facilitates the Indexer's interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations.

- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer

| Port | Purpose | Routes | CLI Argument | Environment Variable |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
+| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |

@@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer

| Port | Purpose | Routes | CLI Argument | Environment Variable |
| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |

#### Indexer Agent

@@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`.

### Graph Node

-[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.

#### Getting started from source

@@ -365,9 +365,9 @@ docker-compose up

To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components:

-- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
+- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.

- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.

@@ -525,7 +525,7 @@ graph indexer status

#### Indexer management using Indexer CLI

-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer. The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.

#### Usage

@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar

- `graph indexer rules set [options] ...` - Set one or more indexing rules.

-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed.

- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.

@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported

#### Indexing rules

-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.

-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
Data model:

@@ -679,7 +679,7 @@ graph indexer actions execute approve

Note that supported action types for allocation management have different input requirements:

-- `Allocate` - allocate stake to a specific subgraph deployment
+- `Allocate` - allocate stake to a specific Subgraph deployment

  - required action params:
    - deploymentID

@@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input

    - poi
    - force (forces using the provided POI even if it doesn’t match what the graph-node provides)

-- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment
+- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment

  - required action params:
    - allocationID

@@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input

#### Cost models

-Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
+Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.

#### Agora

@@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi

6. Call `stake()` to stake GRT in the protocol.

-7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.

-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks.
```
setDelegationParameters(950000, 600000, 500)

@@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st

After being created by an Indexer a healthy allocation goes through two states.

-- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.

- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)).

-Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.

diff --git a/website/src/pages/ar/indexing/supported-network-requirements.mdx b/website/src/pages/ar/indexing/supported-network-requirements.mdx
index 9c820d055399..4205fe314802 100644
--- a/website/src/pages/ar/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/ar/indexing/supported-network-requirements.mdx
@@ -6,7 +6,7 @@ title: Supported Network Requirements

| --- | --- | --- | :-: |
| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)<br />[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 8 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 5 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
-| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)<br /><br />[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)<br />[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU<br />Debian 12/Ubuntu 22.04<br />16 GB RAM<br />>= 4.5TB (NVME preffered)<br />_last updated 14th May 2024_ | ✅ |
+| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)<br /><br />[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)<br />[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU<br />Debian 12/Ubuntu 22.04<br />16 GB RAM<br />>= 4.5TB (NVME preferred)<br />_last updated 14th May 2024_ | ✅ |
| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU<br />Ubuntu 22.04<br />>=32 GB RAM<br />>= 14 TiB NVMe SSD<br />_last updated 22nd June 2024_ | ✅ |
| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 2 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count<br />Ubuntu 22.04<br />16GB+ RAM<br />>=3TB (NVMe recommended)<br />_last updated August 2023_ | ✅ |

diff --git a/website/src/pages/ar/indexing/tap.mdx b/website/src/pages/ar/indexing/tap.mdx
index ee96a02cd5b8..e7085e5680bb 100644
--- a/website/src/pages/ar/indexing/tap.mdx
+++ b/website/src/pages/ar/indexing/tap.mdx
@@ -1,21 +1,21 @@
---
-title: TAP Migration Guide
+title: GraphTally Guide
---

-Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust.

## نظره عامة

-[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features:
+GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features:

- Efficiently handles micropayments.
- Adds a layer of consolidations to onchain transactions and costs.
- Allows Indexers control of receipts and payments, guaranteeing payment for queries.
- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.

-## Specifics
+### Specifics

-TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.

For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value.

@@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed

| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |

-### Requirements
+### Prerequisites

-In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`.
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`.

-- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
-- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)

-> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/ar/indexing/tooling/graph-node.mdx b/website/src/pages/ar/indexing/tooling/graph-node.mdx index 0250f14a3d08..edde8a157fd3 100644 --- a/website/src/pages/ar/indexing/tooling/graph-node.mdx +++ b/website/src/pages/ar/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
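For context, EIP-1898 lets `eth_call` pin state to an explicit block rather than a block tag, which is what historical `eth_calls` during indexing rely on. A sketch of such a JSON-RPC request (the contract address and calldata are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_call",
  "params": [
    { "to": "0x0000000000000000000000000000000000000001", "data": "0x70a08231" },
    { "blockNumber": "0x10d4f" }
  ]
}
```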
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -79,8 +79,8 @@ When it is running Graph Node exposes the following ports: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ When it is running Graph Node exposes the following ports: ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. 
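The deployment-rule semantics above — the first rule whose `match` fits the Subgraph's name and network wins — can be modeled with a toy sketch. This is not Graph Node's actual implementation, and the rule shapes are simplified:

```python
import re

# Each rule: optional regex on Subgraph name, optional list of networks,
# plus the indexers that should receive matching deployments.
rules = [
    {"name": r"(vip|important)/.*", "indexers": ["index_node_vip_0"]},
    {"network": ["xdai", "poa-core"], "indexers": ["index_node_other_0"]},
    {"indexers": ["index_node_community_0"]},  # no 'match': catches everything
]

def place(subgraph_name: str, network: str) -> list:
    """Return the indexers for the first rule that matches (first match wins)."""
    for rule in rules:
        if "name" in rule and not re.fullmatch(rule["name"], subgraph_name):
            continue
        if "network" in rule and network not in rule["network"]:
            continue
        return rule["indexers"]
    return []

print(place("vip/uniswap", "mainnet"))   # ['index_node_vip_0']
print(place("some/subgraph", "xdai"))    # ['index_node_other_0']
print(place("some/subgraph", "mainnet")) # ['index_node_community_0']
```

The trailing catch-all rule mirrors the "There's no 'match', so any Subgraph matches" entry in the example configuration.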
When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and replicas can be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). 
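As a sketch, both per-shard connection pools and per-network provider lists live in `config.toml`. The names and URLs below are placeholders, and the authoritative shape is in the Graph Node configuration docs:

```toml
[store.primary]
connection = "postgresql://graph:password@primary-db/graph"
# Upper limit per graph-node instance; idle connections are not kept open.
pool_size = 50

[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
# Two providers: an archive node for historical eth_calls/traces,
# and a cheaper full node that Graph Node can prefer when the workload allows.
provider = [
  { label = "mainnet-archive", url = "http://archive-node:8545", features = ["archive", "traces"] },
  { label = "mainnet-full", url = "http://full-node:8545", features = [] },
]
```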
@@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. 
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
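One way to gather such diagnostics is the indexing status API on port 8030 described earlier. A sketch query is below — field names follow the linked schema, so verify them against your Graph Node version:

```graphql
{
  indexingStatuses {
    subgraph
    health
    synced
    fatalError {
      message
    }
    chains {
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}
```

Comparing `latestBlock` to `chainHeadBlock` shows how far behind the chain head a given deployment is.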
-#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing, Subgraphs might fail if they encounter unexpected data, a component not working as expected, or a bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. 
If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .` will turn on the account-like optimization for queries against that table. 
The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/ar/indexing/tooling/graphcast.mdx b/website/src/pages/ar/indexing/tooling/graphcast.mdx index 8fc00976ec28..d084edcd7067 100644 --- a/website/src/pages/ar/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ar/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Learn More diff --git a/website/src/pages/ar/resources/benefits.mdx b/website/src/pages/ar/resources/benefits.mdx index 2e1a0834591c..00a32f92a1a3 100644 --- a/website/src/pages/ar/resources/benefits.mdx +++ b/website/src/pages/ar/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ar/resources/glossary.mdx b/website/src/pages/ar/resources/glossary.mdx index f922950390a6..d456a94f63ab 100644 --- a/website/src/pages/ar/resources/glossary.mdx +++ b/website/src/pages/ar/resources/glossary.mdx @@ -4,51 +4,51 @@ title: قائمة المصطلحات - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
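The allocation lifecycle described a few entries above (open at least one epoch before closing, stale past 28 epochs) can be sketched as a toy status check. This is a model for illustration only, not protocol code; the state names are informal.

```typescript
// Toy model of the allocation states from the glossary (not protocol code).
// An allocation must stay open at least 1 epoch before it can be closed;
// left open beyond 28 epochs, it becomes stale.
type AllocationStatus = "too-young-to-close" | "closable" | "stale";

function allocationStatus(epochsOpen: number): AllocationStatus {
  if (epochsOpen > 28) return "stale"; // past the 28-epoch maximum
  if (epochsOpen >= 1) return "closable"; // may be closed with a recent, valid POI
  return "too-young-to-close"; // active, but open for less than one epoch
}
```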
@@ -56,28 +56,28 @@ title: قائمة المصطلحات - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx index 9fe263f2f8b2..40086bb24579 100644 --- a/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: دليل ترحيل AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -سيمكن ذلك لمطوري ال Subgraph من استخدام مميزات أحدث للغة AS والمكتبة القياسية. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## مميزات @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## كيف تقوم بالترقية؟ -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -إذا لم تكن متأكدا من اختيارك ، فنحن نوصي دائما باستخدام الإصدار الآمن. إذا كانت القيمة غير موجودة ، فقد ترغب في القيام بعبارة if المبكرة مع قيمة راجعة في معالج الـ subgraph الخاص بك. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ in assembly/index.ts(4,3) ### مقارانات Null -من خلال إجراء الترقية على ال Subgraph الخاص بك ، قد تحصل أحيانًا على أخطاء مثل هذه: +When upgrading your Subgraph, you might sometimes get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -لقد فتحنا مشكلة في مترجم AssemblyScript ، ولكن في الوقت الحالي إذا أجريت هذا النوع من العمليات في Subgraph mappings ، فيجب عليك تغييرها لإجراء فحص ل null قبل ذلك. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first.
```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -سيتم تجميعها لكنها ستتوقف في وقت التشغيل ، وهذا يحدث لأن القيمة لم تتم تهيئتها ، لذا تأكد من أن ال subgraph قد قام بتهيئة قيمها ، على النحو التالي: +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. -> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
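The testing endpoint mentioned in the validations guide can be sketched as a small helper. The user and Subgraph names below are hypothetical placeholders standing in for the `$GITHUB_USER` and `$SUBGRAPH_NAME` variables the guide refers to.

```typescript
// Sketch: build the stricter-validation testing endpoint described in the guide.
// "example-user" and "example-subgraph" are hypothetical placeholder values.
function validationEndpoint(githubUser: string, subgraphName: string): string {
  return `https://api-next.thegraph.com/subgraphs/name/${githubUser}/${subgraphName}`;
}

const endpoint = validationEndpoint("example-user", "example-subgraph");
// Point your GraphQL client at `endpoint` temporarily; queries rejected by the
// stricter validations will surface as errors you can fix before switching back.
```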
## Migration CLI tool diff --git a/website/src/pages/ar/resources/roles/curating.mdx b/website/src/pages/ar/resources/roles/curating.mdx index d2f355055aac..e73785e92590 100644 --- a/website/src/pages/ar/resources/roles/curating.mdx +++ b/website/src/pages/ar/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. 
In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with.
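The 1% curation tax described on this page (the GRT used to pay it is burned) works out as simple arithmetic. The sketch below is illustrative only; the 1,000 GRT figure is an arbitrary example amount.

```typescript
// Worked example of the 1% curation tax described on this page.
// Signaling burns 1% of the deposited GRT; the remainder is signaled.
const CURATION_TAX = 0.01;

function signal(amountGrt: number): { burned: number; signaled: number } {
  const burned = amountGrt * CURATION_TAX;
  return { burned, signaled: amountGrt - burned };
}

const result = signal(1000);
// Signaling 1,000 GRT burns roughly 10 GRT and signals roughly 990 GRT.
```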
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## كيفية الإشارة -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -يمكن للمنسق الإشارة إلى إصدار معين ل subgraph ، أو يمكنه اختيار أن يتم ترحيل migrate إشاراتهم تلقائيا إلى أحدث إصدار لهذا ال subgraph. كلاهما استراتيجيات سليمة ولها إيجابيات وسلبيات. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. 
Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## المخاطر 1. سوق الاستعلام يعتبر حديثا في The Graph وهناك خطر من أن يكون٪ APY الخاص بك أقل مما تتوقع بسبب ديناميكيات السوق الناشئة. -2. 
Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. يمكن أن يفشل ال subgraph بسبب خطأ. ال subgraph الفاشل لا يمكنه إنشاء رسوم استعلام. نتيجة لذلك ، سيتعين عليك الانتظار حتى يصلح المطور الخطأ وينشر إصدارا جديدا. - - إذا كنت مشتركا في أحدث إصدار من subgraph ، فسيتم ترحيل migrate أسهمك تلقائيا إلى هذا الإصدار الجديد. هذا سيتحمل ضريبة تنسيق بنسبة 0.5٪. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. 
+ - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## الأسئلة الشائعة حول التنسيق ### 1. ما هي النسبة المئوية لرسوم الاستعلام التي يكسبها المنسقون؟ -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. كيف يمكنني تقرير ما إذا كان ال subgraph عالي الجودة لكي أقوم بالإشارة إليه؟ +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. 
It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5.
هل يمكنني بيع أسهم التنسيق الخاصة بي؟ diff --git a/website/src/pages/ar/resources/subgraph-studio-faq.mdx b/website/src/pages/ar/resources/subgraph-studio-faq.mdx index 74c0228e4093..ec613ed68df2 100644 --- a/website/src/pages/ar/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ar/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: الأسئلة الشائعة حول الفرعيةرسم بياني اس ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? 
-You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -تذكر أنه يمكنك إنشاء API key والاستعلام عن أي subgraph منشور على الشبكة ، حتى إذا قمت ببناء subgraph بنفسك. حيث أن الاستعلامات عبر API key الجديد ، هي استعلامات مدفوعة مثل أي استعلامات أخرى على الشبكة. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. Queries made via the new API key are paid queries, like any other query on the network. diff --git a/website/src/pages/ar/resources/tokenomics.mdx b/website/src/pages/ar/resources/tokenomics.mdx index 511af057534f..fa0f098b22c8 100644 --- a/website/src/pages/ar/resources/tokenomics.mdx +++ b/website/src/pages/ar/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## نظره عامة -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web.
If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. المنسقون (Curators) - يبحثون عن أفضل subgraphs للمفهرسين +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. المفهرسون (Indexers) - العمود الفقري لبيانات blockchain @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. 
While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### إنشاء subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. 
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### الاستعلام عن subgraph موجود +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. 
Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. 
Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/ar/sps/introduction.mdx b/website/src/pages/ar/sps/introduction.mdx index 2336653c0e06..e74abf2f0998 100644 --- a/website/src/pages/ar/sps/introduction.mdx +++ b/website/src/pages/ar/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: مقدمة --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## نظره عامة -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1.
**Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
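The entity-changes path (method 2 above) can be sketched conceptually in plain TypeScript: a Substreams module emits a list of entity-change operations, and graph-node applies them to its store. Everything below is an illustrative stand-in, not the real `EntityChanges` Protobuf or the graph-node store API.

```typescript
// Toy model of graph-node consuming a module's entity-change output.
// Shapes and names here are illustrative stand-ins only.

type Operation = "CREATE" | "UPDATE" | "DELETE";

interface EntityChange {
  operation: Operation;
  entityType: string; // e.g. "Transfer"
  id: string; // entity id
  fields: Record<string, string | number>;
}

// In-memory stand-in for the entity store, keyed by "Type:id".
const store = new Map<string, Record<string, string | number>>();

function applyChange(change: EntityChange): void {
  const key = `${change.entityType}:${change.id}`;
  switch (change.operation) {
    case "CREATE":
      store.set(key, { id: change.id, ...change.fields });
      break;
    case "UPDATE": {
      // Merge new fields over whatever is already stored.
      const existing = store.get(key) ?? { id: change.id };
      store.set(key, { ...existing, ...change.fields });
      break;
    }
    case "DELETE":
      store.delete(key);
      break;
  }
}

// A hypothetical module output for one block: create two transfers, amend one.
const blockOutput: EntityChange[] = [
  { operation: "CREATE", entityType: "Transfer", id: "0xabc-1", fields: { amount: 100 } },
  { operation: "CREATE", entityType: "Transfer", id: "0xabc-2", fields: { amount: 250 } },
  { operation: "UPDATE", entityType: "Transfer", id: "0xabc-2", fields: { amount: 300 } },
];

blockOutput.forEach(applyChange);
```

The key point of this design is that the transformation logic lives in the (parallelizable) Substreams module, and graph-node only has to replay a flat list of changes in block order.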
### مصادر إضافية @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ar/sps/sps-faq.mdx b/website/src/pages/ar/sps/sps-faq.mdx index 88f4ddbb66d7..c19b0a950297 100644 --- a/website/src/pages/ar/sps/sps-faq.mdx +++ b/website/src/pages/ar/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## ما هي الغرافات الفرعية المدعومة بسبستريمز؟ +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. 
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## كيف تختلف الغرافات الفرعية التي تعمل بسبستريمز عن الغرافات الفرعية؟ +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## ما هي فوائد استخدام الغرافات الفرعية المدعومة بسبستريمز؟ +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph.
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## ماهي فوائد سبستريمز؟ @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- التوجيه لأي مكان: يمكنك توجيه بياناتك لأي مكان ترغب فيه: بوستجريسكيو، مونغو دي بي، كافكا، الغرافات الفرعية، الملفات المسطحة، جداول جوجل. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - يستفيد من الملفات المسطحة: يتم استخراج بيانات سلسلة الكتل إلى ملفات مسطحة، وهي أرخص وأكثر موارد الحوسبة تحسيناً. -## أين يمكن للمطورين الوصول إلى مزيد من المعلومات حول الغرافات الفرعية المدعومة بسبستريمز و سبستريمز؟ +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? 
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -تعتبر وحدات رست مكافئة لمعينات أسمبلي اسكريبت في الغرافات الفرعية. يتم ترجمتها إلى ويب أسيمبلي بنفس الطريقة، ولكن النموذج البرمجي يسمح بالتنفيذ الموازي. تحدد وحدات رست نوع التحويلات والتجميعات التي ترغب في تطبيقها على بيانات سلاسل الكتل الخام. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -على سبيل المثال، يمكن لأحمد بناء وحدة أسعار اسواق الصرف اللامركزية، ويمكن لإبراهيم استخدامها لبناء مجمِّع حجم للتوكن المهتم بها، ويمكن لآدم دمج أربع وحدات أسعار ديكس فردية لإنشاء مورد أسعار. سيقوم طلب واحد من سبستريمز بتجميع جميع هذه الوحدات الفردية، وربطها معًا لتقديم تدفق بيانات أكثر تطوراً ودقة. يمكن استخدام هذا التدفق لملءغراف فرعي ويمكن الاستعلام عنه من قبل المستخدمين. 
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules, linking them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## كيف يمكنك إنشاء ونشر غراف فرعي مدعوم بسبستريمز؟ After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## أين يمكنني العثور على أمثلة على سبستريمز والغرافات الفرعية المدعومة بسبستريمز؟ +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -يمكنك زيارة [جيت هب](https://github.com/pinax-network/awesome-substreams) للعثور على أمثلة للسبستريمز والغرافات الفرعية المدعومة بسبستريمز. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## ماذا تعني السبستريمز والغرافات الفرعية المدعومة بسبستريمز بالنسبة لشبكة الغراف؟ +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? إن التكامل مع سبستريمز والغرافات الفرعية المدعومة بسبستريمز واعدة بالعديد من الفوائد، بما في ذلك عمليات فهرسة عالية الأداء وقابلية أكبر للتركيبية من خلال استخدام وحدات المجتمع والبناء عليها. diff --git a/website/src/pages/ar/sps/triggers.mdx b/website/src/pages/ar/sps/triggers.mdx index 05eccf4d55fb..1bf1a2cf3f51 100644 --- a/website/src/pages/ar/sps/triggers.mdx +++ b/website/src/pages/ar/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## نظره عامة -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). 
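Stripped of the AssemblyScript and Protobuf specifics, the trigger flow above is: decode the raw bytes into a `Transactions` object, loop over the transactions, and create one entity per transaction. The sketch below mimics that in plain TypeScript — the JSON decoder and entity map are stand-ins for the generated as-proto classes and the graph-ts store.

```typescript
// Plain-TypeScript stand-in for the handleTransactions trigger flow:
// bytes -> decoded Transactions -> one entity per transaction.

interface Transaction {
  hash: string;
  from: string;
  to: string;
}

interface Transactions {
  transactions: Transaction[];
}

// Stand-in decoder: the generated as-proto code would decode Protobuf here.
function decodeTransactions(bytes: Uint8Array): Transactions {
  let text = "";
  for (let i = 0; i < bytes.length; i++) text += String.fromCharCode(bytes[i]);
  return JSON.parse(text) as Transactions;
}

// Stand-in for the entity store, keyed by entity id.
const entities = new Map<string, Transaction>();

export function handleTransactions(bytes: Uint8Array): void {
  const decoded = decodeTransactions(bytes);
  for (const tx of decoded.transactions) {
    // Use the transaction hash as the entity id, one entity per transaction.
    entities.set(tx.hash, tx);
  }
}

// Simulate one trigger invocation carrying two transactions.
const json = JSON.stringify({
  transactions: [
    { hash: "0x01", from: "0xaaa", to: "0xbbb" },
    { hash: "0x02", from: "0xccc", to: "0xddd" },
  ],
});
const payload = new Uint8Array(json.length);
for (let i = 0; i < json.length; i++) payload[i] = json.charCodeAt(i);

handleTransactions(payload);
```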
### مصادر إضافية diff --git a/website/src/pages/ar/sps/tutorial.mdx b/website/src/pages/ar/sps/tutorial.mdx index 21f99fff2832..c41b10d885cd 100644 --- a/website/src/pages/ar/sps/tutorial.mdx +++ b/website/src/pages/ar/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Get Started @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events.
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
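The before/after contrast in this best practice can be simulated directly: the first handler style needs one (mock) `eth_call` per `Transfer` event, while the second reads the pool from the richer `TransferWithPool` event and makes none. The contract call and event shapes below are local mocks of the hypothetical example above, not the real graph-ts bindings.

```typescript
// Mock comparison of the two handler styles. getPoolInfo stands in for a
// contract eth_call; ethCallCount tracks how many external calls each
// approach makes per event.

let ethCallCount = 0;

function getPoolInfo(token: string): string {
  ethCallCount += 1; // each call would be a slow external RPC round-trip
  return `pool-of-${token}`;
}

interface TransferEvent {
  token: string;
  from: string;
  to: string;
  value: number;
}

interface TransferWithPoolEvent extends TransferEvent {
  poolInfo: string;
}

const entities: { id: string; pool: string }[] = [];

// Style 1: the basic Transfer event forces an eth_call per event.
function handleTransfer(event: TransferEvent): void {
  const pool = getPoolInfo(event.token);
  entities.push({ id: `${event.from}-${event.to}`, pool });
}

// Style 2: TransferWithPool already carries poolInfo, so no call is needed.
function handleTransferWithPool(event: TransferWithPoolEvent): void {
  entities.push({ id: `${event.from}-${event.to}`, pool: event.poolInfo });
}

handleTransfer({ token: "0xT", from: "0xa", to: "0xb", value: 1 });
const callsWithEthCall = ethCallCount;

handleTransferWithPool({ token: "0xT", from: "0xc", to: "0xd", value: 2, poolInfo: "pool-of-0xT" });
const callsWithRicherEvent = ethCallCount - callsWithEthCall;
```

Both handlers produce the same entity shape; the only difference is where the pool data comes from, which is exactly why emitting it in the event removes the indexing bottleneck.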
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
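The first two features above can be sketched as a single query against the Post/Comment schema from the example (field names follow that example; `first: 5` is an arbitrary page size):

```graphql
{
  posts(first: 5) {
    id
    comments {
      # Derived: resolved from Comment.post, not stored on Post
      id
    }
  }
  comments(first: 5) {
    id
    post {
      # Reverse lookup back to the parent Post
      id
    }
  }
}
```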
diff --git a/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx index b77a40a5be90..d8de3e7a1fa2 100644 --- a/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### نظره عامة -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## مصادر إضافية - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
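To illustrate why the `Bytes` result is preferable, here is a hypothetical plain-TypeScript sketch of the kind of fixed-width byte concatenation that `concatI32()` performs. This is not graph-ts code; the function name, byte order, and placeholder hash are all illustrative:

```typescript
// Hypothetical illustration of concatenating a 32-byte transaction hash with
// a 4-byte log index into one fixed-width ID, similar in spirit to graph-ts's
// `event.transaction.hash.concatI32(event.logIndex.toI32())`.
function concatBytesWithI32(hash: Uint8Array, index: number): Uint8Array {
  const out = new Uint8Array(hash.length + 4)
  out.set(hash, 0) // copy the hash bytes first
  // Append the index as 4 bytes; the byte order here is illustrative only.
  new DataView(out.buffer).setInt32(hash.length, index, true)
  return out
}

const fakeHash = new Uint8Array(32).fill(0xab) // placeholder tx hash
const id = concatBytesWithI32(fakeHash, 7)
console.log(id.length) // 36: a fixed-width byte key
```

A fixed-width 36-byte key compares and indexes more efficiently than the roughly 70-character string produced by `toHex() + "-" + toString()` concatenation, which is the performance gap the text describes.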
diff --git a/website/src/pages/ar/subgraphs/best-practices/pruning.mdx b/website/src/pages/ar/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/ar/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <Number of blocks to retain>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx index 74e56c406044..d713d6cd8864 100644 --- a/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## نظره عامة @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
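The elided "Querying Aggregated Data" section can be sketched with a query like the following, using the `Stats` aggregation from the example above (the `interval` argument and timestamp value are illustrative assumptions based on the aggregations feature):

```graphql
{
  stats(interval: "hour", where: { timestamp_gt: 1704067200 }) {
    id
    timestamp
    sum
  }
}
```

Each returned row is a pre-computed hourly bucket, so no aggregation work happens at query time.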
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ar/subgraphs/billing.mdx b/website/src/pages/ar/subgraphs/billing.mdx index e5b5deb5c4ef..71e44f86c1ab 100644 --- a/website/src/pages/ar/subgraphs/billing.mdx +++ b/website/src/pages/ar/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: الفوترة ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx index d0f9bb2cc348..c35d101f373e 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## نظره عامة -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## أخطاء غير فادحة -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
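The performance claim above is easy to quantify. Using the illustrative durations from the scenario later in this section (3, 2 and 4 seconds for three calls), sequential execution costs the sum of the durations while declared, parallel execution costs only the maximum:

```typescript
// Illustrative durations (seconds) for three eth_calls fetching a user's
// transactions, balance, and token holdings.
const callDurations = [3, 2, 4];

// Sequential execution: each call waits for the previous one, so the
// total is the sum of all durations.
function sequentialTime(durations: number[]): number {
  return durations.reduce((total, d) => total + d, 0);
}

// Declared eth_calls let graph-node run the calls in parallel, so the
// total is bounded by the slowest single call: max(3, 2, 4).
function parallelTime(durations: number[]): number {
  return Math.max(...durations);
}
```

With these numbers, sequential fetching takes 9 seconds while parallel fetching takes 4, matching the "Total time taken = max (3, 2, 4) = 4 seconds" result shown below.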
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entities an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2518d7620204..3062fe900657 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## توليد الكود -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`.
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx index 8245a637cc8a..a721f6bcd8d4 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Version | Release notes | | :-: | --- | @@ -223,7 +223,7 @@ It adds the following method on top of the `Bytes` API: The `store` API allows you to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema.
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### إنشاء الكيانات @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### دعم أنواع الإيثيريوم -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. 
Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### الوصول إلى حالة العقد الذكي Smart Contract -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. 
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them.
The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx index 6c50af984ad0..b0ce00e687e3 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: مشاكل شائعة في أسمبلي سكريبت (AssemblyScript) --- -هناك بعض مشاكل [أسمبلي سكريبت](https://github.com/AssemblyScript/assemblyscript) المحددة، التي من الشائع الوقوع فيها أثتاء تطوير غرافٍ فرعي. وهي تتراوح في صعوبة تصحيح الأخطاء، ومع ذلك، فإنّ إدراكها قد يساعد. وفيما يلي قائمة غير شاملة لهذه المشاكل: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debugging difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
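The first issue in the list above (unenforced `private`) can be illustrated with plain TypeScript, which shares AssemblyScript's compile-time-only visibility semantics; the `Gravatar` class here is a hypothetical sketch, not code from the docs:

```typescript
// `private` is a compile-time annotation only: at runtime nothing stops
// outside code from mutating the field through the object.
class Gravatar {
  private displayName = "initial";
  getDisplayName(): string {
    return this.displayName;
  }
}

const gravatar = new Gravatar();
// Casting away the type information bypasses `private` entirely:
(gravatar as any).displayName = "mutated";
```

This is why mapping code should treat entity fields as mutable shared state rather than relying on `private` for protection.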
diff --git a/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx index b55d24367e50..81469bc1837b 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: قم بتثبيت Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## نظره عامة -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## إنشاء الـ Subgraph ### من عقد موجود -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### من مثال Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is يجب أن تتطابق ملف (ملفات) ABI مع العقد (العقود) الخاصة بك. هناك عدة طرق للحصول على ملفات ABI: - إذا كنت تقوم ببناء مشروعك الخاص ، فمن المحتمل أن تتمكن من الوصول إلى أحدث ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| الاصدار | ملاحظات الإصدار | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI; otherwise, running your Subgraph will fail. diff --git a/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx index 56d9abb39ae7..a9d52647e13e 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## نظره عامة -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema.
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
#### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### إضافة تعليقات إلى المخطط (schema) @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## اللغات المدعومة diff --git a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx index 8f2e787688c2..fa6c44e61fb2 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## نظره عامة -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| الاصدار | ملاحظات الإصدار | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx index ba893838ca4e..29a666a8a297 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## نظره عامة -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available for querying.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
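As a quick illustration of that note, the manifest's required shape can be sanity-checked before deploying. The following is a minimal sketch in plain TypeScript; the `checkManifest` helper and the inline manifest object are hypothetical illustrations, not part of Graph CLI (which performs the real validation).

```typescript
// Minimal structural check for a Subgraph manifest, mirroring the note
// above: every data source should declare its entities and handlers.
// Hypothetical helper; the real validation is done by Graph CLI.
interface DataSource {
  name: string
  mapping: { entities: string[]; eventHandlers?: object[] }
}

interface Manifest {
  specVersion: string
  schema: { file: string }
  dataSources: DataSource[]
}

function checkManifest(m: Manifest): string[] {
  const problems: string[] = []
  if (!m.specVersion) problems.push('missing specVersion')
  for (const ds of m.dataSources) {
    if (ds.mapping.entities.length === 0) problems.push(`${ds.name}: no entities`)
    if (!ds.mapping.eventHandlers || ds.mapping.eventHandlers.length === 0)
      problems.push(`${ds.name}: no handlers`)
  }
  return problems
}

// A deliberately incomplete manifest: one data source with nothing declared.
const manifest: Manifest = {
  specVersion: '1.3.0',
  schema: { file: './schema.graphql' },
  dataSources: [{ name: 'Gravity', mapping: { entities: [], eventHandlers: [] } }],
}

const problems = checkManifest(manifest)
// problems: ['Gravity: no entities', 'Gravity: no handlers']
```

Running such a check in CI before `graph deploy` catches a forgotten handler early, though it is no substitute for the CLI's own validation.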
الإدخالات الهامة لتحديث manifest هي: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## معالجات الاستدعاء(Call Handlers) -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### تعريف معالج الاستدعاء @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### دالة الـ Mapping -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## معالجات الكتلة -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### الفلاتر المدعومة @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
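On spec versions that predate the polling filter, a similar every-n-blocks cadence can be approximated inside an unfiltered block handler by checking the block number. The following is a minimal sketch of that modulo check in plain TypeScript; `shouldHandle` and `INTERVAL` are hypothetical names, not Graph CLI constructs.

```typescript
// Approximating a "run every N blocks" cadence inside an unfiltered
// block handler: act only when the block number is a multiple of N.
// Plain TypeScript sketch; shouldHandle and INTERVAL are hypothetical.
const INTERVAL = 10

function shouldHandle(blockNumber: number): boolean {
  return blockNumber % INTERVAL === 0
}

// Which of blocks 0..30 would trigger the handler under this policy:
const triggered: number[] = []
for (let n = 0; n <= 30; n++) {
  if (shouldHandle(n)) triggered.push(n)
}
// triggered: [0, 10, 20, 30]
```

The declarative `every` filter shown below is preferable where the spec version allows it, since the handler is then never invoked for skipped blocks at all.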
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### دالة الـ Mapping -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## كتل البدء -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
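The availability rule above can be sketched in TypeScript (a hypothetical helper for illustration only — not part of graph-node):

```typescript
// Sketch: whether a historical query at `requestedBlock` can still be
// served, given a Subgraph's prune setting. `earliestRetainedBlock` is
// whatever history the indexer still holds (hypothetical inputs).
type PruneSetting = "never" | "auto" | number;

function historyAvailable(
  prune: PruneSetting,
  earliestRetainedBlock: number,
  requestedBlock: number,
): boolean {
  // "never": the full history is kept, so any past block works.
  if (prune === "never") return true;
  // "auto" or a fixed block count: only blocks at or after the earliest
  // retained block can be time-travelled to, grafted from, or rewound to.
  return requestedBlock >= earliestRetainedBlock;
}
```

In other words, once a block falls before the retained window, time travel queries, grafting, and rewinds at that block all stop working at once.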
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| الاصدار | ملاحظات الإصدار | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx index e72d68bef7c8..44c9fedacb10 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: اختبار وحدة Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/).
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and much more. ## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project, open a terminal, navigate to the root folder of your project, and run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 
👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now, in order to run our tests, you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## مصادر إضافية -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a standard `json` file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ...
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
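Conceptually, the templating step is plain placeholder substitution; a minimal TypeScript sketch of the idea (illustrative only, not the actual Mustache implementation) looks like:

```typescript
// Stand-in for the Mustache rendering step: replace {{key}} placeholders
// in a manifest template with values from a per-network config object.
// Names and addresses here are illustrative.
function renderTemplate(
  template: string,
  config: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    key in config ? config[key] : match,
  );
}

const template = "network: {{network}}\naddress: '{{address}}'";
const mainnet = { network: "mainnet", address: "0x1234" };
// → "network: mainnet\naddress: '0x1234'"
const manifest = renderTemplate(template, mainnet);
```

Rendering the same template with a `sepolia` config object yields the second manifest, which is exactly what the per-network build commands automate.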
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever.
However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx index d8880ef1a196..1e0826bfe148 100644 --- a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- إنشاء وإدارة مفاتيح API الخاصة بك لـ subgraphs محددة +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### توافق الـ Subgraph مع شبكة The Graph -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- يجب ألا تستخدم أيًا من الميزات التالية: - - ipfs.cat & ipfs.map - - أخطاء غير فادحة - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## الأرشفة التلقائية لإصدارات الـ Subgraph -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/ar/subgraphs/developing/developer-faq.mdx b/website/src/pages/ar/subgraphs/developing/developer-faq.mdx index f0e9ba0cd865..016a7a8e5a04 100644 --- a/website/src/pages/ar/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/ar/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -يجب عليك إعادة نشر ال الفرعيةرسم بياني ، ولكن إذا لم يتغير الفرعيةرسم بياني (ID (IPFS hash ، فلن يضطر إلى المزامنة من البداية. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?

Yes! Try the following command, replacing "Organization / subgraphName" with your organization and Subgraph name:

@@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... }

### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?

-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.

## Miscellaneous

diff --git a/website/src/pages/ar/subgraphs/developing/introduction.mdx b/website/src/pages/ar/subgraphs/developing/introduction.mdx
index d3b71aaab704..946e62affbe7 100644
--- a/website/src/pages/ar/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/ar/subgraphs/developing/introduction.mdx
@@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin

On The Graph, you can:

-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
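As a sketch of what querying a Subgraph with GraphQL looks like (the entity and field names here are hypothetical — they depend entirely on the Subgraph's `schema.graphql`), a request against a token-tracking Subgraph might be:

```graphql
# Fetch the five largest tokens tracked by a hypothetical Subgraph,
# ordered by total supply (descending).
{
  tokens(first: 5, orderBy: totalSupply, orderDirection: desc) {
    id
    symbol
    totalSupply
  }
}
```

The query is sent as an HTTP POST to the Subgraph's query endpoint; which entities exist and which fields are sortable is defined by the schema the developer publishes.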
diff --git a/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ar/subgraphs/developing/subgraphs.mdx b/website/src/pages/ar/subgraphs/developing/subgraphs.mdx index b52ec5cd2843..b2d94218cd67 100644 --- a/website/src/pages/ar/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ar/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph

-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.

-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:

-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest

-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL

- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema

-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).

## Subgraph Lifecycle

-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:

![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

## Subgraph Development

-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
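When waiting for Indexers to pick up a new version, the `_meta` field that Graph Node exposes on Subgraph query endpoints is a convenient way to check progress (a sketch; exact fields can vary with the graph-node version):

```graphql
# Ask the endpoint which block it has indexed up to,
# and whether the deployment has hit any indexing errors.
{
  _meta {
    deployment
    hasIndexingErrors
    block {
      number
    }
  }
}
```

Comparing `block.number` against the chain head gives a rough measure of how far the new version still has to sync.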
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ar/subgraphs/explorer.mdx b/website/src/pages/ar/subgraphs/explorer.mdx index 512be28e8322..57d7712cc383 100644 --- a/website/src/pages/ar/subgraphs/explorer.mdx +++ b/website/src/pages/ar/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## نظره عامة -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- أشر/الغي الإشارة على Subgraphs +- Signal/Un-signal on Subgraphs - اعرض المزيد من التفاصيل مثل المخططات و ال ID الحالي وبيانات التعريف الأخرى -- بدّل بين الإصدارات وذلك لاستكشاف التكرارات السابقة ل subgraphs -- استعلم عن subgraphs عن طريق GraphQL -- اختبار subgraphs في playground -- اعرض المفهرسين الذين يفهرسون Subgraphs معين +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - إحصائيات subgraphs (المخصصات ، المنسقين ، إلخ) -- اعرض من قام بنشر ال Subgraphs +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 3. المفوضون Delegators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### تبويب ال Subgraphs -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### تبويب الفهرسة -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. هذا القسم سيتضمن أيضا تفاصيل حول صافي مكافآت المفهرس ورسوم الاستعلام الصافي الخاصة بك.
سترى المقاييس التالية: @@ -223,13 +223,13 @@ In the Delegators tab, you can find the details of your active and historical de ### تبويب التنسيق Curating -في علامة التبويب Curation ، ستجد جميع ال subgraphs التي تشير إليها (مما يتيح لك تلقي رسوم الاستعلام). الإشارة تسمح للمنسقين التوضيح للمفهرسين ماهي ال subgraphs ذات الجودة العالية والموثوقة ، مما يشير إلى ضرورة فهرستها. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they should be indexed. ضمن علامة التبويب هذه ، ستجد نظرة عامة حول: -- جميع ال subgraphs التي تقوم بتنسيقها مع تفاصيل الإشارة -- إجمالي الحصة لكل subgraph -- مكافآت الاستعلام لكل subgraph +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - تحديث في تفاصيل التاريخ ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/ar/subgraphs/guides/_meta.js b/website/src/pages/ar/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/ar/subgraphs/guides/_meta.js +++ b/website/src/pages/ar/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/ar/subgraphs/guides/arweave.mdx b/website/src/pages/ar/subgraphs/guides/arweave.mdx index 08e6c4257268..4bb8883b4bd0 100644 --- a/website/src/pages/ar/subgraphs/guides/arweave.mdx +++ b/website/src/pages/ar/subgraphs/guides/arweave.mdx @@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## تعريف Subgraph Manifest The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph,
the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: @@ -92,12 +92,12 @@ Arweave data sources support two types of handlers: - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` > The source.owner can be the owner's address, or their Public Key. - +> > Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. -## Schema Definition +## تعريف المخطط Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). @@ -162,7 +162,7 @@ graph deploy --access-token The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## أمثلة على الـ Subgraphs Here is an example Subgraph for reference: diff --git a/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..84aeda12e0fc 100644 --- a/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. 
-**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## نظره عامة -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,47 +18,59 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. 
Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress @@ -66,11 +82,11 @@ or cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? 
Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/ar/subgraphs/guides/enums.mdx b/website/src/pages/ar/subgraphs/guides/enums.mdx index 9f55ae07c54b..846faecc1706 100644 --- a/website/src/pages/ar/subgraphs/guides/enums.mdx +++ b/website/src/pages/ar/subgraphs/guides/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## مصادر إضافية For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). diff --git a/website/src/pages/ar/subgraphs/guides/grafting.mdx b/website/src/pages/ar/subgraphs/guides/grafting.mdx index d9abe0e70d2a..4b7dad1a54d9 100644 --- a/website/src/pages/ar/subgraphs/guides/grafting.mdx +++ b/website/src/pages/ar/subgraphs/guides/grafting.mdx @@ -10,13 +10,13 @@ Grafting reuses the data from an existing Subgraph and starts indexing it at a l The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. 
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: -- It adds or removes entity types -- It removes attributes from entity types +- يضيف أو يزيل أنواع الكيانات +- يزيل الصفات من أنواع الكيانات - It adds nullable attributes to entity types - It turns non-nullable attributes into nullable attributes - It adds values to enums - It adds or removes interfaces -- It changes for which entity types an interface is implemented +- يغير للكيانات التي يتم تنفيذ الواجهة لها For more information, you can check: @@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h > Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). -## Subgraph Manifest Definition +## تعريف Subgraph Manifest The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: @@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph- Congrats! You have successfully grafted a Subgraph onto another Subgraph. -## Additional Resources +## مصادر إضافية If you want more experience with grafting, here are a few examples for popular contracts: diff --git a/website/src/pages/ar/subgraphs/guides/near.mdx b/website/src/pages/ar/subgraphs/guides/near.mdx index e78a69eb7fa2..04daec8b6ac7 100644 --- a/website/src/pages/ar/subgraphs/guides/near.mdx +++ b/website/src/pages/ar/subgraphs/guides/near.mdx @@ -1,10 +1,10 @@ --- -title: Building Subgraphs on NEAR +title: بناء Subgraphs على NEAR --- This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). -## What is NEAR? 
+## ما هو NEAR؟ [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. @@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- معالجات الكتل(Block handlers): يتم تشغيلها على كل كتلة جديدة +- معالجات الاستلام (Receipt handlers): يتم تشغيلها في كل مرة يتم فيها تنفيذ رسالة على حساب محدد [From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> الاستلام (Receipt) هو الكائن الوحيد القابل للتنفيذ في النظام. عندما نتحدث عن "معالجة الإجراء" على منصة NEAR ، فإن هذا يعني في النهاية "تطبيق الاستلامات" في مرحلة ما. -## Building a NEAR Subgraph +## بناء NEAR Subgraph `@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. @@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -### Subgraph Manifest Definition +### تعريف Subgraph Manifest The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for a NEAR Subgraph: @@ -85,12 +85,12 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +مصادر بيانات NEAR تدعم نوعين من المعالجات: - `blockHandlers`: run on every new NEAR block. No `source.account` is required. - `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). -### Schema Definition +### تعريف المخطط Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). @@ -169,7 +169,7 @@ Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/g This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. -## Deploying a NEAR Subgraph +## نشر NEAR Subgraph Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). @@ -218,19 +218,19 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can ### Indexing NEAR with a Local Graph Node -Running a Graph Node that indexes NEAR has the following operational requirements: +تشغيل Graph Node التي تقوم بفهرسة NEAR لها المتطلبات التشغيلية التالية: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- NEAR Indexer Framework مع أجهزة Firehose +- مكونات NEAR Firehose +- تكوين Graph Node مع Firehose endpoint -We will provide more information on running the above components soon. +سوف نقدم المزيد من المعلومات حول تشغيل المكونات أعلاه قريبًا. -## Querying a NEAR Subgraph +## الاستعلام عن NEAR Subgraph The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## أمثلة على الـ Subgraphs Here are some example Subgraphs for reference: @@ -250,7 +250,7 @@ No, a Subgraph can only support data sources from one chain/network. ### Can Subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +حاليًا ، يتم دعم مشغلات الكتلة(Block) والاستلام(Receipt). نحن نبحث في مشغلات استدعاءات الدوال لحساب محدد. نحن مهتمون أيضًا بدعم مشغلات الأحداث ، بمجرد حصول NEAR على دعم محلي للأحداث. ### Will receipt handlers trigger for accounts and their sub-accounts? @@ -264,11 +264,11 @@ accounts: ### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? -This is not supported. We are evaluating whether this functionality is required for indexing. +هذا غير مدعوم. نحن بصدد تقييم ما إذا كانت هذه الميزة مطلوبة للفهرسة. ### Can I use data source templates in my NEAR Subgraph? -This is not currently supported. 
We are evaluating whether this functionality is required for indexing. +هذا غير مدعوم حاليا. نحن بصدد تقييم ما إذا كانت هذه الميزة مطلوبة للفهرسة. ### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? @@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. In the interim, y If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. -## References +## المراجع - [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..21ac0b74d31d 100644 --- a/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## نظره عامة We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). 
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..080de99b5ba1 --- /dev/null +++ b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## مقدمة + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. 
+ +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## Get Started + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## مصادر إضافية + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..364fb8ce4d9c 100644 --- a/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx @@ -4,19 +4,19 @@ title: Quick and Easy Subgraph Debugging Using Forks As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! -## Ok, what is it? +## حسنا، ما هو؟ **Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. -## What?! How? +## ماذا؟! كيف؟ When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. -## Please, show me some code! +## من فضلك ، أرني بعض الأكواد! 
To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. @@ -46,31 +46,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. -The usual way to attempt a fix is: +الطريقة المعتادة لمحاولة الإصلاح هي: -1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). +1. إجراء تغيير في مصدر الـ mappings ، والذي تعتقد أنه سيحل المشكلة (وأنا أعلم أنه لن يحلها). 2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +3. الانتظار حتى تتم المزامنة. +4. إذا حدثت المشكلة مرة أخرى ، فارجع إلى 1! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue. +1. قم بإجراء تغيير في مصدر الـ mappings ، والذي تعتقد أنه سيحل المشكلة. 2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +3. إذا حدثت المشكلة مرة أخرى ، فارجع إلى 1! -Now, you may have 2 questions: +الآن ، قد يكون لديك سؤالان: -1. fork-base what??? -2. Forking who?! +1. ماهو fork-base؟؟؟ +2. ما الذي نقوم بتفريعه (Forking)؟! -And I answer: +وأنا أجيب: 1. 
`fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +2. الـتفريع سهل ، فلا داعي للقلق: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 @@ -78,7 +78,7 @@ $ graph deploy --debug-fork --ipfs http://localhos Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! -So, here is what I do: +لذلك ، هذا ما أفعله: 1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). diff --git a/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..4be3dcedffe8 100644 --- a/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. 
[Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### مصادر إضافية - To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). - To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ar/subgraphs/querying/best-practices.mdx b/website/src/pages/ar/subgraphs/querying/best-practices.mdx index 23dcd2cb8920..f469ff02de9c 100644 --- a/website/src/pages/ar/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/ar/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: أفضل الممارسات للاستعلام The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. 
--- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - نتيجة مكتوبة بالكامل @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ar/subgraphs/querying/from-an-application.mdx b/website/src/pages/ar/subgraphs/querying/from-an-application.mdx index 767a2caa9021..08c71fa4ad1f 100644 --- a/website/src/pages/ar/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ar/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: الاستعلام من التطبيق +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
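Illustrating the best-practices hunk above: the single-query pattern with plural entities and an `id_in` filter might look like the sketch below (the entity and field names are hypothetical, not from a real schema).

```graphql
# One query for several records, instead of one query per id.
{
  tokens(where: { id_in: ["0xa...", "0xb...", "0xc..."] }) {
    id
    symbol
    volume
  }
}
```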
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
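As a minimal sketch of hitting the network endpoint above with a plain HTTP client: the helper below only builds the URL and JSON body (the API key and Subgraph ID are placeholders, and the actual send — e.g. `requests.post(url, data=body)` — is left out).

```python
import json

# Placeholder gateway URL template; substitute your own API key and Subgraph ID.
GATEWAY_TEMPLATE = "https://gateway.thegraph.com/api/{api_key}/subgraphs/id/{subgraph_id}"

def build_request(api_key: str, subgraph_id: str, query: str):
    """Return the gateway URL and the JSON body for a POST GraphQL query."""
    url = GATEWAY_TEMPLATE.format(api_key=api_key, subgraph_id=subgraph_id)
    body = json.dumps({"query": query})
    return url, body

url, body = build_request("MY_API_KEY", "SUBGRAPH_ID", "{ _meta { block { number } } }")
```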
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - نتيجة مكتوبة بالكامل @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Step 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Step 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Step 1 diff --git a/website/src/pages/ar/subgraphs/querying/graph-client/README.md b/website/src/pages/ar/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/ar/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ar/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx index d73381f88a7d..14e11ff80306 100644 --- a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
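For instance, if the schema defines a `Token` entity (a hypothetical name for illustration), the generated singular and plural fields can be queried like this:

```graphql
{
  token(id: "0xabc") {
    id
  }
  tokens(first: 5, orderBy: id) {
    id
  }
}
```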
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. 
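As a sketch, a fulltext query against a hypothetical `blogSearch` field (assumed to be defined via `@fulltext` in the schema), combining terms with the `&` (and) operator, might look like:

```graphql
{
  blogSearch(text: "graph & indexing") {
    id
    title
  }
}
```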
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. 
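For reference, a `_meta` query pinned to a specific block might look like the following (the block number is illustrative):

```graphql
{
  _meta(block: { number: 123456 }) {
    deployment
    hasIndexingErrors
    block {
      hash
      number
      timestamp
    }
  }
}
```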
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ar/subgraphs/querying/introduction.mdx b/website/src/pages/ar/subgraphs/querying/introduction.mdx index 281957e11e14..bdd0bde88865 100644 --- a/website/src/pages/ar/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ar/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## نظره عامة -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx index 33e9d7b78fc2..7b91a147ef47 100644 --- a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## نظره عامة -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - كمية GRT التي تم صرفها 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - عرض وإدارة أسماء النطاقات المصرح لها باستخدام مفتاح API الخاص بك - - تعيين الـ subgraphs التي يمكن الاستعلام عنها باستخدام مفتاح API الخاص بك + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ar/subgraphs/querying/python.mdx b/website/src/pages/ar/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/ar/subgraphs/querying/python.mdx +++ b/website/src/pages/ar/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. 
@@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. 
+When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. 
Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/ar/subgraphs/quick-start.mdx b/website/src/pages/ar/subgraphs/quick-start.mdx index 42f4acf08df9..9b7bf860e87d 100644 --- a/website/src/pages/ar/subgraphs/quick-start.mdx +++ b/website/src/pages/ar/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: بداية سريعة --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. 
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> يمكنك العثور على الأوامر المتعلقة بالغراف الفرعي الخاص بك على صفحة الغراف الفرعي في (سبغراف استوديو) (https://thegraph.com/studio). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
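As a sketch of a fuller `graph init` invocation (the flag names follow common graph-cli usage but may differ across CLI versions; the address, slug, and directory are placeholders):

```sh
graph init \
  --from-contract 0x0000000000000000000000000000000000000000 \
  --network mainnet \
  my-org/my-subgraph ./my-subgraph
```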
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-يرجى مراجعة الصورة المرفقة كمثال عن ما يمكن توقعه عند تهيئة غرافك الفرعي: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network.
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -عند كتابة غرافك الفرعي، قم بتنفيذ الأوامر التالية: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/ar/substreams/developing/dev-container.mdx b/website/src/pages/ar/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/ar/substreams/developing/dev-container.mdx +++ b/website/src/pages/ar/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/ar/substreams/developing/sinks.mdx b/website/src/pages/ar/substreams/developing/sinks.mdx index 8a3a2eda4ff0..34d2f8624e7d 100644 --- a/website/src/pages/ar/substreams/developing/sinks.mdx +++ b/website/src/pages/ar/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/ar/substreams/developing/solana/account-changes.mdx b/website/src/pages/ar/substreams/developing/solana/account-changes.mdx index 3e13301b042c..704443dee771 100644 --- a/website/src/pages/ar/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/ar/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/ar/substreams/developing/solana/transactions.mdx b/website/src/pages/ar/substreams/developing/solana/transactions.mdx index b1b97cdcbfe5..ebdeeb98a931 100644 --- a/website/src/pages/ar/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ar/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. 
### SQL diff --git a/website/src/pages/ar/substreams/introduction.mdx b/website/src/pages/ar/substreams/introduction.mdx index 774c2dfb90c2..ffb3f46baa62 100644 --- a/website/src/pages/ar/substreams/introduction.mdx +++ b/website/src/pages/ar/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/ar/substreams/publishing.mdx b/website/src/pages/ar/substreams/publishing.mdx index 0d3b7933820e..8ee05b0eda53 100644 --- a/website/src/pages/ar/substreams/publishing.mdx +++ b/website/src/pages/ar/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! 
You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/ar/supported-networks.mdx b/website/src/pages/ar/supported-networks.mdx index 09e56bdeb0c2..ac7050638264 100644 --- a/website/src/pages/ar/supported-networks.mdx +++ b/website/src/pages/ar/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. 
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/ar/token-api/_meta-titles.json b/website/src/pages/ar/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/ar/token-api/_meta-titles.json +++ b/website/src/pages/ar/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/ar/token-api/_meta.js b/website/src/pages/ar/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/ar/token-api/_meta.js +++ b/website/src/pages/ar/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/ar/token-api/faq.mdx b/website/src/pages/ar/token-api/faq.mdx new file mode 100644 index 000000000000..8c1032894ddb --- /dev/null +++ b/website/src/pages/ar/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## عام + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? 
+ +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token.
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error.
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/ar/token-api/mcp/claude.mdx b/website/src/pages/ar/token-api/mcp/claude.mdx index 0da8f2be031d..12a036b6fc24 100644 --- a/website/src/pages/ar/token-api/mcp/claude.mdx +++ b/website/src/pages/ar/token-api/mcp/claude.mdx @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/ar/token-api/mcp/cline.mdx b/website/src/pages/ar/token-api/mcp/cline.mdx index ab54c0c8f6f0..ef98e45939fe 100644 --- a/website/src/pages/ar/token-api/mcp/cline.mdx +++ b/website/src/pages/ar/token-api/mcp/cline.mdx @@ -10,7 +10,7 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) ## Configuration diff --git a/website/src/pages/ar/token-api/quick-start.mdx b/website/src/pages/ar/token-api/quick-start.mdx index 4653c3d41ac6..c5fa07fa9371 100644 --- a/website/src/pages/ar/token-api/quick-start.mdx +++ b/website/src/pages/ar/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: بداية سريعة --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/cs/about.mdx b/website/src/pages/cs/about.mdx index 256519660a73..1f43c663437f 100644 --- a/website/src/pages/cs/about.mdx +++ b/website/src/pages/cs/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. 
Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.

### How The Graph Functions

-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL.

#### Specifics

-- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph.
+- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.

-- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
+- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.

-- When creating a subgraph, you need to write a subgraph manifest.
+- When creating a Subgraph, you need to write a Subgraph manifest.

-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph.
+- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![Grafu vysvětlující, jak Graf používá Uzel grafu k doručování dotazů konzumentům dat](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Průběh se řídí těmito kroky: 1. Dapp přidává data do Ethereum prostřednictvím transakce na chytrém kontraktu. 2. Chytrý smlouva vysílá při zpracování transakce jednu nebo více událostí. -3. Uzel grafu neustále vyhledává nové bloky Ethereum a data pro váš podgraf, která mohou obsahovat. -4. Uzel grafu v těchto blocích vyhledá události Etherea pro váš podgraf a spustí vámi zadané mapovací obsluhy. Mapování je modul WASM, který vytváří nebo aktualizuje datové entity, které Uzel grafu ukládá v reakci na události Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Aplikace dapp se dotazuje grafického uzlu na data indexovaná z blockchainu pomocí [GraphQL endpoint](https://graphql.org/learn/). Uzel Grafu zase překládá dotazy GraphQL na dotazy pro své podkladové datové úložiště, aby tato data načetl, přičemž využívá indexovací schopnosti úložiště. Dapp tato data zobrazuje v bohatém UI pro koncové uživatele, kteří je používají k vydávání nových transakcí na platformě Ethereum. Cyklus se opakuje. ## Další kroky -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. 
-Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx index 050d1a0641aa..df47adfff704 100644 --- a/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Zabezpečení zděděné po Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. 
Komunita Graf se v loňském roce rozhodla pokračovat v Arbitrum po výsledku diskuze [GIP-0031] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ Pro využití výhod používání a Graf na L2 použijte rozevírací přepína ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Jako vývojář podgrafů, Spotřebitel dat, indexer, kurátor, nebo delegátor, co mám nyní udělat? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ Všechny chytré smlouvy byly důkladně [auditovány](https://github.com/graphp Vše bylo důkladně otestováno, a je připraven pohotovostní plán, který zajistí bezpečný a bezproblémový přechod. Podrobnosti naleznete [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? 
diff --git a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx index 88e1d9e632a2..439e83f3864b 100644 --- a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ Výjimkou jsou peněženky s chytrými smlouvami, jako je multisigs: jedná se o Nástroje pro přenos L2 používají k odesílání zpráv z L1 do L2 nativní mechanismus Arbitrum. Tento mechanismus se nazývá 'retryable ticket,' a všechny nativní tokenové můstky, včetně můstku Arbitrum GRT, ho používají. Další informace o opakovatelných ticketch naleznete v části [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Při přenosu aktiv (podgraf, podíl, delegace nebo kurátorství) do L2 se odešle zpráva přes můstek Arbitrum GRT, která vytvoří opakovatelný tiket v L2. Nástroj pro převod zahrnuje v transakci určitou hodnotu ETH, která se použije na 1) zaplacení vytvoření tiketu a 2) zaplacení plynu pro provedení tiketu v L2. Se však ceny plynu mohou v době, než je ticket připraven k provedení v režimu L2, měnit. Je možné, že se tento pokus o automatické provedení nezdaří. Když se tak stane, most Arbitrum udrží opakovatelný tiket naživu až 7 dní a kdokoli se může pokusit o jeho "vykoupení" (což vyžaduje peněženku s určitým množstvím ETH propojenou s mostem Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. 
When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum).

-Tomuto kroku říkáme 'Potvrzení' ve všech nástrojích pro přenos - ve většině případů se spustí automaticky, protože automatické provedení je většinou úspěšné, ale je důležité, abyste se ujistili, že proběhlo. Pokud se to nepodaří a během 7 dnů nedojde k žádnému úspěšnému opakování, můstek Arbitrum tiket zahodí a vaše aktiva (podgraf, podíl, delegace nebo kurátorství) budou ztracena a nebude možné je obnovit. Vývojáři The Graph jádra mají k dispozici monitorovací systém, který tyto situace odhaluje a snaží se lístky uplatnit dříve, než bude pozdě, ale v konečném důsledku je vaší odpovědností zajistit, aby byl váš přenos dokončen včas. Pokud máte potíže s potvrzením transakce, obraťte se na nás pomocí [tohoto formuláře](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) a hlavní vývojáři vám pomohou.
+This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.
### Zahájil jsem přenos delegace/podílů/kurátorství a nejsem si jistý, zda se to dostalo do L2. Jak mohu potvrdit, že to bylo přeneseno správně? @@ -36,43 +36,43 @@ Pokud máte k dispozici hash transakce L1 (který zjistíte, když se podíváte ## Podgraf přenos -### Jak mohu přenést svůj podgraf? +### How do I transfer my Subgraph? -Chcete-li přenést svůj podgraf, musíte provést následující kroky: +To transfer your Subgraph, you will need to complete the following steps: 1. Zahájení převodu v mainnet Ethereum 2. Počkejte 20 minut na potvrzení -3. Potvrzení přenosu podgrafů na Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Úplné zveřejnění podgrafu na arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Aktualizovat adresu URL dotazu (doporučeno) -\*Upozorňujeme, že převod musíte potvrdit do 7 dnů, jinak může dojít ke ztrátě vašeho podgrafu. Ve většině případů se tento krok provede automaticky, ale v případě prudkého nárůstu cen plynu na Arbitru může být nutné ruční potvrzení. Pokud se během tohoto procesu vyskytnou nějaké problémy, budou k dispozici zdroje, které vám pomohou: kontaktujte podporu na adrese support@thegraph.com nebo na [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Odkud mám iniciovat převod? -Přenos můžete zahájit v [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) nebo na libovolné stránce s detaily subgrafu. "Kliknutím na tlačítko 'Transfer Subgraph' na stránce s podrobnostmi o podgrafu zahájíte přenos. 
+You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Jak dlouho musím čekat, než bude můj podgraf přenesen +### How long do I need to wait until my Subgraph is transferred Přenos trvá přibližně 20 minut. Most Arbitrum pracuje na pozadí a automaticky dokončí přenos mostu. V některých případech může dojít ke zvýšení nákladů na plyn a transakci bude nutné potvrdit znovu. -### Bude můj podgraf zjistitelný i poté, co jej přenesu do L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Váš podgraf bude zjistitelný pouze v síti, ve které je publikován. Pokud se například váš subgraf nachází na Arbitrum One, pak jej najdete pouze v Průzkumníku na Arbitrum One a na Ethereum jej nenajdete. Ujistěte se, že máte v přepínači sítí v horní části stránky vybranou možnost Arbitrum One, abyste se ujistili, že jste ve správné síti. Po přenosu se podgraf L1 zobrazí jako zastaralý. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Musí být můj podgraf zveřejněn, abych ho mohl přenést? +### Does my Subgraph need to be published to transfer it? -Abyste mohli využít nástroj pro přenos subgrafů, musí být váš subgraf již zveřejněn v mainnet Ethereum a musí mít nějaký kurátorský signál vlastněný peněženkou, která subgraf vlastní. Pokud váš subgraf není zveřejněn, doporučujeme vám jednoduše publikovat přímo na Arbitrum One - související poplatky za plyn budou podstatně nižší. 
Pokud chcete přenést publikovaný podgraf, ale účet vlastníka na něm nemá kurátorský signál, můžete z tohoto účtu signalizovat malou částku (např. 1 GRT); nezapomeňte zvolit "auto-migrating" signál. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Co se stane s verzí mého subgrafu na ethereum mainnet po převodu na Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Po převedení vašeho subgrafu na Arbitrum bude verze mainnet Ethereum zastaralá. Doporučujeme vám aktualizovat adresu URL dotazu do 48 hodin. Je však zavedena ochranná lhůta, která udržuje adresu URL mainnet funkční, aby bylo možné aktualizovat podporu dapp třetích stran. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Musím po převodu také znovu publikovat na Arbitrum? @@ -80,21 +80,21 @@ Po uplynutí 20minutového okna pro převod budete muset převod potvrdit transa ### Dojde při opětovném publikování k výpadku mého koncového bodu? -Je nepravděpodobné, ale je možné, že dojde ke krátkému výpadku v závislosti na tom, které indexátory podporují podgraf na L1 a zda jej indexují, dokud není podgraf plně podporován na L2. 
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Je publikování a verzování na L2 stejné jako na mainnet Ethereum Ethereum? -Ano. Při publikování v aplikaci Subgraph Studio vyberte jako publikovanou síť Arbitrum One. Ve Studiu bude k dispozici nejnovější koncový bod, který odkazuje na nejnovější aktualizovanou verzi podgrafu. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Bude se kurátorství mého podgrafu pohybovat spolu s mým podgrafem? +### Will my Subgraph's curation move with my Subgraph? -Pokud jste zvolili automatickou migraci signálu, 100 % vaší vlastní kurátorství se přesune spolu s vaším subgrafem do Arbitrum One. Veškerý signál kurátorství podgrafu bude v okamžiku převodu převeden na GRT a GRT odpovídající vašemu signálu kurátorství bude použit k ražbě signálu na podgrafu L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Ostatní kurátoři se mohou rozhodnout, zda stáhnou svou část GRT, nebo ji také převedou na L2, aby vyrazili signál na stejném podgraf. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Mohu svůj subgraf po převodu přesunout zpět do mainnet Ethereum? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Po přenosu bude vaše verze tohoto podgrafu v síti Ethereum mainnet zneplatněna. 
Pokud se chcete přesunout zpět do mainnetu, musíte provést nové nasazení a publikovat zpět do mainnet. Převod zpět do mainnetu Etherea se však důrazně nedoporučuje, protože odměny za indexování budou nakonec distribuovány výhradně na Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Proč potřebuji k dokončení převodu překlenovací ETH? @@ -206,19 +206,19 @@ Chcete-li přenést své kurátorství, musíte provést následující kroky: \*Pokud je to nutné - tj. používáte smluvní adresu. -### Jak se dozvím, že se mnou kurátorovaný podgraf přesunul do L2? +### How will I know if the Subgraph I curated has moved to L2? -Při zobrazení stránky s podrobnostmi podgrafu se zobrazí banner s upozorněním, že tento podgraf byl přenesen. Můžete následovat výzvu k přenosu kurátorství. Tyto informace najdete také na stránce s podrobnostmi o podgrafu, který se přesunul. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Co když si nepřeji přesunout své kurátorství do L2? -Pokud je podgraf vyřazen, máte možnost stáhnout svůj signál. Stejně tak pokud se podgraf přesunul do L2, můžete si vybrat, zda chcete stáhnout svůj signál v mainnet Ethereum, nebo signál poslat do L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Jak poznám, že se moje kurátorství úspěšně přeneslo? 
Podrobnosti o signálu budou k dispozici prostřednictvím Průzkumníka přibližně 20 minut po spuštění nástroje pro přenos L2. -### Mohu přenést své kurátorství na více než jeden podgraf najednou? +### Can I transfer my curation on more than one Subgraph at a time? V současné době není k dispozici možnost hromadného přenosu. @@ -266,7 +266,7 @@ Nástroj pro převod L2 dokončí převod vašeho podílu přibližně za 20 min ### Musím před převodem svého podílu indexovat na Arbitrum? -Před nastavením indexování můžete nejprve efektivně převést svůj podíl, ale nebudete si moci nárokovat žádné odměny na L2, dokud nepřidělíte podgrafy na L2, neindexujete je a nepředložíte POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Mohou delegáti přesunout svou delegaci dříve, než přesunu svůj indexovací podíl? diff --git a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx index 69717e46ed39..94b78981db6b 100644 --- a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ Graph usnadnil přechod na úroveň L2 v Arbitrum One. Pro každého účastník Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Jak přenést podgraf do Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Výhody přenosu podgrafů +## Benefits of transferring your Subgraphs Komunita a hlavní vývojáři Graphu se v uplynulém roce [připravovali](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) na přechod na Arbitrum. 
Arbitrum, blockchain druhé vrstvy neboli "L2", zdědil bezpečnost po Ethereum, ale poskytuje výrazně nižší poplatky za plyn. -Když publikujete nebo aktualizujete svůj subgraf v síti The Graph Network, komunikujete s chytrými smlouvami na protokolu, což vyžaduje platbu za plyn pomocí ETH. Přesunutím subgrafů do Arbitrum budou veškeré budoucí aktualizace subgrafů vyžadovat mnohem nižší poplatky za plyn. Nižší poplatky a skutečnost, že křivky vazby kurátorů na L2 jsou ploché, také usnadňují ostatním kurátorům kurátorství na vašem podgrafu, což zvyšuje odměny pro indexátory na vašem podgrafu. Toto prostředí s nižšími náklady také zlevňuje indexování a obsluhu subgrafu pro indexátory. Odměny za indexování se budou v následujících měsících na Arbitrum zvyšovat a na mainnetu Ethereum snižovat, takže stále více indexerů bude převádět své podíly a zakládat své operace na L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Porozumění tomu, co se děje se signálem, podgrafem L1 a adresami URL dotazů +## Understanding what happens with signal, your L1 Subgraph and query URLs -Při přenosu podgrafu do Arbitrum se používá můstek Arbitrum GRT, který zase používá nativní můstek Arbitrum k odeslání podgrafu do L2. 
Při "přenosu" se subgraf v mainnetu znehodnotí a odešlou se informace pro opětovné vytvoření subgrafu v L2 pomocí mostu. Zahrnuje také GRT vlastníka podgrafu, který již byl signalizován a který musí být větší než nula, aby most převod přijal. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Pokud zvolíte převod podgrafu, převede se veškerý signál kurátoru podgrafu na GRT. To je ekvivalentní "znehodnocení" podgrafu v síti mainnet. GRT odpovídající vašemu kurátorství budou spolu s podgrafem odeslány na L2, kde budou vaším jménem použity k ražbě signálu. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Ostatní kurátoři se mohou rozhodnout, zda si stáhnou svůj podíl GRT, nebo jej také převedou na L2, aby na stejném podgrafu vyrazili signál. Pokud vlastník podgrafu nepřevede svůj podgraf na L2 a ručně jej znehodnotí prostřednictvím volání smlouvy, pak budou Kurátoři upozorněni a budou moci stáhnout svou kurátorskou funkci. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. 
-Jakmile je podgraf převeden, protože veškerá kurátorská činnost je převedena na GRT, indexátoři již nebudou dostávat odměny za indexování podgrafu. Budou však existovat indexátory, které 1) budou obsluhovat převedené podgrafy po dobu 24 hodin a 2) okamžitě začnou indexovat podgraf na L2. Protože tyto Indexery již mají podgraf zaindexovaný, nemělo by být nutné čekat na synchronizaci podgrafu a bude možné se na podgraf na L2 dotazovat téměř okamžitě. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Dotazy do podgrafu L2 bude nutné zadávat na jinou adresu URL (na `arbitrum-gateway.thegraph.com`), ale adresa URL L1 bude fungovat nejméně 48 hodin. Poté bude brána L1 přeposílat dotazy na bránu L2 (po určitou dobu), což však zvýší latenci, takže se doporučuje co nejdříve přepnout všechny dotazy na novou adresu URL. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Výběr peněženky L2 -Když jste publikovali svůj podgraf na hlavní síti (mainnet), použili jste připojenou peněženku, která vlastní NFT reprezentující tento podgraf a umožňuje vám publikovat aktualizace. 
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Při přenosu podgrafu do Arbitrum si můžete vybrat jinou peněženku, která bude vlastnit tento podgraf NFT na L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Pokud používáte "obyčejnou" peněženku, jako je MetaMask (externě vlastněný účet nebo EOA, tj. peněženka, která není chytrým kontraktem), pak je to volitelné a doporučuje se zachovat stejnou adresu vlastníka jako v L1. -Pokud používáte peněženku s chytrým kontraktem, jako je multisig (např. Trezor), pak je nutné zvolit jinou adresu peněženky L2, protože je pravděpodobné, že tento účet existuje pouze v mainnetu a nebudete moci provádět transakce na Arbitrum pomocí této peněženky. Pokud chcete i nadále používat peněženku s chytrým kontraktem nebo multisig, vytvořte si na Arbitrum novou peněženku a její adresu použijte jako vlastníka L2 svého subgrafu. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Je velmi důležité používat adresu peněženky, kterou máte pod kontrolou a která může provádět transakce na Arbitrum. V opačném případě bude podgraf ztracen a nebude možné jej obnovit.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. 
Otherwise, the Subgraph will be lost and cannot be recovered.** ## Příprava na převod: přemostění některých ETH -Přenos podgrafu zahrnuje odeslání transakce přes můstek a následné provedení další transakce na Arbitrum. První transakce využívá ETH na mainnetu a obsahuje nějaké ETH na zaplacení plynu, když je zpráva přijata na L2. Pokud však tento plyn nestačí, je třeba transakci zopakovat a zaplatit za plyn přímo na L2 (to je 'Krok 3: Potvrzení převodu' níže). Tento krok musí být proveden do 7 dnů od zahájení převodu\*\*. Druhá transakce ('Krok 4: Dokončení převodu na L2') bude navíc provedena přímo na Arbitrum. Z těchto důvodů budete potřebovat nějaké ETH na peněžence Arbitrum. Pokud používáte multisig nebo smart contract účet, ETH bude muset být v běžné peněžence (EOA), kterou používáte k provádění transakcí, nikoli na samotné multisig peněžence. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. ETH si můžete koupit na některých burzách a vybrat přímo na Arbitrum, nebo můžete použít most Arbitrum a poslat ETH z peněženky mainnetu na L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Vzhledem k tomu, že poplatky za plyn na Arbitrum jsou nižší, mělo by vám stačit jen malé množství. 
Doporučujeme začít na nízkém prahu (např. 0.01 ETH), aby byla vaše transakce schválena. -## Hledání nástroje pro přenos podgrafu +## Finding the Subgraph Transfer Tool -Nástroj pro přenos L2 najdete při prohlížení stránky svého podgrafu v aplikaci Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -Je k dispozici také v Průzkumníku, pokud jste připojeni k peněžence, která vlastní podgraf, a na stránce tohoto podgrafu v Průzkumníku: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Kliknutím na tlačítko Přenést na L2 otevřete nástroj pro přenos, kde mů ## Krok 1: Zahájení přenosu -Před zahájením převodu se musíte rozhodnout, která adresa bude vlastnit podgraf na L2 (viz výše "Výběr peněženky L2"), a důrazně doporučujeme mít na Arbitrum již přemostěné ETH pro plyn (viz výše "Příprava na převod: přemostění některých ETH"). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Vezměte prosím na vědomí, že přenos podgrafu vyžaduje nenulové množství signálu na podgrafu se stejným účtem, který vlastní podgraf; pokud jste na podgrafu nesignalizovali, budete muset přidat trochu kurátorství (stačí přidat malé množství, například 1 GRT). +Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph, you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Po otevření nástroje Transfer Tool budete moci do pole "Receiving wallet address" zadat adresu peněženky L2 - **ujistěte se, že jste zadali správnou adresu**. Kliknutím na Transfer Subgraph budete vyzváni k provedení transakce na vaší peněžence (všimněte si, že je zahrnuta určitá hodnota ETH, abyste zaplatili za plyn L2); tím se zahájí přenos a znehodnotí váš subgraf L1 (více podrobností o tom, co se děje v zákulisí, najdete výše v části "Porozumění tomu, co se děje se signálem, vaším subgrafem L1 a URL dotazů"). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -Pokud tento krok provedete, ujistěte se, že jste pokračovali až do dokončení kroku 3 za méně než 7 dní, jinak se podgraf a váš signál GRT ztratí. To je způsobeno tím, jak funguje zasílání zpráv L1-L2 na Arbitrum: zprávy, které jsou zasílány přes most, jsou "Opakovatelný tiket", které musí být provedeny do 7 dní, a počáteční provedení může vyžadovat opakování, pokud dojde ke skokům v ceně plynu na Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.
![Start the transfer to L2](/img/startTransferL2.png) -## Krok 2: Čekání, až se podgraf dostane do L2 +## Step 2: Waiting for the Subgraph to get to L2 -Po zahájení přenosu se musí zpráva, která odesílá podgraf L1 do L2, šířit přes můstek Arbitrum. To trvá přibližně 20 minut (můstek čeká, až bude blok mainnetu obsahující transakci "bezpečný" před případnými reorgy řetězce). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Po uplynutí této čekací doby se Arbitrum pokusí o automatické provedení přenosu na základě smluv L2. @@ -80,7 +80,7 @@ Po uplynutí této čekací doby se Arbitrum pokusí o automatické provedení p ## Krok 3: Potvrzení převodu -Ve většině případů se tento krok provede automaticky, protože plyn L2 obsažený v kroku 1 by měl stačit k provedení transakce, která přijímá podgraf na smlouvách Arbitrum. V některých případech je však možné, že prudký nárůst cen plynu na Arbitrum způsobí selhání tohoto automatického provedení. V takovém případě bude "ticket", který odešle subgraf na L2, čekat na vyřízení a bude vyžadovat opakování pokusu do 7 dnů. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. V takovém případě se musíte připojit pomocí peněženky L2, která má nějaké ETH na Arbitrum, přepnout síť peněženky na Arbitrum a kliknutím na "Confirm Transfer" zopakovat transakci. 
@@ -88,33 +88,33 @@ V takovém případě se musíte připojit pomocí peněženky L2, která má n ## Krok 4: Dokončení přenosu na L2 -V tuto chvíli byly váš podgraf a GRT přijaty na Arbitrum, ale podgraf ještě není zveřejněn. Budete se muset připojit pomocí peněženky L2, kterou jste si vybrali jako přijímající peněženku, přepnout síť peněženky na Arbitrum a kliknout na "Publikovat subgraf" +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Tím se podgraf zveřejní, aby jej mohly začít obsluhovat indexery pracující na Arbitrum. Rovněž bude zminován kurátorský signál pomocí GRT, které byly přeneseny z L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that was transferred from L1. ## Krok 5: Aktualizace URL dotazu -Váš podgraf byl úspěšně přenesen do Arbitrum! Chcete-li se na podgraf zeptat, nová URL bude: +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Všimněte si, že ID podgrafu v Arbitrum bude jiné než to, které jste měli v mainnetu, ale vždy ho můžete najít v Průzkumníku nebo Studiu.
Jak je uvedeno výše (viz "Pochopení toho, co se děje se signálem, vaším subgrafem L1 a URL dotazů"), stará URL adresa L1 bude po krátkou dobu podporována, ale jakmile bude subgraf synchronizován na L2, měli byste své dotazy přepnout na novou adresu. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## Jak přenést kurátorství do služby Arbitrum (L2) -## Porozumění tomu, co se děje s kurátorstvím při přenosu podgrafů do L2 +## Understanding what happens to curation on Subgraph transfers to L2 -Když vlastník podgrafu převede podgraf do Arbitrum, je veškerý signál podgrafu současně převeden na GRT. To se týká "automaticky migrovaného" signálu, tj. signálu, který není specifický pro verzi podgrafu nebo nasazení, ale který následuje nejnovější verzi podgrafu. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Tento převod ze signálu na GRT je stejný, jako kdyby vlastník podgrafu zrušil podgraf v L1. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1.
When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Část těchto GRT odpovídající vlastníkovi podgrafu je odeslána do L2 spolu s podgrafem. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -V tomto okamžiku se za kurátorský GRT již nebudou účtovat žádné poplatky za dotazování, takže kurátoři se mohou rozhodnout, zda svůj GRT stáhnou, nebo jej přenesou do stejného podgrafu na L2, kde může být použit k ražbě nového kurátorského signálu. S tímto úkonem není třeba spěchat, protože GRT lze pomáhat donekonečna a každý dostane částku úměrnou svému podílu bez ohledu na to, kdy tak učiní. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Výběr peněženky L2 @@ -130,9 +130,9 @@ Pokud používáte peněženku s chytrým kontraktem, jako je multisig (např. T Před zahájením převodu se musíte rozhodnout, která adresa bude vlastnit kurátorství na L2 (viz výše "Výběr peněženky L2"), a doporučujeme mít nějaké ETH pro plyn již přemostěné na Arbitrum pro případ, že byste potřebovali zopakovat provedení zprávy na L2.
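As a rough illustration of the proportional claim described above, each Curator's claimable GRT is simply their fraction of the Subgraph's shares applied to the GRT held by the GNS contract after the burn. The sketch below uses made-up numbers and is illustrative only, not protocol code:

```python
# Illustrative sketch only: when all curation signal is burned, each
# Curator can claim GRT proportional to the shares they held on the
# Subgraph. Numbers below are hypothetical.

def grt_claim(curator_shares: float, total_shares: float, grt_held_by_gns: float) -> float:
    """GRT claimable by a Curator, proportional to their share of signal."""
    return grt_held_by_gns * curator_shares / total_shares

# e.g. a Curator holding 250 of 1000 shares, with 4000 GRT held by GNS:
claim = grt_claim(250, 1000, 4000)  # 1000.0 GRT
```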
ETH můžete nakoupit na některých burzách a vybrat si ho přímo na Arbitrum, nebo můžete použít Arbitrum bridge pro odeslání ETH z peněženky mainnetu na L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - protože poplatky za plyn na Arbitrum jsou tak nízké, mělo by vám stačit jen malé množství, např. 0,01 ETH bude pravděpodobně více než dostačující. -Pokud byl podgraf, do kterého kurátor provádí kurátorství, převeden do L2, zobrazí se v Průzkumníku zpráva, že kurátorství provádíte do převedeného podgrafu. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -Při pohledu na stránku podgrafu můžete zvolit stažení nebo přenos kurátorství. Kliknutím na "Přenést signál do Arbitrum" otevřete nástroj pro přenos. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ V takovém případě se musíte připojit pomocí peněženky L2, která má n ## Odstranění vašeho kurátorství na L1 -Pokud nechcete posílat GRT na L2 nebo byste raději překlenuli GRT ručně, můžete si na L1 stáhnout svůj kurátorovaný GRT. Na banneru na stránce podgrafu zvolte "Withdraw Signal" a potvrďte transakci; GRT bude odeslán na vaši adresu kurátora. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
diff --git a/website/src/pages/cs/archived/sunrise.mdx b/website/src/pages/cs/archived/sunrise.mdx index 71b86ac159ff..52e8c90d7708 100644 --- a/website/src/pages/cs/archived/sunrise.mdx +++ b/website/src/pages/cs/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## Jaký byl úsvit decentralizovaných dat? -Úsvit decentralizovaných dat byla iniciativa, kterou vedla společnost Edge & Node. Tato iniciativa umožnila vývojářům podgrafů bezproblémově přejít na decentralizovanou síť Graf. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### Co se stalo s hostovanou službou? -Koncové body dotazů hostované služby již nejsou k dispozici a vývojáři nemohou v hostované službě nasadit nové podgrafy. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -Během procesu aktualizace mohli vlastníci podgrafů hostovaných služeb aktualizovat své podgrafy na síť Graf. Vývojáři navíc mohli nárokovat automatickou aktualizaci podgrafů. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Měla tato aktualizace vliv na Podgraf Studio? Ne, na Podgraf Studio neměl Sunrise vliv. Podgrafy byly okamžitě k dispozici pro dotazování, a to díky aktualizačnímu indexeru, který využívá stejnou infrastrukturu jako hostovaná služba. -### Proč byly podgrafy zveřejněny na Arbitrum, začalo indexovat jinou síť? 
+### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## O Upgrade Indexer > Aktualizace Indexer je v současné době aktivní. -Upgrade Indexer byl implementován za účelem zlepšení zkušeností s upgradem podgrafů z hostované služby do sit' Graf a podpory nových verzí stávajících podgrafů, které dosud nebyly indexovány. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### Co dělá upgrade Indexer? -- Zavádí řetězce, které ještě nezískaly odměnu za indexaci v síti Graf, a zajišťuje, aby byl po zveřejnění podgrafu co nejrychleji k dispozici indexátor pro obsluhu dotazů. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). 
-- Indexátoři, kteří provozují upgrade indexátoru, tak činí jako veřejnou službu pro podporu nových podgrafů a dalších řetězců, kterým chybí indexační odměny, než je Rada grafů schválí. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Proč Edge & Node spouští aktualizaci Indexer? -Edge & Node historicky udržovaly hostovanou službu, a proto již mají synchronizovaná data pro podgrafy hostované služby. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### Co znamená upgrade indexeru pro stávající indexery? Řetězce, které byly dříve podporovány pouze v hostované službě, byly vývojářům zpřístupněny v síti Graf nejprve bez odměn za indexování. -Tato akce však uvolnila poplatky za dotazy pro všechny zájemce o indexování a zvýšila počet podgrafů zveřejněných v síti Graf. V důsledku toho mají indexátoři více příležitostí indexovat a obsluhovat tyto podgrafy výměnou za poplatky za dotazy, a to ještě předtím, než jsou odměny za indexování pro řetězec povoleny. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -Upgrade Indexer také poskytuje komunitě Indexer informace o potenciální poptávce po podgrafech a nových řetězcích v síti grafů. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### Co to znamená pro delegáti? -The upgrade Indexer offers a powerful opportunity for Delegators. 
As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ Aktualizace Indexeru umožňuje podporu blockchainů v síti, které byly dřív The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
diff --git a/website/src/pages/cs/global.json b/website/src/pages/cs/global.json index c431472eb4f5..59211940d133 100644 --- a/website/src/pages/cs/global.json +++ b/website/src/pages/cs/global.json @@ -6,6 +6,7 @@ "subgraphs": "Podgrafy", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Popis", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Popis", + "liveResponse": "Live Response", + "example": "Příklad" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/cs/index.json b/website/src/pages/cs/index.json index dd7566b56c2e..545b2b717b56 100644 --- a/website/src/pages/cs/index.json +++ b/website/src/pages/cs/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Podgrafy", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -39,12 +39,12 @@ "title": "Podporované sítě", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "Typ", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Dokumenty", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -67,7 +67,7 @@ "tableHeaders": { "name": "Name", "id": "ID", - "subgraphs": "Subgraphs", + "subgraphs": "Podgrafy", "substreams": "Substreams", "firehose": "Firehose", "tokenapi": "Token API" @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "Fakturace", "description": "Optimize costs and manage billing efficiently." } }, @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? 
Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/cs/indexing/chain-integration-overview.mdx b/website/src/pages/cs/indexing/chain-integration-overview.mdx index e048421d7ad9..a2f1eed58864 100644 --- a/website/src/pages/cs/indexing/chain-integration-overview.mdx +++ b/website/src/pages/cs/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ Tento proces souvisí se službou Datová služba podgrafů a vztahuje se pouze ### 2. Co se stane, když podpora Firehose & Substreams přijde až poté, co bude síť podporována v mainnet? -To by mělo vliv pouze na podporu protokolu pro indexování odměn na podgrafech s podsílou. Novou implementaci Firehose by bylo třeba testovat v testnetu podle metodiky popsané pro fázi 2 v tomto GIP. 
Podobně, za předpokladu, že implementace bude výkonná a spolehlivá, by bylo nutné provést PR na [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) (`Substreams data sources` Subgraph Feature) a také nový GIP pro podporu protokolu pro indexování odměn. PR a GIP může vytvořit kdokoli; nadace by pomohla se schválením Radou. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/cs/indexing/new-chain-integration.mdx b/website/src/pages/cs/indexing/new-chain-integration.mdx index 5eb78fc9efbd..0d856bfa9374 100644 --- a/website/src/pages/cs/indexing/new-chain-integration.mdx +++ b/website/src/pages/cs/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. 
If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. 
-> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Config uzlu grafu -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. 
These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/cs/indexing/overview.mdx b/website/src/pages/cs/indexing/overview.mdx index 52eda54899f1..8acf4fdf72a9 100644 --- a/website/src/pages/cs/indexing/overview.mdx +++ b/website/src/pages/cs/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexery jsou operátoři uzlů v síti Graf, kteří sázejí graf tokeny (GRT) GRT, který je v protokolu založen, podléhá období rozmrazování a může být zkrácen, pokud jsou indexátory škodlivé a poskytují aplikacím nesprávná data nebo pokud indexují nesprávně. Indexátoři také získávají odměny za delegované sázky od delegátů, aby přispěli do sítě. -Indexátory vybírají podgrafy k indexování na základě signálu kurátorů podgrafů, přičemž kurátoři sázejí na GRT, aby určili, které podgrafy jsou vysoce kvalitní a měly by být upřednostněny. Spotřebitelé (např. aplikace) mohou také nastavit parametry, podle kterých indexátoři zpracovávají dotazy pro jejich podgrafy, a nastavit preference pro stanovení ceny poplatků za dotazy. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. 
A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. 
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. 
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Uzel Graf -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, and shares important decision-making information with clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to `always`, then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or that have some chance of failing non-deterministically. diff --git a/website/src/pages/cs/indexing/supported-network-requirements.mdx b/website/src/pages/cs/indexing/supported-network-requirements.mdx index a81118cec231..b241acc94b41 100644 --- a/website/src/pages/cs/indexing/supported-network-requirements.mdx +++ b/website/src/pages/cs/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/cs/indexing/tap.mdx b/website/src/pages/cs/indexing/tap.mdx index f8d028634016..6063720aca9d 100644 --- a/website/src/pages/cs/indexing/tap.mdx +++ b/website/src/pages/cs/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Přehled -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Požadavky +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query it, or host it yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/cs/indexing/tooling/graph-node.mdx b/website/src/pages/cs/indexing/tooling/graph-node.mdx index 88ddb88813fb..9257902fe247 100644 --- a/website/src/pages/cs/indexing/tooling/graph-node.mdx +++ b/website/src/pages/cs/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Uzel Graf --- -Graf Uzel je komponenta, která indexuje podgrafy a zpřístupňuje výsledná data k dotazování prostřednictvím rozhraní GraphQL API. Jako taková je ústředním prvkem zásobníku indexeru a její správná činnost je pro úspěšný provoz indexeru klíčová. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Uzel Graf -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### Databáze PostgreSQL -Hlavní úložiště pro uzel Graf Uzel, kde jsou uložena data podgrafů, metadata o podgraf a síťová data týkající se podgrafů, jako je bloková cache a cache eth_call. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Síťoví klienti Aby mohl uzel Graph Node indexovat síť, potřebuje přístup k síťovému klientovi prostřednictvím rozhraní API JSON-RPC kompatibilního s EVM. Toto RPC se může připojit k jedinému klientovi nebo může jít o složitější nastavení, které vyrovnává zátěž mezi více klienty. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
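To make the archive-node requirement above concrete, here is a sketch of the kind of JSON-RPC request a Subgraph with `eth_calls` generates: an `eth_call` pinned to a historical block via an EIP-1898 block parameter, which a plain full node cannot answer for old state. The target address, calldata, block hash, and endpoint below are placeholders, not real values.

```shell
# Illustrative EIP-1898 eth_call payload; address, calldata and block hash
# are placeholders. 0x06fdde03 is the selector for name().
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"eth_call","params":[{"to":"0x1111111111111111111111111111111111111111","data":"0x06fdde03"},{"blockHash":"0x2222222222222222222222222222222222222222222222222222222222222222","requireCanonical":true}]}'
echo "$PAYLOAD"
# To send it against your own node (endpoint is a placeholder):
# curl -s -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" https://your-archive-node.example
```

Because the call is addressed by `blockHash` rather than `latest`, only a node retaining historical state can serve it, which is why `eth_calls` during indexing imply an archive node.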
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS uzly -Metadata nasazení podgrafů jsou uložena v síti IPFS. Uzel Graf přistupuje během nasazení podgrafu především k uzlu IPFS, aby načetl manifest podgrafu a všechny propojené soubory. Síťové indexery nemusí hostit vlastní uzel IPFS. Uzel IPFS pro síť je hostován na adrese https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Metrický server Prometheus @@ -79,8 +79,8 @@ Když je Graf Uzel spuštěn, zpřístupňuje následující ports: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ Když je Graf Uzel spuštěn, zpřístupňuje následující ports: ## Pokročilá konfigurace uzlu Graf -V nejjednodušším případě lze Graf Uzel provozovat s jednou instancí Graf Uzel, jednou databází PostgreSQL, uzlem IPFS a síťovými klienty podle potřeby indexovaných podgrafů. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Více uzlů graf -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Všimněte si, že více graf uzlů lze nakonfigurovat tak, aby používaly stejnou databázi, kterou lze horizontálně škálovat pomocí sharding. #### Pravidla nasazení -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Příklad konfigurace pravidla nasazení: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Každý uzel, jehož --node-id odpovídá regulárnímu výrazu, bude nastaven t Pro většinu případů použití postačuje k podpoře instance graf uzlu jedna databáze Postgres. 
Pokud instance graf uzlu přeroste rámec jedné databáze Postgres, je možné rozdělit ukládání dat grafového uzlu do více databází Postgres. Všechny databáze dohromady tvoří úložiště instance graf uzlu. Každá jednotlivá databáze se nazývá shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and replicas can also be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding se stává užitečným, když vaše stávající databáze nedokáže udržet krok se zátěží, kterou na ni Graf Uzel vyvíjí, a když už není možné zvětšit velikost databáze. -> Obecně je lepší vytvořit jednu co největší databázi, než začít s oddíly. Jednou z výjimek jsou případy, kdy je provoz dotazů rozdělen velmi nerovnoměrně mezi dílčí podgrafy; v těchto situacích může výrazně pomoci, pokud jsou dílčí podgrafy s velkým objemem uchovávány v jednom shardu a vše ostatní v jiném, protože toto nastavení zvyšuje pravděpodobnost, že data pro dílčí podgrafu s velkým objemem zůstanou v interní cache db a nebudou nahrazena daty, která nejsou tolik potřebná z dílčích podgrafů s malým objemem. +> It is generally better to make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. Pokud jde o konfiguraci připojení, začněte s max_connections v souboru postgresql.conf nastaveným na 400 (nebo možná dokonce 200) a podívejte se na metriky store_connection_wait_time_ms a store_connection_checkout_count Prometheus. Výrazné čekací doby (cokoli nad 5 ms) jsou známkou toho, že je k dispozici příliš málo připojení; vysoké čekací doby tam budou také způsobeny tím, že databáze je velmi vytížená (například vysoké zatížení procesoru). Pokud se však databáze jinak jeví jako stabilní, vysoké čekací doby naznačují potřebu zvýšit počet připojení. V konfiguraci je horní hranicí, kolik připojení může každá instance graf uzlu používat, a graf uzel nebude udržovat otevřená připojení, pokud je nepotřebuje. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Podpora více sítí -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Více sítí - Více poskytovatelů na síť (to může umožnit rozdělení zátěže mezi poskytovatele a také konfiguraci plných uzlů i archivních uzlů, přičemž Graph Node může preferovat levnější poskytovatele, pokud to daná pracovní zátěž umožňuje). 
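As a sketch under assumed names, the sharding and multi-network pieces described here come together in `config.toml` roughly like this. Connection strings, URLs, shard names, and provider labels are placeholders, and pool sizes are illustrative rather than recommendations; consult the Graph Node configuration docs for the authoritative schema.

```toml
# Sketch only: all names, connection strings and URLs are placeholders.
[store]
[store.primary]
connection = "postgresql://graph:password@primary-db/graph"
pool_size = 10

# A second shard, e.g. for high-volume Subgraphs.
[store.vip]
connection = "postgresql://graph:password@vip-db/graph"
pool_size = 10

[chains]
ingestor = "block_ingestor_node"

# Multiple providers per network let Graph Node balance load and fall back.
[chains.mainnet]
shard = "vip"
provider = [
  { label = "mainnet-archive", url = "http://archive-node:8545", features = ["archive", "traces"] },
  { label = "mainnet-full", url = "http://full-node:8545", features = [] },
]
```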
@@ -225,11 +225,11 @@ Uživatelé, kteří provozují škálované nastavení indexování s pokročil ### Správa uzlu graf -Vzhledem k běžícímu uzlu Graf (nebo uzlům Graf Uzel!) je pak úkolem spravovat rozmístěné podgrafy v těchto uzlech. Graf Uzel nabízí řadu nástrojů, které pomáhají se správou podgrafů. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Protokolování -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Práce s podgrafy +### Working with Subgraphs #### Stav indexování API -API pro stav indexování, které je ve výchozím nastavení dostupné na portu 8030/graphql, nabízí řadu metod pro kontrolu stavu indexování pro různé podgrafy, kontrolu důkazů indexování, kontrolu vlastností podgrafů a další. 
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ Proces indexování má tři samostatné části: - Zpracování událostí v pořadí pomocí příslušných obslužných (to může zahrnovat volání řetězce pro zjištění stavu a načtení dat z úložiště) - Zápis výsledných dat do úložiště -Tyto fáze jsou spojeny do potrubí (tj. mohou být prováděny paralelně), ale jsou na sobě závislé. Pokud se podgrafy indexují pomalu, bude příčina záviset na konkrétním podgrafu. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Běžné příčiny pomalého indexování: @@ -276,24 +276,24 @@ Běžné příčiny pomalého indexování: - Samotný poskytovatel se dostává za hlavu řetězu - Pomalé načítání nových účtenek od poskytovatele v hlavě řetězce -Metriky indexování podgrafů mohou pomoci diagnostikovat hlavní příčinu pomalého indexování. V některých případech spočívá problém v samotném podgrafu, ale v jiných případech mohou zlepšení síťových poskytovatelů, snížení konfliktů v databázi a další zlepšení konfigurace výrazně zlepšit výkon indexování. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
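As a sketch, a typical status query against that endpoint might look like the following. The field names are taken from the published index-node schema, but verify them against your graph-node version; the endpoint URL is a placeholder, and the query is only echoed here rather than sent.

```shell
# Fields follow the index-node schema (indexingStatuses, health, chains...).
QUERY='{ indexingStatuses { subgraph synced health chains { network latestBlock { number } chainHeadBlock { number } } } }'
echo "$QUERY"
# Send with (endpoint is a placeholder for your graph-node index node):
# curl -s -X POST -H 'Content-Type: application/json' \
#   -d "{\"query\": \"$QUERY\"}" http://localhost:8030/graphql
```

Comparing `latestBlock` against `chainHeadBlock` per chain is a quick way to see how far behind a deployment is.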
-#### Neúspěšné podgrafy +#### Failed Subgraphs -Během indexování mohou dílčí graf selhat, pokud narazí na neočekávaná data, pokud některá komponenta nefunguje podle očekávání nebo pokud je chyba ve zpracovatelích událostí nebo v konfiguraci. Existují dva obecné typy selhání: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Deterministická selhání: jedná se o selhání, která nebudou vyřešena opakovanými pokusy - Nedeterministická selhání: mohou být způsobena problémy se zprostředkovatelem nebo neočekávanou chybou grafického uzlu. Pokud dojde k nedeterministickému selhání, uzel Graf zopakuje selhání obsluhy a postupně se vrátí zpět. -V některých případech může být chyba řešitelná indexátorem (například pokud je chyba důsledkem toho, že není k dispozici správný typ zprostředkovatele, přidání požadovaného zprostředkovatele umožní pokračovat v indexování). V jiných případech je však nutná změna v kódu podgrafu. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. 
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Bloková a volací mezipaměť -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. Pokud existuje podezření na nekonzistenci blokové mezipaměti, například chybějící událost tx receipt: @@ -304,7 +304,7 @@ Pokud existuje podezření na nekonzistenci blokové mezipaměti, například ch #### Problémy a chyby při dotazování -Jakmile je podgraf indexován, lze očekávat, že indexery budou obsluhovat dotazy prostřednictvím koncového bodu vyhrazeného pro dotazy podgrafu. 
Pokud indexátor doufá, že bude obsluhovat značný objem dotazů, doporučuje se použít vyhrazený uzel pro dotazy a v případě velmi vysokého objemu dotazů mohou indexátory chtít nakonfigurovat oddíly replik tak, aby dotazy neovlivňovaly proces indexování. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. I s vyhrazeným dotazovacím uzlem a replikami však může provádění některých dotazů trvat dlouho a v některých případech může zvýšit využití paměti a negativně ovlivnit dobu dotazování ostatních uživatelů. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analýza dotazů -Problematické dotazy se nejčastěji objevují jedním ze dvou způsobů. V některých případech uživatelé sami hlásí, že daný dotaz je pomalý. V takovém případě je úkolem diagnostikovat příčinu pomalosti - zda se jedná o obecný problém, nebo o specifický problém daného podgrafu či dotazu. A pak ho samozřejmě vyřešit, pokud je to možné. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. V jiných případech může být spouštěcím faktorem vysoké využití paměti v uzlu dotazu a v takovém případě je třeba nejprve identifikovat dotaz, který problém způsobuje. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Odstranění podgrafů +#### Removing Subgraphs > Jedná se o novou funkci, která bude k dispozici v uzlu Graf 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
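As a sketch, the removal flow might look like the following. The database namespace and config path are hypothetical placeholders, and the command is only echoed here for illustration rather than executed; run it inside the graph-node container against your real configuration.

```shell
# Sketch only: sgd42 is a hypothetical namespace; a Subgraph name or an
# IPFS hash (Qm..) can be used instead.
DEPLOYMENT="sgd42"
# Echoed, not executed -- graphman needs a running store to act on:
echo "graphman --config /etc/graph-node/config.toml drop $DEPLOYMENT"
```

Since `drop` deletes the deployment and its indexed data irreversibly, double-check the identifier (e.g. via the indexing status API) before running it for real.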
diff --git a/website/src/pages/cs/indexing/tooling/graphcast.mdx b/website/src/pages/cs/indexing/tooling/graphcast.mdx index aec7d84070c3..5aa86adcc8da 100644 --- a/website/src/pages/cs/indexing/tooling/graphcast.mdx +++ b/website/src/pages/cs/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ V současné době jsou náklady na vysílání informací ostatním účastník Graphcast SDK (Vývoj softwaru Kit) umožňuje vývojářům vytvářet rádia, což jsou aplikace napájené drby, které mohou indexery spouštět k danému účelu. Máme také v úmyslu vytvořit několik Radios (nebo poskytnout podporu jiným vývojářům/týmům, které chtějí Radios vytvořit) pro následující případy použití: -- Křížová kontrola integrity dat subgrafu v reálném čase ([Podgraf Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Provádění aukcí a koordinace pro warp synchronizaci podgrafů, substreamů a dat Firehose z jiných Indexerů. -- Vlastní hlášení o analýze aktivních dotazů, včetně objemů požadavků na dílčí grafy, objemů poplatků atd. -- Vlastní hlášení o analýze indexování, včetně času indexování podgrafů, nákladů na plyn obsluhy, zjištěných chyb indexování atd. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Vlastní hlášení informací o zásobníku včetně verze grafového uzlu, verze Postgres, verze klienta Ethereum atd. 
### Dozvědět se více diff --git a/website/src/pages/cs/resources/benefits.mdx b/website/src/pages/cs/resources/benefits.mdx index e18158242265..d0b336ece33a 100644 --- a/website/src/pages/cs/resources/benefits.mdx +++ b/website/src/pages/cs/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Kurátorování signálu na podgrafu je volitelný jednorázový čistý nulový náklad (např. na podgrafu lze kurátorovat signál v hodnotě $1k a později jej stáhnout - s potenciálem získat v tomto procesu výnosy). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/cs/resources/glossary.mdx b/website/src/pages/cs/resources/glossary.mdx index 70161f581585..49fd1f60c539 100644 --- a/website/src/pages/cs/resources/glossary.mdx +++ b/website/src/pages/cs/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glosář - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: Glosář - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx index 756873dd8fbb..8af6d2817679 100644 --- a/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: Průvodce migrací AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -To umožní vývojářům podgrafů používat novější funkce jazyka AS a standardní knihovny. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Funkce @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## Jak provést upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -Pokud si nejste jisti, kterou verzi zvolit, doporučujeme vždy použít bezpečnou verzi. Pokud hodnota neexistuje, možná budete chtít provést pouze časný příkaz if s návratem v obsluze podgrafu. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Proměnlivé stínování @@ -132,7 +132,7 @@ Pokud jste použili stínování proměnných, musíte duplicitní proměnné p ### Nulová srovnání -Při aktualizaci podgrafu může někdy dojít k těmto chybám: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -Otevřeli jsme kvůli tomu problém v kompilátoru jazyka AssemblyScript, ale zatím platí, že pokud provádíte tyto operace v mapování podgrafů, měli byste je změnit tak, aby se před nimi provedla kontrola null. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check beforehand.
```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Zkompiluje se, ale za běhu se přeruší, což se stane, protože hodnota nebyla inicializována, takže se ujistěte, že váš podgraf inicializoval své hodnoty, například takto: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx index 7f273724aff4..4051faab8eef 100644 --- a/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Průvodce migrací na GraphQL Validace +title: GraphQL Validations Migration Guide --- Brzy bude `graph-node` podporovat 100% pokrytí [GraphQL Validations specifikace](https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ Chcete-li být v souladu s těmito validacemi, postupujte podle průvodce migrac Pomocí migračního nástroje CLI můžete najít případné problémy v operacích GraphQL a opravit je. Případně můžete aktualizovat koncový bod svého klienta GraphQL tak, aby používal koncový bod `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Testování dotazů proti tomuto koncovému bodu vám pomůže najít problémy ve vašich dotazech. -> Není nutné migrovat všechny podgrafy, pokud používáte [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) nebo [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), ty již zajistí, že vaše dotazy jsou platné.
+> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Migrační nástroj CLI diff --git a/website/src/pages/cs/resources/roles/curating.mdx b/website/src/pages/cs/resources/roles/curating.mdx index c8b9caf18e2e..f06866a7c0ee 100644 --- a/website/src/pages/cs/resources/roles/curating.mdx +++ b/website/src/pages/cs/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Kurátorování --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index.
When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). 
Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Jak signalizovat -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Kurátor si může zvolit, zda bude signalizovat na konkrétní verzi podgrafu, nebo zda se jeho signál automaticky přenese na nejnovější produkční sestavení daného podgrafu. Obě strategie jsou platné a mají své výhody i nevýhody. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. 
+Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Automatická migrace signálu na nejnovější produkční sestavení může být cenná, protože zajistí, že se poplatky za dotazy budou neustále zvyšovat. Při každém kurátorství se platí 1% kurátorský poplatek. Při každé migraci také zaplatíte 0,5% kurátorskou daň. Vývojáři podgrafu jsou odrazováni od častého publikování nových verzí - musí zaplatit 0.5% kurátorskou daň ze všech automaticky migrovaných kurátorských podílů. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Rizika 1. Trh s dotazy je v Graf ze své podstaty mladý a existuje riziko, že vaše %APY může být nižší, než očekáváte, v důsledku dynamiky rodícího se trhu. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Podgraf může selhat kvůli chybě. Za neúspěšný podgraf se neúčtují poplatky za dotaz. V důsledku toho budete muset počkat, až vývojář chybu opraví a nasadí novou verzi. - - Pokud jste přihlášeni k odběru nejnovější verze podgrafu, vaše sdílené položky se automaticky přemigrují na tuto novou verzi. Při tom bude účtována 0,5% kurátorská daň. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. 
Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Nejčastější dotazy ke kurátorství ### 1. Kolik % z poplatků za dotazy kurátoři vydělávají? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Jak se rozhodnu, které podgrafy jsou kvalitní a na kterých je třeba signalizovat? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. 
A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Jaké jsou náklady na aktualizaci podgrafu? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. 
+Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. Jak často mohu svůj podgraf aktualizovat? +### 4. How often can I update my Subgraph? -Doporučujeme, abyste podgrafy neaktualizovali příliš často. Další podrobnosti naleznete v otázce výše. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Mohu prodat své kurátorské podíly? diff --git a/website/src/pages/cs/resources/subgraph-studio-faq.mdx b/website/src/pages/cs/resources/subgraph-studio-faq.mdx index a67af0f6505e..1f036fb46484 100644 --- a/website/src/pages/cs/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/cs/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: FAQs Podgraf Studio ## 1. Co je Podgraf Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Jak vytvořím klíč API? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th Po vytvoření klíče API můžete v části Zabezpečení definovat domény, které se mohou dotazovat na konkrétní klíč API. -## 5. Mohu svůj podgraf převést na jiného vlastníka? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig.
You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Všimněte si, že po přenesení podgrafu jej již nebudete moci ve Studio zobrazit ani upravovat. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Jak najdu adresy URL dotazů pro podgrafy, pokud nejsem Vývojář podgrafu, který chci použít? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Nezapomeňte, že si můžete vytvořit klíč API a dotazovat se na libovolný podgraf zveřejněný v síti, i když si podgraf vytvoříte sami. Tyto dotazy prostřednictvím nového klíče API jsou placené dotazy jako jakékoli jiné v síti. +Remember that you can create an API key and query any Subgraph published to the network, even if you build the Subgraph yourself. These queries via the new API key are paid queries, just like any other on the network. diff --git a/website/src/pages/cs/resources/tokenomics.mdx b/website/src/pages/cs/resources/tokenomics.mdx index 92b1514574b4..66eefd5b8b1a 100644 --- a/website/src/pages/cs/resources/tokenomics.mdx +++ b/website/src/pages/cs/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics.
Here’s ## Přehled -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Kurátoři - nalezení nejlepších podgrafů pro indexátory +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexery - páteř blockchainových dat @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. 
Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Vytvoření podgrafu +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Dotazování na existující podgraf +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. 
They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. 
That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. 
Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/cs/sps/introduction.mdx b/website/src/pages/cs/sps/introduction.mdx index f0180d6a569b..4938d23102e4 100644 --- a/website/src/pages/cs/sps/introduction.mdx +++ b/website/src/pages/cs/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Úvod --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Přehled -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
### Další zdroje @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/cs/sps/sps-faq.mdx b/website/src/pages/cs/sps/sps-faq.mdx index 657b027cf5e9..25e77dc3c7f1 100644 --- a/website/src/pages/cs/sps/sps-faq.mdx +++ b/website/src/pages/cs/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## Co jsou substreamu napájen podgrafy? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## Jak se liší substream, které jsou napájeny podgrafy, od podgrafů? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## Jaké jsou výhody používání substreamu, které jsou založeny na podgraf? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph.
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## Jaké jsou výhody Substreams? @@ -35,7 +35,7 @@ Používání ubstreams má mnoho výhod, mimo jiné: - Vysoce výkonné indexování: Řádově rychlejší indexování prostřednictvím rozsáhlých klastrů paralelních operací (viz BigQuery). -- Umyvadlo kdekoli: Data můžete ukládat kamkoli chcete: Vložte data do PostgreSQL, MongoDB, Kafka, podgrafy, ploché soubory, tabulky Google. +- Sink anywhere: Sink your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programovatelné: Pomocí kódu můžete přizpůsobit extrakci, provádět agregace v čase transformace a modelovat výstup pro více zdrojů. @@ -63,17 +63,17 @@ Používání Firehose přináší mnoho výhod, včetně: - Využívá ploché soubory: Blockchain data jsou extrahována do plochých souborů, což je nejlevnější a nejoptimálnější dostupný výpočetní zdroj. -## Kde mohou vývojáři získat více informací o substreamu, které jsou založeny na podgraf a substreamu? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## Jaká je role modulů Rust v Substreamu? -Moduly Rust jsou ekvivalentem mapovačů AssemblyScript v podgraf. Jsou kompilovány do WASM podobným způsobem, ale programovací model umožňuje paralelní provádění. Definují druh transformací a agregací, které chcete aplikovat na surová data blockchainu. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst Při použití substreamů probíhá kompozice na transformační vrstvě, což umožňuje opakované použití modulů uložených v mezipaměti. -Jako příklad může Alice vytvořit cenový modul DEX, Bob jej může použít k vytvoření agregátoru objemu pro některé tokeny, které ho zajímají, a Lisa může zkombinovat čtyři jednotlivé cenové moduly DEX a vytvořit cenové orákulum. Jediný požadavek Substreams zabalí všechny moduly těchto jednotlivců, propojí je dohromady a nabídne mnohem sofistikovanější tok dat. Tento proud pak může být použit k naplnění podgrafu a může být dotazován spotřebiteli. 
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules, linking them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers. ## Jak můžete vytvořit a nasadit Substreams využívající podgraf? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Kde najdu příklady podgrafů Substreams a Substreams-powered? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -Příklady podgrafů Substreams a Substreams-powered najdete na [tomto repozitáři Github](https://github.com/pinax-network/awesome-substreams). +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## Co znamenají substreams a podgrafy napájené substreams pro síť grafů? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? Integrace slibuje mnoho výhod, včetně extrémně výkonného indexování a větší složitelnosti díky využití komunitních modulů a stavění na nich. diff --git a/website/src/pages/cs/sps/triggers.mdx b/website/src/pages/cs/sps/triggers.mdx index 06a8845e4daf..b0c4bea23f3d 100644 --- a/website/src/pages/cs/sps/triggers.mdx +++ b/website/src/pages/cs/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Přehled -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). 
### Další zdroje diff --git a/website/src/pages/cs/sps/tutorial.mdx b/website/src/pages/cs/sps/tutorial.mdx index 3f98c57508bd..67d564483af1 100644 --- a/website/src/pages/cs/sps/tutorial.mdx +++ b/website/src/pages/cs/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Začněte @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Závěr -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial diff --git a/website/src/pages/cs/subgraphs/_meta-titles.json b/website/src/pages/cs/subgraphs/_meta-titles.json index 3fd405eed29a..c2d850dfc35c 100644 --- a/website/src/pages/cs/subgraphs/_meta-titles.json +++ b/website/src/pages/cs/subgraphs/_meta-titles.json @@ -2,5 +2,5 @@ "querying": "Querying", "developing": "Developing", "guides": "How-to Guides", - "best-practices": "Best Practices" + "best-practices": "Osvědčené postupy" } diff --git a/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx index 3ce9c29a17a0..2783957614bf 100644 --- a/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Doporučený postup pro podgraf 4 - Zlepšení rychlosti indexování vyhnutím se eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` jsou volání, která lze provést z podgrafu do uzlu Ethereum. Tato volání zabírají značnou dobu, než vrátí data, což zpomaluje indexování. Pokud je to možné, navrhněte chytré kontrakty tak, aby emitovaly všechna potřebná data, takže nebudete muset používat `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Proč je dobré se vyhnout `eth_calls` -Podgraf jsou optimalizovány pro indexování dat událostí emitovaných z chytré smlouvy. Podgraf může také indexovat data pocházející z `eth_call`, což však může indexování podgrafu výrazně zpomalit, protože `eth_calls` vyžadují externí volání chytrých smluv. Odezva těchto volání nezávisí na podgrafu, ale na konektivitě a odezvě dotazovaného uzlu Ethereum. 
Minimalizací nebo eliminací eth_calls v našich podgrafech můžeme výrazně zvýšit rychlost indexování. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing, as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### Jak vypadá eth_call? -`eth_calls` jsou často nutné, pokud data potřebná pro podgraf nejsou dostupná prostřednictvím emitovaných událostí. Uvažujme například scénář, kdy podgraf potřebuje zjistit, zda jsou tokeny ERC20 součástí určitého poolu, ale smlouva emituje pouze základní událost `Transfer` a neemituje událost, která by obsahovala data, která potřebujeme: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -To je funkční, ale není to ideální, protože to zpomaluje indexování našeho podgrafu. +This is functional; however, it is not ideal, as it slows down our Subgraph’s indexing.
## Jak odstranit `eth_calls` @@ -54,7 +54,7 @@ V ideálním případě by měl být inteligentní kontrakt aktualizován tak, a event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -Díky této aktualizaci může podgraf přímo indexovat požadovaná data bez externích volání: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ calls: Samotná obslužná rutina přistupuje k výsledku tohoto `eth_call` přesně tak, jak je uvedeno v předchozí části, a to navázáním na smlouvu a provedením volání. graph-node cachuje výsledky deklarovaných `eth_call` v paměti a volání obslužné rutiny získá výsledek z této paměťové cache místo skutečného volání RPC. -Poznámka: Deklarované eth_calls lze provádět pouze v podgraf s verzí specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Závěr -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx index f6ec5a660bf2..fc9dce04c8c0 100644 --- a/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Podgraf Doporučený postup 2 - Zlepšení indexování a rychlosti dotazů pomocí @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Pole ve vašem schématu mohou skutečně zpomalit výkon podgrafu, pokud jejich počet přesáhne tisíce položek. 
Pokud je to možné, měla by se při použití polí používat direktiva `@derivedFrom`, která zabraňuje vzniku velkých polí, zjednodušuje obslužné programy a snižuje velikost jednotlivých entit, čímž výrazně zvyšuje rychlost indexování a výkon dotazů. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## Jak používat směrnici `@derivedFrom` @@ -15,7 +15,7 @@ Stačí ve schématu za pole přidat směrnici `@derivedFrom`. Takto: comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` vytváří efektivní vztahy typu one-to-many, které umožňují dynamické přiřazení entity k více souvisejícím entitám na základě pole v související entitě. Tento přístup odstraňuje nutnost ukládat duplicitní data na obou stranách vztahu, čímž se podgraf stává efektivnějším. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Příklad případu použití pro `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Pouhým přidáním direktivy `@derivedFrom` bude toto schéma ukládat "Komentáře“ pouze na straně "Komentáře“ vztahu a nikoli na straně "Příspěvek“ vztahu. Pole se ukládají napříč jednotlivými řádky, což umožňuje jejich výrazné rozšíření. To může vést k obzvláště velkým velikostem, pokud je jejich růst neomezený. -Tím se nejen zefektivní náš podgraf, ale také se odemknou tři funkce: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. Můžeme se zeptat na `Post` a zobrazit všechny jeho komentáře. 
2. Můžeme provést zpětné vyhledávání a dotazovat se na jakýkoli `Komentář` a zjistit, ze kterého příspěvku pochází. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Závěr -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). diff --git a/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx index 7a2dbdda86f6..541cf76d0f7a 100644 --- a/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. 
### Přehled -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. 
+ - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. 
- **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. **Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. 
- **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. - **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
+- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Závěr -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Další zdroje - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. 
+By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 5b058ee9d7cf..e4e191353476 100644 --- a/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Osvědčený postup 3 - Zlepšení indexování a výkonu dotazů pomocí neměnných entit a bytů jako ID -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ I když jsou možné i jiné typy ID, například String a Int8, doporučuje se ### Důvody, proč nepoužívat bajty jako IDs 1. Pokud musí být IDs entit čitelné pro člověka, například automaticky doplňované číselné IDs nebo čitelné řetězce, neměly by být použity bajty pro IDs. -2. Při integraci dat podgrafu s jiným datovým modelem, který nepoužívá bajty jako IDs, by se bajty jako IDs neměly používat. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Zlepšení výkonu indexování a dotazování není žádoucí. ### Konkatenace s byty jako IDs -V mnoha podgrafech se běžně používá spojování řetězců ke spojení dvou vlastností události do jediného ID, například pomocí `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Protože se však tímto způsobem vrací řetězec, značně to zhoršuje indexování podgrafů a výkonnost dotazování. 
+It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Místo toho bychom měli použít metodu `concatI32()` pro spojování vlastností událostí. Výsledkem této strategie je ID `Bytes`, které je mnohem výkonnější. @@ -172,7 +172,7 @@ Odpověď na dotaz: ## Závěr -Bylo prokázáno, že použití neměnných entit i bytů jako ID výrazně zvyšuje efektivitu podgrafů. Testy konkrétně ukázaly až 28% nárůst výkonu dotazů a až 48% zrychlení indexace. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Více informací o používání nezměnitelných entit a bytů jako ID najdete v tomto příspěvku na blogu Davida Lutterkorta, softwarového inženýra ve společnosti Edge & Node: [Dvě jednoduchá vylepšení výkonu podgrafu](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). diff --git a/website/src/pages/cs/subgraphs/best-practices/pruning.mdx b/website/src/pages/cs/subgraphs/best-practices/pruning.mdx index e6b23f71c409..6fd068f449d6 100644 --- a/website/src/pages/cs/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Doporučený postup 1 - Zlepšení rychlosti dotazu pomocí ořezávání podgrafů -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) odstraní archivní entity z databáze podgrafu až do daného bloku a odstranění nepoužívaných entit z databáze podgrafu zlepší výkonnost dotazu podgrafu, často výrazně. 
Použití `indexerHints` je snadný způsob, jak podgraf ořezat. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## Jak prořezat podgraf pomocí `indexerHints` @@ -13,14 +13,14 @@ Přidejte do manifestu sekci `indexerHints`. `indexerHints` má tři možnosti `prune`: -- `prune: auto`: Udržuje minimální potřebnou historii nastavenou indexátorem, čímž optimalizuje výkon dotazu. Toto je obecně doporučené nastavení a je výchozí pro všechny podgrafy vytvořené pomocí `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Nastaví vlastní omezení počtu historických bloků, které se mají zachovat. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -Aktualizací souboru `subgraph.yaml` můžeme do podgrafů přidat `indexerHints`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Závěr -Ořezávání pomocí `indexerHints` je osvědčeným postupem pro vývoj podgrafů, který nabízí významné zlepšení výkonu dotazů. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. 
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx b/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx index f35ab0913563..dae73ede9ff3 100644 --- a/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Přehled @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `specVersion` >= 1.1.0 for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: @@ -51,7 +55,7 @@ Příklad: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Příklad: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum.
### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Závěr -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/cs/subgraphs/billing.mdx b/website/src/pages/cs/subgraphs/billing.mdx index 4118bf1d451a..b78c375c4aee 100644 --- a/website/src/pages/cs/subgraphs/billing.mdx +++ b/website/src/pages/cs/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Fakturace ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. 
- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx index 4fbf2b573c14..e8db267667c0 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Přehled -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Nefatální -Chyby indexování v již synchronizovaných podgrafech ve výchozím nastavení způsobí selhání podgrafy a zastavení synchronizace. Podgrafy lze alternativně nakonfigurovat tak, aby pokračovaly v synchronizaci i při přítomnosti chyb, a to ignorováním změn provedených obslužnou rutinou, která chybu vyvolala. To dává autorům podgrafů čas na opravu jejich podgrafů, zatímco dotazy jsou nadále obsluhovány proti poslednímu bloku, ačkoli výsledky mohou být nekonzistentní kvůli chybě, která chybu způsobila. Všimněte si, že některé chyby jsou stále fatální. Aby chyba nebyla fatální, musí být známo, že je deterministická. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Povolení nefatálních chyb vyžaduje nastavení následujícího příznaku funkce v manifestu podgraf: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -Zdroje dat souborů jsou novou funkcí podgrafu pro přístup k datům mimo řetězec během indexování robustním a rozšiřitelným způsobem. Zdroje souborových dat podporují načítání souborů ze systému IPFS a z Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> To také vytváří základ pro deterministické indexování dat mimo řetězec a potenciální zavedení libovolných dat ze zdrojů HTTP. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Příklad: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ Tím se vytvoří nový zdroj dat souborů, který bude dotazovat nakonfigurovan This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Gratulujeme, používáte souborové zdroje dat! -#### Nasazení podgrafů +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Omezení -Zpracovatelé a entity zdrojů dat souborů jsou izolovány od ostatních entit podgrafů, což zajišťuje, že jsou při provádění deterministické a nedochází ke kontaminaci zdrojů dat založených na řetězci. 
Přesněji řečeno: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entity vytvořené souborovými zdroji dat jsou neměnné a nelze je aktualizovat - Obsluhy zdrojů dat souborů nemohou přistupovat k entita z jiných zdrojů dat souborů - K entita přidruženým k datovým zdrojům souborů nelze přistupovat pomocí zpracovatelů založených na řetězci -> Ačkoli by toto omezení nemělo být pro většinu případů použití problematické, pro některé může představovat složitost. Pokud máte problémy s modelováním dat založených na souborech v podgrafu, kontaktujte nás prosím prostřednictvím služby Discord! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Kromě toho není možné vytvářet zdroje dat ze zdroje dat souborů, ať už se jedná o zdroj dat v řetězci nebo jiný zdroj dat souborů. Toto omezení může být v budoucnu zrušeno. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses

@@ -452,17 +452,17 @@ In this configuration:

- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.
-The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.

## Declared eth_call

> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node.

-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.

This feature does the following:

-- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.
- Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
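The saving can be sketched with simple arithmetic: sequential fetching costs the sum of the individual call latencies, while parallel (declared) execution costs only the slowest single call. A quick illustration, using hypothetical latencies in seconds (this is not Graph tooling code, just the timing model):

```typescript
// Illustrative only: total latency of sequential vs parallel eth_calls.
const callLatencies = [3, 2, 4] // hypothetical call durations, in seconds

// Sequential execution: each call waits for the previous one to finish.
const sequentialSeconds = callLatencies.reduce((total, t) => total + t, 0)

// Declared (parallel) execution: total time is the slowest single call.
const parallelSeconds = Math.max(...callLatencies)

console.log(sequentialSeconds) // 9
console.log(parallelSeconds) // 4
```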
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
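Putting the pieces above together, a declared call attaches to an event handler in the manifest. The fragment below is a hypothetical sketch (the `Swap` event signature, `handleSwap` handler, and `global0X128` label are assumed names); the call expression itself, `Pool[event.address].feeGrowthGlobal0X128()`, is the form described above:

```yaml
eventHandlers:
  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
    handler: handleSwap
    calls:
      global0X128: Pool[event.address].feeGrowthGlobal0X128()
```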
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`

```yaml
calls:
@@ -535,22 +535,22 @@ calls:

> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).

-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed.

-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:

```yaml
description: ...
graft:
-  base: Qm... # Subgraph ID of base subgraph
+  base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Protože se při roubování základní data spíše kopírují než indexují, je mnohem rychlejší dostat podgraf do požadovaného bloku než při indexování od nuly, i když počáteční kopírování dat může u velmi velkých podgrafů trvat i několik hodin. Během inicializace roubovaného podgrafu bude uzel Graf Uzel zaznamenávat informace o typů entit, které již byly zkopírovány. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -Štěpovaný podgraf může používat schéma GraphQL, které není totožné se schématem základního podgrafu, ale je s ním pouze kompatibilní. 
Musí to být platné schéma podgrafu jako takové, ale může se od schématu základního podgrafu odchýlit následujícími způsoby: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Přidává nebo odebírá typy entit - Odstraňuje atributy z typů entit @@ -560,4 +560,4 @@ Protože se při roubování základní data spíše kopírují než indexují, - Přidává nebo odebírá rozhraní - Mění se, pro které typy entit je rozhraní implementováno -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx index fad0d6ebaa1a..00fb7cbcf275 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ Pokud není pro pole v nové entitě se stejným ID nastavena žádná hodnota, ## Generování kódu -Aby byla práce s inteligentními smlouvami, událostmi a entitami snadná a typově bezpečná, může Graf CLI generovat typy AssemblyScript ze schématu GraphQL podgrafu a ABI smluv obsažených ve zdrojích dat. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. To se provádí pomocí @@ -80,7 +80,7 @@ To se provádí pomocí graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx index 3c3dbdc7671f..87734452737d 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ Knihovna `@graphprotocol/graph-ts` poskytuje následující API: ### Verze -`apiVersion` v manifestu podgrafu určuje verzi mapovacího API, kterou pro daný podgraf používá uzel Graf. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Verze | Poznámky vydání | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' API `store` umožňuje načítat, ukládat a odebírat entity z a do úložiště Graf uzel. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Vytváření entity @@ -282,8 +282,8 @@ Od verzí `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 a `@graphproto The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ Ethereum API poskytuje přístup k inteligentním smlouvám, veřejným stavový #### Podpora typů Ethereum -Stejně jako u entit generuje `graph codegen` třídy pro všechny inteligentní smlouvy a události používané v podgrafu. Za tímto účelem musí být ABI kontraktu součástí zdroje dat v manifestu podgrafu. Obvykle jsou soubory ABI uloženy ve složce `abis/`. 
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -Ve vygenerovaných třídách probíhají konverze mezi typy Ethereum [built-in-types](#built-in-types) v pozadí, takže se o ně autoři podgraf nemusí starat. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -To ilustruje následující příklad. Je dáno schéma podgrafu, jako je +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Přístup ke stavu inteligentní smlouvy -Kód vygenerovaný nástrojem `graph codegen` obsahuje také třídy pro inteligentní smlouvy používané v podgrafu. Ty lze použít k přístupu k veřejným stavovým proměnným a k volání funkcí kontraktu v aktuálním bloku. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. Běžným vzorem je přístup ke smlouvě, ze které událost pochází. Toho lze dosáhnout pomocí následujícího kódu: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { Pokud má smlouva `ERC20Contract` na platformě Ethereum veřejnou funkci pouze pro čtení s názvem `symbol`, lze ji volat pomocí `.symbol()`. Pro veřejné stavové proměnné se automaticky vytvoří metoda se stejným názvem. -Jakákoli jiná smlouva, která je součástí podgrafu, může být importována z vygenerovaného kódu a může být svázána s platnou adresou. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. 
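For example, binding such an imported contract class to an explicit address might look like the following sketch (the `ERC20Contract` name and the address are illustrative, assuming the contract's ABI is declared in the manifest so that `graph codegen` has generated the class):

```typescript
// Illustrative AssemblyScript sketch: bind a generated contract class to an
// arbitrary address and call a read-only function at the current block.
import { Address } from '@graphprotocol/graph-ts'
import { ERC20Contract } from '../generated/ERC20Contract/ERC20Contract'

let token = ERC20Contract.bind(
  Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7'),
)
let symbol = token.symbol()
```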
#### Zpracování vrácených volání @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. `log` API obsahuje následující funkce: @@ -590,7 +590,7 @@ The `log` API allows subgraphs to log information to the Graph Node standard out - `log.info(fmt: string, args: Array): void` - zaznamená informační zprávu. - `log.warning(fmt: string, args: Array): void` - zaznamená varování. - `log.error(fmt: string, args: Array): void` - zaznamená chybovou zprávu. -- `log.critical(fmt: string, args: Array): void` - zaznamená kritickou zprávu _a_ ukončí podgraf. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. `log` API přebírá formátovací řetězec a pole řetězcových hodnot. Poté nahradí zástupné symboly řetězcovými hodnotami z pole. První zástupný symbol „{}“ bude nahrazen první hodnotou v poli, druhý zástupný symbol „{}“ bude nahrazen druhou hodnotou a tak dále. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) V současné době je podporován pouze příznak `json`, který musí být předán souboru `ipfs.map`. S příznakem `json` se soubor IPFS musí skládat z řady hodnot JSON, jedna hodnota na řádek. Volání příkazu `ipfs.map` přečte každý řádek souboru, deserializuje jej do hodnoty `JSONValue` a pro každou z nich zavolá zpětné volání. Zpětné volání pak může použít operace entit k uložení dat z `JSONValue`. 
Změny entit se uloží až po úspěšném ukončení obsluhy, která volala `ipfs.map`; do té doby se uchovávají v paměti, a velikost souboru, který může `ipfs.map` zpracovat, je proto omezená. -Při úspěchu vrátí `ipfs.map` hodnotu `void`. Pokud vyvolání zpětného volání způsobí chybu, obslužná rutina, která vyvolala `ipfs.map`, se přeruší a podgraf se označí jako neúspěšný. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ Základní třída `Entity` a podřízená třída `DataSourceContext` mají pom ### DataSourceContext v manifestu -Sekce `context` v rámci `dataSources` umožňuje definovat páry klíč-hodnota, které jsou přístupné v rámci mapování podgrafů. Dostupné typy jsou `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List` a `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Zde je příklad YAML ilustrující použití různých typů v sekci `context`: @@ -887,4 +887,4 @@ dataSources: - `Seznam`: Určuje seznam položek. U každé položky je třeba zadat její typ a data. - `BigInt`: Určuje velkou celočíselnou hodnotu. Kvůli velké velikosti musí být uvedena v uvozovkách. -Tento kontext je pak přístupný v souborech mapování podgrafů, což umožňuje vytvářet dynamičtější a konfigurovatelnější podgrafy. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx index 79ec3df1a827..419f698e68e4 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Běžné problémy se AssemblyScript --- -Při vývoji podgrafů se často vyskytují určité problémy [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Jejich obtížnost při ladění je různá, nicméně jejich znalost může pomoci. Následuje neúplný seznam těchto problémů: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Rozsah se nedědí do [uzavíracích funkcí](https://www.assemblyscript.org/status.html#on-closures), tj. proměnné deklarované mimo uzavírací funkce nelze použít. Vysvětlení v [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
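The closure limitation mentioned above can be sketched as follows (the closure line is shown commented out because it fails to compile in AssemblyScript; an explicit loop is the usual workaround):

```typescript
let total = 0
const values = [1, 2, 3]

// Fails in AssemblyScript: `total` is declared outside the closure and
// cannot be captured:
// values.forEach((v) => { total += v })

// Workaround: an explicit loop needs no closure capture.
for (let i = 0; i < values.length; i++) {
  total += values[i]
}
```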
diff --git a/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx index dbeac0c137a5..536b416c9465 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Instalace Graf CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Přehled -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Začínáme @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Vytvoření podgrafu ### Ze stávající smlouvy -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### Z příkladu podgrafu -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is Soubor(y) ABI se musí shodovat s vaší smlouvou. Soubory ABI lze získat několika způsoby: - Pokud vytváříte vlastní projekt, budete mít pravděpodobně přístup k nejaktuálnějším ABI. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Verze | Poznámky vydání | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx index c0a99bb516eb..ddc97aeed9e9 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Přehled -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -U vztahů typu "jeden k mnoha" by měl být vztah vždy uložen na straně "jeden" a strana "mnoho" by měla být vždy odvozena. Uložení vztahu tímto způsobem namísto uložení pole entit na straně "mnoho" povede k výrazně lepšímu výkonu jak při indexování, tak při dotazování na podgraf. Obecně platí, že ukládání polí entit je třeba se vyhnout, pokud je to praktické. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
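As a rough in-memory sketch of what derived reverse lookups buy you — plain TypeScript with hypothetical entity names, not the graph-ts store API — the 'many' side stores a single id, and the 'one' side's collection is computed at query time instead of maintaining an ever-growing stored array:

```typescript
// Each TokenBalance stores the id of its Token (the stored side of the relationship).
interface TokenBalance {
  id: string
  token: string
  amount: number
}

const store: TokenBalance[] = [
  { id: 'b1', token: 'token-A', amount: 10 },
  { id: 'b2', token: 'token-A', amount: 5 },
  { id: 'b3', token: 'token-B', amount: 7 },
]

// Rough analogue of a `balances: [TokenBalance!]! @derivedFrom(field: "token")`
// virtual field: nothing is ever written on the Token side; the reverse lookup
// filters the TokenBalance table only when queried.
function derivedBalances(tokenId: string): TokenBalance[] {
  return store.filter((b: TokenBalance) => b.token === tokenId)
}

console.log(derivedBalances('token-A').map((b) => b.id)) // [ 'b1', 'b2' ]
```

Because indexing never has to rewrite a Token entity just to append to an array, writes stay constant-size no matter how many balances a token accumulates.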
#### Příklad @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Tento propracovanější způsob ukládání vztahů mnoho-více vede k menšímu množství dat uložených pro podgraf, a tedy k podgrafu, který je často výrazně rychlejší při indexování a dotazování. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore a Subgraph that is often dramatically faster to index and to query. ### Přidání komentářů do schématu @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Podporované jazyky diff --git a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx index 436b407a19ba..a0fcb52875ca 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Přehled -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
-When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Verze | Poznámky vydání | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. 
| +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx index a434110b4282..6b5bae4680cd 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Přehled -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
Důležité položky, které je třeba v manifestu aktualizovat, jsou: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Zpracovatelé hovorů -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Obsluhy volání se spustí pouze v jednom ze dvou případů: když je zadaná funkce volána jiným účtem než samotnou smlouvou nebo když je v Solidity označena jako externí a volána jako součást jiné funkce ve stejné smlouvě. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Definice obsluhy volání @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Funkce mapování -Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Obsluha bloků -Kromě přihlášení k událostem smlouvy nebo volání funkcí může podgraf chtít aktualizovat svá data, když jsou do řetězce přidány nové bloky. Za tímto účelem může podgraf spustit funkci po každém bloku nebo po blocích, které odpovídají předem definovanému filtru. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Podporované filtry @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. Protože pro obsluhu bloku neexistuje žádný filtr, zajistí, že obsluha bude volána každý blok. Zdroj dat může obsahovat pouze jednu blokovou obsluhu pro každý typ filtru.
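A toy model of the dispatch rule above — plain TypeScript with assumed semantics, not graph-node internals: with no filter a block handler fires on every block, while a `call`-filtered handler fires only on blocks containing at least one call to the data source contract:

```typescript
interface Block {
  number: number
  callsDataSource: boolean // whether this block contains a call to the data source contract
}

// With no filter the handler fires for every block; with a `call` filter it
// fires only for blocks that call the contract the handler is defined under.
function firedBlocks(blocks: Block[], filter: 'none' | 'call'): number[] {
  return blocks
    .filter((b: Block) => filter === 'none' || b.callsDataSource)
    .map((b: Block) => b.number)
}

const chain: Block[] = [
  { number: 1, callsDataSource: false },
  { number: 2, callsDataSource: true },
  { number: 3, callsDataSource: false },
]

console.log(firedBlocks(chain, 'none')) // [ 1, 2, 3 ]
console.log(firedBlocks(chain, 'call')) // [ 2 ]
```

This is also why an unfiltered block handler is expensive: it runs for every single block, whereas the `call` filter narrows execution to the (usually few) blocks that touch the contract.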
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Jednou Filtr @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Definovaný obslužná rutina s filtrem once bude zavolána pouze jednou před spuštěním všech ostatních rutin. Tato konfigurace umožňuje, aby podgraf používal obslužný program jako inicializační obslužný, který provádí specifické úlohy na začátku indexování. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Funkce mapování -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Výchozí bloky -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
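To make the `startBlock` savings concrete, here is a small plain-TypeScript sketch (illustrative numbers only, not real chain data):

```typescript
// Number of blocks Graph Node must scan for a data source, given the chain
// head and an optional startBlock (defaults to genesis, block 0).
function blocksToScan(chainHead: number, startBlock: number = 0): number {
  return chainHead - startBlock + 1
}

const head = 20_000_000 // hypothetical current chain head
const contractCreated = 6_627_917 // the startBlock used in the manifest example

// Setting startBlock skips every block that predates the contract.
const skipped = blocksToScan(head) - blocksToScan(head, contractCreated)
console.log(skipped) // 6627917
```

Without `startBlock`, those ~6.6 million pre-deployment blocks would be scanned for events that can never occur, which is why setting it to the contract's creation block is the usual recommendation.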
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Tipy indexátor -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prořezávat -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: Uchování určitého množství historických dat: @@ -532,3 +532,18 @@ Zachování kompletní historie entitních států: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Verze | Poznámky vydání | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx index fd0130dd672a..691624b81344 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Rámec pro testování jednotek --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Začínáme @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
### Možnosti CLI @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Ukázkový podgraf +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Videonávody -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im A je to tady - vytvořili jsme první test! 
👏 -Pro spuštění našich testů nyní stačí v kořenové složce podgrafu spustit následující příkaz: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example bellow: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Pokrytí test -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. 
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Další zdroje -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Zpětná vazba diff --git a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx index 77f05e1ad499..e9848601ebc7 100644 --- a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Nasazení podgrafu do více sítí +## Deploying the Subgraph to multiple networks -V některých případech budete chtít nasadit stejný podgraf do více sítí, aniž byste museli duplikovat celý jeho kód. Hlavním problémem, který s tím souvisí, je skutečnost, že smluvní adresy v těchto sítích jsou různé. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Zásady archivace subgrafů Subgraph Studio +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Každý podgraf ovlivněný touto zásadou má možnost vrátit danou verzi zpět. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Kontrola stavu podgrafů +## Checking Subgraph health -Pokud se podgraf úspěšně synchronizuje, je to dobré znamení, že bude dobře fungovat navždy. 
Nové spouštěče v síti však mohou způsobit, že se podgraf dostane do neověřeného chybového stavu, nebo může začít zaostávat kvůli problémům s výkonem či operátory uzlů. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. 
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx index 7c53f174237a..14be0175123c 100644 --- a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Vytváření a správa klíčů API pro konkrétní podgrafy +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### Jak vytvořit podgraf v Podgraf Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Kompatibilita podgrafů se sítí grafů -Aby mohly být podgrafy podporovány indexátory v síti grafů, musí: - -- Index a [supported network](/supported-networks/) -- Nesmí používat žádnou z následujících funkcí: - - ipfs.cat & ipfs.map - - Nefatální - - Roubování +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Autorizace grafu -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatická archivace verzí podgrafů -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/cs/subgraphs/developing/developer-faq.mdx b/website/src/pages/cs/subgraphs/developing/developer-faq.mdx index e07a7f06fb48..2c5d8903c4d9 100644 --- a/website/src/pages/cs/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/cs/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. Co je to podgraf? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Mohu změnit účet GitHub přidružený k mému podgrafu? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Podgraf musíte znovu nasadit, ale pokud se ID podgrafu (hash IPFS) nezmění, nebude se muset synchronizovat od začátku. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -V rámci podgrafu se události zpracovávají vždy v pořadí, v jakém se objevují v blocích, bez ohledu na to, zda se jedná o více smluv. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try out the following command, replacing "organization/subgraphName" with the organization it is published under and the name of your Subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/cs/subgraphs/developing/introduction.mdx b/website/src/pages/cs/subgraphs/developing/introduction.mdx index 110d7639aded..b040c749c6ca 100644 --- a/website/src/pages/cs/subgraphs/developing/introduction.mdx +++ b/website/src/pages/cs/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1.
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
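Since a Subgraph's API is plain GraphQL, a first query makes the idea concrete. The following is a minimal sketch against a hypothetical Subgraph schema — the `tokens` entity and its fields are illustrative placeholders, not part of any specific Subgraph:

```graphql
{
  tokens(first: 5, orderBy: id) {
    id
    symbol
  }
}
```

Arguments such as `first` and `orderBy` are the standard pagination and ordering parameters that Graph Node generates for every entity in a Subgraph's schema.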
diff --git a/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx index 77896e36a45d..b8c2330ca49d 100644 --- a/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Kurátoři již nebudou moci signalizovat na podgrafu. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx index ed8846e26498..29c75273aa17 100644 --- a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Zveřejnění podgrafu v decentralizované síti +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Aktualizace metadata publikovaného podgrafu +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
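Put together, the CLI publish flow described in the steps above might look like this from a terminal — a sketch assuming graph-cli 0.73.0 or later is installed and you are inside your Subgraph project directory:

```
graph codegen && graph build
graph publish
```

`graph publish` then opens the window for connecting your wallet and adding metadata, as described in step 3.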
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Přidání signálu do podgrafu +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Přidání signálu do podgrafu, který nemá nárok na odměny, nepřiláká další indexátory. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Případně můžete přidat signál GRT do publikovaného podgrafu z Průzkumníka grafů. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/cs/subgraphs/developing/subgraphs.mdx b/website/src/pages/cs/subgraphs/developing/subgraphs.mdx index f197aabdc49c..a998db9c316d 100644 --- a/website/src/pages/cs/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/cs/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Podgrafy ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1.
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
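The update flow described above begins with deploying the new version to Subgraph Studio. A minimal sketch, assuming you have already authenticated with `graph auth` and that `my-subgraph` is a placeholder for your Subgraph's Studio slug:

```
graph deploy my-subgraph --version-label v0.0.2
```

The `--version-label` flag sets the version string shown in Subgraph Studio; once the new version is tested, the onchain update transaction points published signal at it.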
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/cs/subgraphs/explorer.mdx b/website/src/pages/cs/subgraphs/explorer.mdx index b679cdbb8c43..2d918567ee9d 100644 --- a/website/src/pages/cs/subgraphs/explorer.mdx +++ b/website/src/pages/cs/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Průzkumník grafů --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
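One query worth trying in the playground is the `_meta` field, which Graph Node exposes on every Subgraph's query API and which reports indexing status — useful context when deciding whether to signal:

```graphql
{
  _meta {
    block {
      number
    }
    hasIndexingErrors
  }
}
```

The response shows the latest block the Subgraph has indexed and whether indexing has hit any errors.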
![Explorer Image 2](/img/Subgraph-Details.png)

-On each subgraph’s dedicated page, you can do the following:
+On each Subgraph’s dedicated page, you can do the following:

-- Signál/nesignál na podgraf
+- Signal/Un-signal on Subgraphs
- Zobrazit další podrobnosti, například grafy, ID aktuálního nasazení a další metadata
-- Přepínání verzí pro zkoumání minulých iterací podgrafu
-- Dotazování na podgrafy prostřednictvím GraphQL
-- Testování podgrafů na hřišti
-- Zobrazení indexátorů, které indexují na určitém podgrafu
+- Switch versions to explore past iterations of the Subgraph
+- Query Subgraphs via GraphQL
+- Test Subgraphs in the playground
+- View the Indexers that are indexing on a certain Subgraph
- Statistiky podgrafů (alokace, kurátoři atd.)
-- Zobrazení subjektu, který podgraf zveřejnil
+- View the entity who published the Subgraph

![Explorer Image 3](/img/Explorer-Signal-Unsignal.png)

@@ -53,7 +53,7 @@ On this page, you can see the following:

- Indexers who collected the most query fees
- Indexers with the highest estimated APR

-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.

### Participants Page

@@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every

![Explorer Image 4](/img/Indexer-Pane.png)

-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.

-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.

**Specifics**

@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s

- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Maximální kapacita delegování - maximální množství delegovaných podílů, které může indexátor produktivně přijmout. Nadměrný delegovaný podíl nelze použít pro alokace nebo výpočty odměn.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.

@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici

#### 2. Kurátoři

-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.

-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
-  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- The bonding curve incentivizes Curators to curate the highest quality data sources.

In the The Curator table listed below you can see:

@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ

A few key details to note:

-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).

![Explorer Image 8](/img/Network-Stats.png)

@@ -178,15 +178,15 @@ In this section, you can view the following:

### Tab Podgrafy

-In the Subgraphs tab, you’ll see your published subgraphs.
+In the Subgraphs tab, you’ll see your published Subgraphs.

-> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.

![Explorer Image 11](/img/Subgraphs-Overview.png)

### Tab Indexování

-In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.

Tato část bude také obsahovat podrobnosti o vašich čistých odměnách za indexování a čistých poplatcích za dotazy.
Zobrazí se následující metriky:

@@ -223,13 +223,13 @@ Nezapomeňte, že tento graf lze horizontálně posouvat, takže pokud se posune

### Tab Kurátorství

-Na kartě Kurátorství najdete všechny dílčí grafy, na které signalizujete (a které vám tak umožňují přijímat poplatky za dotazy). Signalizace umožňuje kurátorům upozornit indexátory na to, které podgrafy jsou hodnotné a důvěryhodné, a tím signalizovat, že je třeba je indexovat.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed.

Na této tab najdete přehled:

-- Všechny dílčí podgrafy, na kterých kurátor pracuje, s podrobnostmi o signálu
-- Celkové podíly na podgraf
-- Odměny za dotaz na podgraf
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Aktualizováno v detailu data

![Explorer Image 14](/img/Curation-Stats.png)

diff --git a/website/src/pages/cs/subgraphs/guides/_meta.js b/website/src/pages/cs/subgraphs/guides/_meta.js
index 37e18bc51651..a1bb04fb6d3f 100644
--- a/website/src/pages/cs/subgraphs/guides/_meta.js
+++ b/website/src/pages/cs/subgraphs/guides/_meta.js
@@ -1,4 +1,5 @@
export default {
+  'subgraph-composition': '',
  'subgraph-debug-forking': '',
  near: '',
  arweave: '',

diff --git a/website/src/pages/cs/subgraphs/guides/arweave.mdx b/website/src/pages/cs/subgraphs/guides/arweave.mdx
index 08e6c4257268..dff8facf77d4 100644
--- a/website/src/pages/cs/subgraphs/guides/arweave.mdx
+++ b/website/src/pages/cs/subgraphs/guides/arweave.mdx
@@ -1,50 +1,50 @@
---
-title: Building Subgraphs on Arweave
+title: Vytváření podgrafů na Arweave
---

> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
-In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+V této příručce se dozvíte, jak vytvořit a nasadit subgrafy pro indexování blockchainu Arweave.

-## What is Arweave?
+## Co je Arweave?

-The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted.
+Protokol Arweave umožňuje vývojářům ukládat data trvale a to je hlavní rozdíl mezi Arweave a IPFS, kde IPFS tuto funkci postrádá; trvalé uložení a soubory uložené na Arweave nelze měnit ani mazat.

-Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check:
+Společnost Arweave již vytvořila řadu knihoven pro integraci protokolu do řady různých programovacích jazyků. Další informace naleznete zde:

- [Arwiki](https://arwiki.wiki/#/en/main)
- [Arweave Resources](https://www.arweave.org/build)

-## What are Arweave Subgraphs?
+## Co jsou podgrafy Arweave?

The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).

[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet.

-## Building an Arweave Subgraph
+## Vytvoření podgrafu Arweave

-To be able to build and deploy Arweave Subgraphs, you need two packages:
+Abyste mohli sestavit a nasadit Arweave Subgraphs, potřebujete dva balíčky:

1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.

-## Subgraph's components
+## Komponenty podgrafu

There are three components of a Subgraph:

### 1. Manifest - `subgraph.yaml`

-Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+Definuje zdroje dat, které jsou předmětem zájmu, a způsob jejich zpracování. Arweave je nový druh datového zdroje.

### 2. Schema - `schema.graphql`

-Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+Zde definujete, na která data se chcete po indexování subgrafu pomocí jazyka GraphQL dotazovat. Je to vlastně podobné modelu pro API, kde model definuje strukturu těla požadavku.

The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).

### 3. AssemblyScript Mappings - `mapping.ts`

-This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed.
+Jedná se o logiku, která určuje, jak mají být data načtena a uložena, když někdo komunikuje se zdroji dat, kterým nasloucháte. Data se přeloží a uloží na základě schématu, které jste uvedli.
During Subgraph development there are two key commands:

@@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes
$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```

-## Subgraph Manifest Definition
+## Definice podgrafu Manifest

The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:

@@ -84,24 +84,24 @@ dataSources:

- Arweave Subgraphs introduce a new kind of data source (`arweave`)
- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
-- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet
+- Zdroje dat Arweave obsahují nepovinné pole source.owner, což je veřejný klíč peněženky Arweave

-Arweave data sources support two types of handlers:
+Datové zdroje Arweave podporují dva typy zpracovatelů:

- `blockHandlers` - Run on every new Arweave block. No source.owner is required.
- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner`

-> The source.owner can be the owner's address, or their Public Key.
-
-> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
-
+> Source.owner může být adresa vlastníka nebo jeho veřejný klíč.
+>
+> Transakce jsou stavebními kameny permaweb Arweave a jsou to objekty vytvořené koncovými uživateli.
+>
> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
-## Schema Definition
+## Definice schématu

Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).

-## AssemblyScript Mappings
+## AssemblyScript Mapování

The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).

@@ -150,7 +150,7 @@ Block handlers receive a `Block`, while transactions receive a `Transaction`.

Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).

-## Deploying an Arweave Subgraph in Subgraph Studio
+## Nasazení podgrafu Arweave v Podgraf Studio

Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.

@@ -158,11 +158,11 @@ Once your Subgraph has been created on your Subgraph Studio dashboard, you can d
graph deploy --access-token
```

-## Querying an Arweave Subgraph
+## Dotazování podgrafu Arweave

The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.

-## Example Subgraphs
+## Příklady podgrafů

Here is an example Subgraph for reference:

@@ -174,19 +174,19 @@ Here is an example Subgraph for reference:

No, a Subgraph can only support data sources from one chain/network.

-### Can I index the stored files on Arweave?
+### Mohu indexovat uložené soubory v Arweave?

-Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).
+V současné době The Graph indexuje pouze Arweave jako blockchain (jeho bloky a transakce).

### Can I identify Bundlr bundles in my Subgraph?
-This is not currently supported.
+Toto není aktuálně podporováno.

-### How can I filter transactions to a specific account?
+### Jak mohu filtrovat transakce na určitý účet?

-The source.owner can be the user's public key or account address.
+Source.owner může být veřejný klíč uživatele nebo adresa účtu.

-### What is the current encryption format?
+### Jaký je aktuální formát šifrování?

Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).

diff --git a/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx
index 084ac8d28a00..9f53796b8066 100644
--- a/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx
+++ b/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx
@@ -2,11 +2,15 @@
title: Smart Contract Analysis with Cana CLI
---

-# Cana CLI: Quick & Efficient Contract Analysis
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.

-**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains.
+## Přehled

-## 📌 Key Features
+**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:

- Detect deployment blocks
- Verify source code

@@ -14,63 +14,75 @@ title: Smart Contract Analysis with Cana CLI

- Identify proxy and implementation contracts
- Support multiple chains

-## 🚀 Installation & Setup
+### Prerequisites
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup

-Install Cana globally using npm:
+1. Install Cana CLI
+
+Use npm to install it globally:

```bash
npm install -g contract-analyzer
```

-Set up a blockchain for analysis:
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:

```bash
cana setup
```

-Provide the required block explorer API and block explorer endpoint URL details when prompted.
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.

-Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.

-## 🍳 Usage
+### Steps: Using Cana CLI for Smart Contract Analysis

-### 🔹 Chain Selection
+#### 1. Select a Chain

-Cana supports multiple EVM-compatible chains.
+Cana CLI supports multiple EVM-compatible chains.

-List chains added with:
+For a list of chains added run this command:

```bash
cana chains
```

-Then select a chain with:
+Then select a chain with this command:

```bash
cana chains --switch
```

-Once a chain is selected, all subsequent contract analases will continue on that chain.
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
-### 🔹 Basic Contract Analysis
+#### 2. Basic Contract Analysis

-Analyze a contract with:
+Run the following command to analyze a contract:

```bash
cana analyze 0xContractAddress
```

-or
+nebo

```bash
cana -a 0xContractAddress
```

-This command displays essential contract information in the terminal using a clear, organized format.
+This command fetches and displays essential contract information in the terminal using a clear, organized format.

-### 🔹 Understanding Output
+#### 3. Understanding the Output

-Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved:
+Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved:

```
contracts-analyzed/
@@ -80,24 +96,22 @@
└── event-information.json # Event signatures and examples
```

-### 🔹 Chain Management
+This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development.
+
+#### 4. Chain Management

Add and manage chains:

```bash
-cana setup # Add a new chain
-cana chains # List configured chains
-cana chains -s # Swich chains.
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
```

-## ⚠️ Troubleshooting
+### Troubleshooting

-- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions.
+Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.

-## ✅ Requirements
-
-- Node.js v16+
-- npm v6+
-- Block explorer API keys
+### Závěr

-Keep your contract analyses efficient and well-organized. 🚀
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease.
diff --git a/website/src/pages/cs/subgraphs/guides/enums.mdx b/website/src/pages/cs/subgraphs/guides/enums.mdx
index 9f55ae07c54b..7cc0e6c0ed78 100644
--- a/website/src/pages/cs/subgraphs/guides/enums.mdx
+++ b/website/src/pages/cs/subgraphs/guides/enums.mdx
@@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent
}
```

-## Additional Resources
+## Další zdroje

For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).

diff --git a/website/src/pages/cs/subgraphs/guides/grafting.mdx b/website/src/pages/cs/subgraphs/guides/grafting.mdx
index d9abe0e70d2a..a7bad43c9c1f 100644
--- a/website/src/pages/cs/subgraphs/guides/grafting.mdx
+++ b/website/src/pages/cs/subgraphs/guides/grafting.mdx
@@ -1,46 +1,46 @@
---
-title: Replace a Contract and Keep its History With Grafting
+title: Nahrazení smlouvy a zachování její historie pomocí roubování
---

In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.

-## What is Grafting?
+## Co je to roubování?

Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch.

The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:

-- It adds or removes entity types
-- It removes attributes from entity types
-- It adds nullable attributes to entity types
-- It turns non-nullable attributes into nullable attributes
-- It adds values to enums
-- It adds or removes interfaces
-- It changes for which entity types an interface is implemented
+- Přidává nebo odebírá typy entit
+- Odstraňuje atributy z typů entit
+- Přidává nulovatelné atributy k typům entit
+- Mění nenulovatelné atributy na nulovatelné atributy
+- Přidává hodnoty do enums
+- Přidává nebo odebírá rozhraní
+- Mění se, pro které typy entit je rozhraní implementováno

-For more information, you can check:
+Další informace naleznete na:

- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)

In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.

-## Important Note on Grafting When Upgrading to the Network
+## Důležité upozornění k roubování při aktualizaci na síť

> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network

-### Why Is This Important?
+### Proč je to důležité?

Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.

-### Best Practices
+### Osvědčené postupy

**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.
**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.

-By adhering to these guidelines, you minimize risks and ensure a smoother migration process.
+Dodržováním těchto pokynů minimalizujete rizika a zajistíte hladší průběh migrace.

-## Building an Existing Subgraph
+## Vytvoření existujícího podgrafu

Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided:

@@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h

> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).

-## Subgraph Manifest Definition
+## Definice podgrafu Manifest

The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use:

@@ -83,7 +83,7 @@ dataSources:

- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.

-## Grafting Manifest Definition
+## Definice manifestu roubování

Grafting requires adding two new items to the original Subgraph manifest:

@@ -101,7 +101,7 @@ graft:

The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting

-## Deploying the Base Subgraph
+## Nasazení základního podgrafu

1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo

@@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t

}
```

-It returns something like this:
+Vrátí něco takového:

```
{
@@ -140,9 +140,9 @@ It returns something like this:

Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.

-## Deploying the Grafting Subgraph
+## Nasazení podgrafu roubování

-The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
+Náhradní podgraf.yaml bude mít novou adresu smlouvy. K tomu může dojít při aktualizaci dapp, novém nasazení kontraktu atd.

1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio.

@@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could

}
```

-It should return the following:
+Měla by vrátit následující:

```
{
@@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph-

Congrats! You have successfully grafted a Subgraph onto another Subgraph.
-## Additional Resources
+## Další zdroje

If you want more experience with grafting, here are a few examples for popular contracts:

diff --git a/website/src/pages/cs/subgraphs/guides/near.mdx b/website/src/pages/cs/subgraphs/guides/near.mdx
index e78a69eb7fa2..275c2aba0fd4 100644
--- a/website/src/pages/cs/subgraphs/guides/near.mdx
+++ b/website/src/pages/cs/subgraphs/guides/near.mdx
@@ -1,10 +1,10 @@
---
-title: Building Subgraphs on NEAR
+title: Vytváření podgrafů v NEAR
---

This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).

-## What is NEAR?
+## Co je NEAR?

[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.

@@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul

Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:

-- Block handlers: these are run on every new block
-- Receipt handlers: run every time a message is executed at a specified account
+- Obsluhy bloků: jsou spouštěny při každém novém bloku.
+- Obsluhy příjmu: spouštějí se pokaždé, když je zpráva provedena na zadaném účtu.

[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):

-> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point.
+> Příjemka je jediným objektem, který lze v systému použít. Když na platformě NEAR hovoříme o "zpracování transakce", znamená to v určitém okamžiku "použití účtenky".

-## Building a NEAR Subgraph
+## Sestavení podgrafu NEAR

`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
@@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes
$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```

-### Subgraph Manifest Definition
+### Definice podgrafu Manifest

The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:

@@ -85,16 +85,16 @@ accounts:
- morning.testnet
```

-NEAR data sources support two types of handlers:
+Zdroje dat NEAR podporují dva typy zpracovatelů:

- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).

-### Schema Definition
+### Definice schématu

Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).

-### AssemblyScript Mappings
+### AssemblyScript Mapování

The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).

@@ -169,7 +169,7 @@ Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/g

This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
-## Deploying a NEAR Subgraph +## Nasazení podgrafu NEAR Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). @@ -191,14 +191,14 @@ $ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ ``` -### Local Graph Node (based on default configuration) +### Místní uzel grafu (na základě výchozí konfigurace) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 @@ -216,21 +216,21 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. You can } ``` -### Indexing NEAR with a Local Graph Node +### Indexování NEAR pomocí místního uzlu grafu -Running a Graph Node that indexes NEAR has the following operational requirements: +Spuštění uzlu grafu, který indexuje NEAR, má následující provozní požadavky: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- Framework NEAR Indexer s instrumentací Firehose +- Komponenta(y) NEAR Firehose +- Uzel Graph s nakonfigurovaným koncovým bodem Firehose -We will provide more information on running the above components soon. +Brzy vám poskytneme další informace o provozu výše uvedených komponent. -## Querying a NEAR Subgraph +## Dotazování podgrafu NEAR The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## Příklady podgrafů Here are some example Subgraphs for reference: @@ -240,7 +240,7 @@ Here are some example Subgraphs for reference: ## FAQ -### How does the beta work? +### Jak funguje beta verze? NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration.
Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! @@ -250,9 +250,9 @@ No, a Subgraph can only support data sources from one chain/network. ### Can Subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +V současné době jsou podporovány pouze spouštěče Blok a Příjem. Zkoumáme spouštěče pro volání funkcí na zadaném účtu. Máme také zájem o podporu spouštěčů událostí, jakmile bude mít NEAR nativní podporu událostí. -### Will receipt handlers trigger for accounts and their sub-accounts? +### Budou se obsluhy příjmu spouštět pro účty a jejich podúčty? If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: @@ -264,11 +264,11 @@ accounts: ### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? -This is not supported. We are evaluating whether this functionality is required for indexing. +To není podporováno. Vyhodnocujeme, zda je tato funkce pro indexování nutná. ### Can I use data source templates in my NEAR Subgraph? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +Tato funkce není v současné době podporována. Vyhodnocujeme, zda je tato funkce pro indexování nutná. ### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? @@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. 
In the interim, y If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. -## References +## Odkazy - [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..d311cfa5117e 100644 --- a/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -1,22 +1,22 @@ --- -title: How to Secure API Keys Using Next.js Server Components +title: Jak zabezpečit klíče API pomocí serverových komponent Next.js --- -## Overview +## Přehled We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. -### Caveats +### Upozornění -- Next.js server components do not protect API keys from being drained using denial of service attacks. -- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. -- Next.js server components introduce centralization risks as the server can go down. +- Serverové komponenty Next.js nechrání klíče API před odčerpáním pomocí útoků typu odepření služby.
+- Brány sítě The Graph mají zavedené strategie pro detekci a zmírňování útoků typu odepření služby, avšak použití serverových komponent může tyto ochrany oslabit. +- Serverové komponenty Next.js přinášejí rizika centralizace, protože může dojít k výpadku serveru. -### Why It's Needed +### Proč je to důležité -In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. +Ve standardní aplikaci React mohou být klíče API obsažené v kódu frontendu vystaveny na straně klienta, což představuje bezpečnostní riziko. Soubory `.env` se sice běžně používají, ale plně klíče nechrání, protože kód Reactu se spouští na straně klienta a vystavuje klíč API v hlavičkách. Serverové komponenty Next.js tento problém řeší tím, že citlivé operace zpracovávají na straně serveru. ### Using client-side rendering to query a Subgraph @@ -24,25 +24,25 @@ In a standard React application, API keys included in the frontend code can be e ### Prerequisites -- An API key from [Subgraph Studio](https://thegraph.com/studio) -- Basic knowledge of Next.js and React. -- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). +- Klíč API od [Subgraph Studio](https://thegraph.com/studio) +- Základní znalosti Next.js a React. +- Existující projekt Next.js, který používá [App Router](https://nextjs.org/docs/app). -## Step-by-Step Cookbook +## Kuchařka krok za krokem -### Step 1: Set Up Environment Variables +### Krok 1: Nastavení proměnných prostředí -1. In our Next.js project root, create a `.env.local` file. -2. Add our API key: `API_KEY=`. +1. V kořeni našeho projektu Next.js vytvořte soubor `.env.local`. +2. Přidejte náš klíč API: `API_KEY=`.
-### Step 2: Create a Server Component +### Krok 2: Vytvoření součásti serveru -1. In our `components` directory, create a new file, `ServerComponent.js`. -2. Use the provided example code to set up the server component. +1. V adresáři `components` vytvořte nový soubor `ServerComponent.js`. +2. K nastavení komponenty serveru použijte přiložený ukázkový kód. -### Step 3: Implement Server-Side API Request +### Krok 3: Implementace požadavku API na straně serveru -In `ServerComponent.js`, add the following code: +Do souboru `ServerComponent.js` přidejte následující kód: ```javascript const API_KEY = process.env.API_KEY @@ -95,10 +95,10 @@ export default async function ServerComponent() { } ``` -### Step 4: Use the Server Component +### Krok 4: Použití komponenty serveru -1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. -2. Render the component: +1. V našem souboru stránky (např. `pages/index.js`) importujte `ServerComponent`. +2. Vykreslení komponenty: ```javascript import ServerComponent from './components/ServerComponent' @@ -112,12 +112,12 @@ export default function Home() { } ``` -### Step 5: Run and Test Our Dapp +### Krok 5: Spusťte a otestujte náš Dapp -Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. +Spusťte naši aplikaci Next.js pomocí `npm run dev`. Ověřte, že serverová komponenta načítá data bez vystavení klíče API. ![Server-side rendering](/img/api-key-server-side-rendering.png) -### Conclusion +### Závěr By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. 
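The pattern this guide teaches can be reduced to a minimal server-only sketch. This is a hypothetical illustration, not the guide's `ServerComponent.js` itself: the subgraph ID is a placeholder, and the key falls back to a dummy value only so the snippet runs outside Next.js. The point is that the key is interpolated on the server, so it never appears in the client bundle.

```javascript
// Hedged sketch: the server-side request construction behind a component
// like ServerComponent.js. <SUBGRAPH_ID> is a placeholder, and the
// 'test-key' fallback exists only for local experimentation.
const API_KEY = process.env.API_KEY ?? 'test-key'

function buildSubgraphRequest(query) {
  // The key is read from the server environment here; nothing in this
  // object is ever serialized into client-side JavaScript.
  return {
    url: `https://gateway.thegraph.com/api/${API_KEY}/subgraphs/id/<SUBGRAPH_ID>`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    },
  }
}
```

A server component would then `fetch(req.url, req.options)` and render the result, as the full example above does.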
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..f5480ab15a48 --- /dev/null +++ b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Úvod + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. 
+ +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but entities composed on top of them cannot apply aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e., you can’t use regular event, call, or block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs. + +## Začněte + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Další zdroje + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
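For reference, a composed Subgraph declares its source Subgraph in the manifest as a `subgraph`-kind data source instead of an onchain one. The fragment below is an assumption-level sketch of roughly what such a `subgraph.yaml` entry looks like under specVersion 1.3.0 — the deployment ID, network, file paths, and handler/entity names are placeholders; consult the graph-node v0.37.0 release notes and the example repository linked above for the authoritative format.

```yaml
# Hypothetical sketch of a composed Subgraph manifest; all IDs and names
# below are placeholders.
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph
    name: BlockTime
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentId' # deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - Block
      handlers:
        - handler: handleBlock
          entity: Block # entity trigger: runs when the source Subgraph stores a Block
```

Note how the `source.address` holds a Subgraph deployment ID rather than a contract address — this is why any redeployment of a source Subgraph requires updating the composed Subgraph's manifest, as the note above warns.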
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..60ad21d2fe95 100644 --- a/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx @@ -1,22 +1,22 @@ --- -title: Quick and Easy Subgraph Debugging Using Forks +title: Rychlé a snadné ladění podgrafů pomocí forků --- As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! -## Ok, what is it? +## Ok, co to je? **Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. -## What?! How? +## Co?! Jak? When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. -## Please, show me some code!
+## Ukažte mi prosím nějaký kód! To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. @@ -46,31 +46,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. -The usual way to attempt a fix is: +Obvyklý způsob, jak se pokusit o opravu, je: -1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). +1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší (i když já vím, že ne). 2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +3. Počkejte na synchronizaci. +4. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue. +1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší. 2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +3. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá! -Now, you may have 2 questions: +Nyní můžete mít 2 otázky: -1. fork-base what??? -2. Forking who?! +1. fork-base co??? +2. Forkování koho?! -And I answer: +A já odpovídám: 1.
`fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +2. Forkování je snadné, není třeba se potit: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 @@ -78,7 +78,7 @@ $ graph deploy --debug-fork --ipfs http://localhos Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! -So, here is what I do: +Takže to dělám takhle: 1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). @@ -97,5 +97,5 @@ $ cargo run -p graph-node --release -- \ $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` -4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. +4. Zkontroluji protokoly vytvořené místním uzlem grafu a hurá, zdá se, že vše funguje. 5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after!
(no potatoes tho) diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..bdc3671399e1 100644 --- a/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,10 +1,10 @@ --- -title: Safe Subgraph Code Generator +title: Generátor kódu bezpečného podgrafu --- [Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. -## Why integrate with Subgraph Uncrashable? +## Proč se integrovat s aplikací Subgraph Uncrashable? - **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. @@ -16,11 +16,11 @@ title: Safe Subgraph Code Generator - The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. +- Framework také obsahuje způsob (prostřednictvím konfiguračního souboru), jak vytvořit vlastní, ale bezpečné funkce setteru pro skupiny proměnných entit. Tímto způsobem není možné, aby uživatel načetl/použil zastaralou entitu grafu, a také není možné zapomenout uložit nebo nastavit proměnnou, kterou funkce vyžaduje. 
- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. -Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. +Subgraph Uncrashable lze spustit jako volitelný příznak pomocí příkazu Graph CLI codegen. ```sh graph codegen -u [options] [] diff --git a/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..510b0ea317f6 100644 --- a/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1.
Set Up Your Studio Environment @@ -31,7 +31,7 @@ You must have [Node.js](https://nodejs.org/) and a package manager of your choic On your local machine, run the following command: -Using [npm](https://www.npmjs.com/): +Použitím [npm](https://www.npmjs.com/): ```sh npm install -g @graphprotocol/graph-cli@latest @@ -74,7 +74,7 @@ graph deploy --ipfs-hash You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. -#### Example +#### Příklad [CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### Další zdroje - To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). - To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/cs/subgraphs/querying/best-practices.mdx b/website/src/pages/cs/subgraphs/querying/best-practices.mdx index a28d505b9b46..038319488eda 100644 --- a/website/src/pages/cs/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/cs/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Osvědčené postupy dotazování The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. 
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Plně zadaný výsledekv @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/cs/subgraphs/querying/from-an-application.mdx b/website/src/pages/cs/subgraphs/querying/from-an-application.mdx index b5e719983167..ef667e6b74c2 100644 --- a/website/src/pages/cs/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/cs/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Dotazování z aplikace +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Plně zadaný výsledekv @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Krok 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Krok 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Krok 1 diff --git a/website/src/pages/cs/subgraphs/querying/graph-client/README.md b/website/src/pages/cs/subgraphs/querying/graph-client/README.md index 416cadc13c6f..5dc2cfc408de 100644 --- a/website/src/pages/cs/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/cs/subgraphs/querying/graph-client/README.md @@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## Začínáme You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### Příklady You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/cs/subgraphs/querying/graph-client/live.md b/website/src/pages/cs/subgraphs/querying/graph-client/live.md index e6f726cb4352..0e3b535bd5d6 100644 --- a/website/src/pages/cs/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/cs/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## Začínáme Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx index f0cc9b78b338..e5dc52ccce1f 100644 --- a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). 
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -To může být užitečné, pokud chcete načíst pouze entity, které se změnily například od posledního dotazování. Nebo může být užitečná pro zkoumání nebo ladění změn entit v podgrafu (v kombinaci s blokovým filtrem můžete izolovat pouze entity, které se změnily v určitém bloku). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltextové Vyhledávání dotazy -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. 
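The comparison suffixes (`_gt`, `_lte`) and the `_change_block(number_gte: Int)` filter covered in the hunk above compose inside a single `where` clause. A hedged sketch against a hypothetical `tokens` entity (entity and field names are illustrative):

```graphql
{
  tokens(where: { amount_gt: "1000", _change_block: { number_gte: 14711000 } }) {
    id
    amount
  }
}
```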
Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Metadata podgrafů -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. 
This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -Pokud je uveden blok, metadata se vztahují k tomuto bloku, pokud ne, použije se poslední indexovaný blok. Pokud je blok uveden, musí se nacházet za počátečním blokem podgrafu a musí být menší nebo roven poslednímu Indevovaný bloku. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ Pokud je uveden blok, metadata se vztahují k tomuto bloku, pokud ne, použije s - hash: hash bloku - číslo: číslo bloku -- timestamp: časové razítko bloku, pokud je k dispozici (v současné době je k dispozici pouze pro podgrafy indexující sítě EVM) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/cs/subgraphs/querying/introduction.mdx b/website/src/pages/cs/subgraphs/querying/introduction.mdx index 19ecde83f4a8..6169df767051 100644 --- a/website/src/pages/cs/subgraphs/querying/introduction.mdx +++ b/website/src/pages/cs/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Přehled -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. 
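A client can use the `_meta` fields discussed above — `block.number` and `hasIndexingErrors` — to guard against stale or errored data before trusting query results. A small sketch; the sample payload is illustrative, shaped like the `_meta` query result in the docs:

```python
def check_meta(meta: dict, min_block: int) -> bool:
    """Return True when the Subgraph has indexed past `min_block` without errors."""
    return (not meta["hasIndexingErrors"]) and meta["block"]["number"] >= min_block

# Illustrative _meta payload, shaped like the query result shown in the docs.
sample = {"block": {"number": 21000000, "hash": "0xabc"}, "hasIndexingErrors": False}
fresh = check_meta(sample, 20999000)
```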
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx index 0f5721e5cbcb..f2954c5593c0 100644 --- a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: Správa klíčů API +title: Managing API keys --- ## Přehled -API keys are needed to query subgraphs. 
They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Výše vynaložených GRT 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Zobrazení a správa názvů domén oprávněných používat váš klíč API - - Přiřazení podgrafů, na které se lze dotazovat pomocí klíče API + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/cs/subgraphs/querying/python.mdx b/website/src/pages/cs/subgraphs/querying/python.mdx index 669e95c19183..51e3b966a2b5 100644 --- a/website/src/pages/cs/subgraphs/querying/python.mdx +++ b/website/src/pages/cs/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds je intuitivní knihovna Pythonu pro dotazování na podgrafy, vytvořená [Playgrounds](https://playgrounds.network/). 
Umožňuje přímo připojit data subgrafů k datovému prostředí Pythonu, což vám umožní používat knihovny jako [pandas](https://pandas.pydata.org/) k provádění analýzy dat! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds nabízí jednoduché Pythonic API pro vytváření dotazů GraphQL, automatizuje zdlouhavé pracovní postupy, jako je stránkování, a umožňuje pokročilým uživatelům řízené transformace schémat. @@ -17,24 +17,24 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Po instalaci můžete vyzkoušet podklady pomocí následujícího dotazu. Následující příklad uchopí podgraf pro protokol Aave v2 a dotazuje se na 5 největších trhů seřazených podle TVL (Total Value Locked), vybere jejich název a jejich TVL (v USD) a vrátí data jako pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). 
```python from subgrounds import Subgrounds sg = Subgrounds() -# Načtení podgrafu +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# Sestavte dotaz +# Construct the query latest_markets = aave_v2.Query.markets( orderBy=aave_v2.Market.totalValueLockedUSD, - orderDirection="desc", + orderDirection='desc', first=5, ) -# Vrátit dotaz do datového rámce +# Return query to a dataframe sg.query_df([ latest_markets.name, latest_markets.totalValueLockedUSD, diff --git a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 7bef9e129e33..7792cb56d855 100644 --- a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: ID podgrafu vs. ID nasazení --- -Podgraf je identifikován ID podgrafu a každá verze podgrafu je identifikována ID nasazení. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## ID nasazení -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. 
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Příklad koncového bodu, který používá ID nasazení: @@ -20,8 +20,8 @@ Příklad koncového bodu, který používá ID nasazení: ## ID podgrafu -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. 
It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/cs/subgraphs/quick-start.mdx b/website/src/pages/cs/subgraphs/quick-start.mdx index 130f699763ce..7c52d4745a83 100644 --- a/website/src/pages/cs/subgraphs/quick-start.mdx +++ b/website/src/pages/cs/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Rychlé spuštění --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. 
+Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Nainstalujte Graph CLI @@ -37,13 +37,13 @@ Použitím [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> Příkazy pro konkrétní podgraf najdete na stránce podgrafu v [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. 
+- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -Na následujícím snímku najdete příklad toho, co můžete očekávat při inicializaci podgrafu: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. 
-When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. 
A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Jakmile je podgraf napsán, spusťte následující příkazy: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. 
Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3.
A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Přidání signálu do podgrafu +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! 
-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/cs/substreams/developing/dev-container.mdx b/website/src/pages/cs/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/cs/substreams/developing/dev-container.mdx +++ b/website/src/pages/cs/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. 
For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/cs/substreams/developing/sinks.mdx b/website/src/pages/cs/substreams/developing/sinks.mdx index f87e46464532..d89161878fc9 100644 --- a/website/src/pages/cs/substreams/developing/sinks.mdx +++ b/website/src/pages/cs/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. 
## Sinks diff --git a/website/src/pages/cs/substreams/developing/solana/account-changes.mdx b/website/src/pages/cs/substreams/developing/solana/account-changes.mdx index 8c309bbcce31..98da6949aef4 100644 --- a/website/src/pages/cs/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/cs/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/cs/substreams/developing/solana/transactions.mdx b/website/src/pages/cs/substreams/developing/solana/transactions.mdx index a50984178cd8..a5415dcfd8e4 100644 --- a/website/src/pages/cs/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/cs/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Podgrafy 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/cs/substreams/introduction.mdx b/website/src/pages/cs/substreams/introduction.mdx index 57d215576f60..d68760ad1432 100644 --- a/website/src/pages/cs/substreams/introduction.mdx +++ b/website/src/pages/cs/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/cs/substreams/publishing.mdx b/website/src/pages/cs/substreams/publishing.mdx index 8e71c65c2eed..19415c7860d8 100644 --- a/website/src/pages/cs/substreams/publishing.mdx +++ b/website/src/pages/cs/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/cs/supported-networks.mdx b/website/src/pages/cs/supported-networks.mdx index 6ccb230d548f..863814948ba7 100644 --- a/website/src/pages/cs/supported-networks.mdx +++ b/website/src/pages/cs/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. 
Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/cs/token-api/_meta-titles.json b/website/src/pages/cs/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/cs/token-api/_meta-titles.json +++ b/website/src/pages/cs/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/cs/token-api/_meta.js b/website/src/pages/cs/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/cs/token-api/_meta.js +++ b/website/src/pages/cs/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/cs/token-api/faq.mdx b/website/src/pages/cs/token-api/faq.mdx new file mode 100644 index 000000000000..83196959be14 --- /dev/null +++ b/website/src/pages/cs/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Obecný + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. 
JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. 
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error.
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/cs/token-api/mcp/claude.mdx b/website/src/pages/cs/token-api/mcp/claude.mdx index 0da8f2be031d..aabd9c69d69a 100644 --- a/website/src/pages/cs/token-api/mcp/claude.mdx +++ b/website/src/pages/cs/token-api/mcp/claude.mdx @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## Konfigurace Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. 
```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/cs/token-api/mcp/cline.mdx b/website/src/pages/cs/token-api/mcp/cline.mdx index ab54c0c8f6f0..2e8f478f68c1 100644 --- a/website/src/pages/cs/token-api/mcp/cline.mdx +++ b/website/src/pages/cs/token-api/mcp/cline.mdx @@ -10,9 +10,9 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. - The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## Konfigurace Create or edit your `cline_mcp_settings.json` file. diff --git a/website/src/pages/cs/token-api/mcp/cursor.mdx b/website/src/pages/cs/token-api/mcp/cursor.mdx index 658108d1337b..fac3a1a1af73 100644 --- a/website/src/pages/cs/token-api/mcp/cursor.mdx +++ b/website/src/pages/cs/token-api/mcp/cursor.mdx @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## Konfigurace Create or edit your `~/.cursor/mcp.json` file. 
diff --git a/website/src/pages/cs/token-api/quick-start.mdx b/website/src/pages/cs/token-api/quick-start.mdx index 4653c3d41ac6..4083154b5a8b 100644 --- a/website/src/pages/cs/token-api/quick-start.mdx +++ b/website/src/pages/cs/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Rychlé spuštění --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/de/about.mdx b/website/src/pages/de/about.mdx index 61dbccdd5c84..30ff84ae06f0 100644 --- a/website/src/pages/de/about.mdx +++ b/website/src/pages/de/about.mdx @@ -30,25 +30,25 @@ Blockchain-Eigenschaften wie Endgültigkeit, Umstrukturierung der Kette und nich ## The Graph bietet eine Lösung -The Graph löst diese Herausforderung mit einem dezentralen Protokoll, das Blockchain-Daten indiziert und eine effiziente und leistungsstarke Abfrage ermöglicht. Diese APIs (indizierte „Subgraphen“) können dann mit einer Standard-GraphQL-API abgefragt werden. +The Graph löst diese Herausforderung mit einem dezentralen Protokoll, das die Blockchain-Daten indiziert und eine effiziente und leistungsstarke Abfrage ermöglicht. Diese APIs (indizierte „Subgraphen“) können dann mit einer Standard-GraphQL-API abgefragt werden. Heute gibt es ein dezentralisiertes Protokoll, das durch die Open-Source-Implementierung von [Graph Node](https://github.com/graphprotocol/graph-node) unterstützt wird und diesen Prozess ermöglicht. ### Die Funktionsweise von The Graph -Die Indizierung von Blockchain-Daten ist sehr schwierig, aber The Graph macht es einfach. The Graph lernt, wie man Ethereum-Daten mit Hilfe von Subgraphen indiziert. Subgraphs sind benutzerdefinierte APIs, die auf Blockchain-Daten aufgebaut sind. Sie extrahieren Daten aus einer Blockchain, verarbeiten sie und speichern sie so, dass sie nahtlos über GraphQL abgefragt werden können. 
+Die Indexierung von Blockchain-Daten ist sehr schwierig, aber The Graph macht es einfach. The Graph lernt, wie man Ethereum-Daten mit Hilfe von Subgraphen indizieren kann. Subgraphen sind benutzerdefinierte APIs, die auf Blockchain-Daten aufgebaut sind. Sie extrahieren Daten aus einer Blockchain, verarbeiten sie und speichern sie so, dass sie nahtlos über GraphQL abgefragt werden können. #### Besonderheiten -- The Graph verwendet Subgraph-Beschreibungen, die als Subgraph Manifest innerhalb des Subgraphen bekannt sind. +- The Graph verwendet Subgraph-Beschreibungen, die als Subgraph-Manifest innerhalb des Subgraphen bekannt sind. -- Die Beschreibung des Subgraphs beschreibt die Smart Contracts, die für einen Subgraph von Interesse sind, die Ereignisse innerhalb dieser Verträge, auf die man sich konzentrieren sollte, und wie man die Ereignisdaten den Daten zuordnet, die The Graph in seiner Datenbank speichern wird. +- Die Subgraph-Beschreibung beschreibt die Smart Contracts, die für einen Subgraphen von Interesse sind, die Ereignisse innerhalb dieser Verträge, auf die man sich konzentrieren soll, und wie man die Ereignisdaten den Daten zuordnet, die The Graph in seiner Datenbank speichern wird. -- Wenn Sie einen Subgraphen erstellen, müssen Sie ein Subgraph Manifest schreiben. +- Wenn Sie einen Subgraphen erstellen, müssen Sie ein Subgraphenmanifest schreiben. -- Nachdem Sie das `Subgraph Manifest` geschrieben haben, können Sie das Graph CLI verwenden, um die Definition im IPFS zu speichern und einen Indexer anzuweisen, mit der Indizierung der Daten für diesen Subgraphen zu beginnen. +- Nachdem Sie das `Subgraphenmanifest` geschrieben haben, können Sie das Graph CLI verwenden, um die Definition im IPFS zu speichern und einen Indexer anzuweisen, mit der Indizierung von Daten für diesen Subgraphen zu beginnen. 
-Das nachstehende Diagramm enthält detailliertere Informationen über den Datenfluss, nachdem ein Subgraph Manifest mit Ethereum-Transaktionen bereitgestellt worden ist. +Das nachstehende Diagramm enthält detailliertere Informationen über den Datenfluss, nachdem ein Subgraph-Manifest mit Ethereum-Transaktionen bereitgestellt wurde. ![Eine graphische Darstellung, die erklärt, wie The Graph Graph Node verwendet, um Abfragen an Datenkonsumenten zu stellen](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Der Ablauf ist wie folgt: 1. Eine Dapp fügt Ethereum durch eine Transaktion auf einem Smart Contract Daten hinzu. 2. Der Smart Contract gibt während der Verarbeitung der Transaktion ein oder mehrere Ereignisse aus. -3. Graph Node scannt Ethereum kontinuierlich nach neuen Blöcken und den darin enthaltenen Daten für Ihren Subgraphen. -4. Graph Node findet Ethereum-Ereignisse für Ihren Subgraphen in diesen Blöcken und führt die von Ihnen bereitgestellten Mapping-Handler aus. Das Mapping ist ein WASM-Modul, das die Dateneinheiten erstellt oder aktualisiert, die Graph Node als Reaktion auf Ethereum-Ereignisse speichert. +3. Graph Node scannt Ethereum kontinuierlich nach neuen Blöcken und den darin enthaltenen Daten für Ihren Subgraph. +4. Graph Node findet in diesen Blöcken Ethereum-Ereignisse für Ihren Subgraph und führt die von Ihnen bereitgestellten Mapping-Handler aus. Das Mapping ist ein WASM-Modul, das die Dateneinheiten erstellt oder aktualisiert, die Graph Node als Reaktion auf Ethereum-Ereignisse speichert. 5. Die Dapp fragt den Graph Node über den [GraphQL-Endpunkt](https://graphql.org/learn/) des Knotens nach Daten ab, die von der Blockchain indiziert wurden. Der Graph Node wiederum übersetzt die GraphQL-Abfragen in Abfragen für seinen zugrundeliegenden Datenspeicher, um diese Daten abzurufen, wobei er die Indexierungsfunktionen des Speichers nutzt. 
Die Dapp zeigt diese Daten in einer reichhaltigen Benutzeroberfläche für die Endnutzer an, mit der diese dann neue Transaktionen auf Ethereum durchführen können. Der Zyklus wiederholt sich. ## Nächste Schritte -In den folgenden Abschnitten werden die Subgraphen, ihr Einsatz und die Datenabfrage eingehender behandelt. +In den folgenden Abschnitten werden die Subgraphen, ihr Einsatz und die Datenabfrage näher erläutert. -Bevor Sie Ihren eigenen Subgraphen schreiben, sollten Sie den [Graph Explorer](https://thegraph.com/explorer) erkunden und sich einige der bereits vorhandenen Subgraphen ansehen. Die Seite jedes Subgraphen enthält eine GraphQL- Playground, mit der Sie seine Daten abfragen können. +Bevor Sie Ihren eigenen Subgraph schreiben, sollten Sie den [Graph Explorer](https://thegraph.com/explorer) erkunden und sich einige der bereits eingesetzten Subgraphen ansehen. Die Seite jedes Subgraphen enthält eine GraphQL-Spielwiese, mit der Sie seine Daten abfragen können. diff --git a/website/src/pages/de/archived/_meta-titles.json b/website/src/pages/de/archived/_meta-titles.json index 9501304a4305..68385040140c 100644 --- a/website/src/pages/de/archived/_meta-titles.json +++ b/website/src/pages/de/archived/_meta-titles.json @@ -1,3 +1,3 @@ { - "arbitrum": "Scaling with Arbitrum" + "arbitrum": "Skalierung mit Arbitrum" } diff --git a/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx index 54809f94fd9c..6fa6fbe5faaf 100644 --- a/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ Durch die Skalierung von The Graph auf L2 können die Netzwerkteilnehmer nun von - Von Ethereum übernommene Sicherheit -Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. 
So können Indexer beispielsweise häufiger Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen zu indexieren. Entwickler können Subgraphen leichter bereitstellen und aktualisieren, und Delegatoren können GRT häufiger delegieren. Kuratoren können einer größeren Anzahl von Subgraphen Signale hinzufügen oder entfernen - Aktionen, die bisher aufgrund der Kosten zu kostspielig waren, um sie häufig durchzuführen. +Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. So können Indexer beispielsweise häufiger Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen zu indexieren. Entwickler können Subgraphen leichter einsetzen und aktualisieren, und Delegatoren können GRT häufiger delegieren. Kuratoren können einer größeren Anzahl von Subgraphen Signale hinzufügen oder entfernen - Aktionen, die bisher aufgrund der Gaskosten zu kostspielig waren, um sie häufig durchzuführen. Die The Graph-Community beschloss letztes Jahr nach dem Ergebnis der [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)-Diskussion, mit Arbitrum weiterzumachen. @@ -39,7 +39,7 @@ Um die Vorteile von The Graph auf L2 zu nutzen, verwenden Sie diesen Dropdown-Sc ![Dropdown-Schalter zum Aktivieren von Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Was muss ich als Entwickler von Subgraphen, Datenkonsument, Indexer, Kurator oder Delegator jetzt tun? +## Was muss ich als Subgraph-Entwickler, Datenkonsument, Indexer, Kurator oder Delegator jetzt tun? Netzwerk-Teilnehmer müssen zu Arbitrum wechseln, um weiterhin am The Graph Network teilnehmen zu können. Weitere Unterstützung finden Sie im [Leitfaden zum L2 Transfer Tool](/archived/arbitrum/l2-transfer-tools-guide/).
@@ -51,9 +51,9 @@ Alle Smart Contracts wurden gründlich [audited] (https://github.com/graphprotoc Alles wurde gründlich getestet, und es gibt einen Notfallplan, um einen sicheren und nahtlosen Übergang zu gewährleisten. Einzelheiten finden Sie [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Funktionieren die vorhandenen Subgraphen auf Ethereum? +## Funktionieren die bestehenden Subgraphen auf Ethereum? -Alle Subgraphen sind jetzt auf Arbitrum. Bitte lesen Sie den [Leitfaden zum L2 Transfer Tool](/archived/arbitrum/l2-transfer-tools-guide/), um sicherzustellen, dass Ihre Subgraphen reibungslos funktionieren. +Alle Subgraphen sind jetzt auf Arbitrum. Bitte lesen Sie den [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/), um sicherzustellen, dass Ihre Subgraphen reibungslos funktionieren. ## Verfügt GRT über einen neuen Smart Contract, der auf Arbitrum eingesetzt wird? @@ -77,4 +77,4 @@ Die Brücke wurde [umfangreich geprüft] (https://code4rena.com/contests/2022-10 Das Hinzufügen von GRT zu Ihrem Arbitrum-Abrechnungssaldo kann mit nur einem Klick in [Subgraph Studio] (https://thegraph.com/studio/) erfolgen. Sie können Ihr GRT ganz einfach mit Arbitrum verbinden und Ihre API-Schlüssel in einer einzigen Transaktion füllen. -Visit the [Billing page](/subgraphs/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT. +Besuchen Sie die [Abrechnungsseite](/subgraphs/billing/) für genauere Anweisungen zum Hinzufügen, Abheben oder Erwerben von GRT. 
diff --git a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx index 8abcda305f8a..8ac2d50c81e7 100644 --- a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,19 +24,19 @@ Die Ausnahme sind Smart-Contract-Wallets wie Multisigs: Das sind Smart Contracts Die L2-Transfer-Tools verwenden den nativen Mechanismus von Arbitrum, um Nachrichten von L1 nach L2 zu senden. Dieser Mechanismus wird "retryable ticket" genannt und wird von allen nativen Token-Bridges verwendet, einschließlich der Arbitrum GRT-Bridge. Sie können mehr über wiederholbare Tickets in den [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging) lesen. -Wenn Sie Ihre Vermögenswerte (Subgraph, Anteil, Delegation oder Kuration) an L2 übertragen, wird eine Nachricht über die Arbitrum GRT-Brücke gesendet, die ein wiederholbares Ticket in L2 erstellt. Das Transfer-Tool beinhaltet einen gewissen ETH-Wert in der Transaktion, der verwendet wird, um 1) die Erstellung des Tickets und 2) das Gas für die Ausführung des Tickets in L2 zu bezahlen. Da jedoch die Gaspreise in der Zeit, bis das Zertifikat zur Ausführung in L2 bereit ist, schwanken können, ist es möglich, dass dieser automatische Ausführungsversuch fehlschlägt. Wenn das passiert, hält die Arbitrum-Brücke das wiederholbare Zertifikat für bis zu 7 Tage am Leben, und jeder kann versuchen, das Ticket erneut "einzulösen" (was eine Geldbörse mit etwas ETH erfordert, die mit Arbitrum verbunden ist). +Wenn Sie Ihre Vermögenswerte (Subgraph, Anteil, Delegation oder Kuration) an L2 übertragen, wird eine Nachricht über die Arbitrum GRT-Brücke gesendet, die ein wiederholbares Ticket in L2 erstellt. Das Transfer-Tool beinhaltet einen gewissen ETH-Wert in der Transaktion, der verwendet wird, um 1) die Erstellung des Tickets und 2) das Gas für die Ausführung des Tickets in L2 zu bezahlen. 
Da jedoch die Gaspreise in der Zeit, bis das Ticket zur Ausführung in L2 bereit ist, schwanken können, ist es möglich, dass dieser automatische Ausführungsversuch fehlschlägt. Wenn das passiert, hält die Arbitrum-Brücke das wiederholbare Ticket für bis zu 7 Tage am Leben, und jeder kann versuchen, das Ticket erneut „einzulösen“ (was eine Wallet mit etwas ETH erfordert, die mit Arbitrum verbunden ist). -Dies ist der so genannte "Bestätigungsschritt" in allen Übertragungswerkzeugen - er wird in den meisten Fällen automatisch ausgeführt, da die automatische Ausführung meist erfolgreich ist, aber es ist wichtig, dass Sie sich vergewissern, dass die Übertragung erfolgreich war. Wenn dies nicht gelingt und es innerhalb von 7 Tagen keine erfolgreichen Wiederholungsversuche gibt, verwirft die Arbitrum-Brücke das Ticket, und Ihre Assets (Subgraph, Pfahl, Delegation oder Kuration) gehen verloren und können nicht wiederhergestellt werden. Die Entwickler des Graph-Kerns haben ein Überwachungssystem eingerichtet, um diese Situationen zu erkennen und zu versuchen, die Tickets einzulösen, bevor es zu spät ist, aber es liegt letztendlich in Ihrer Verantwortung, sicherzustellen, dass Ihr Transfer rechtzeitig abgeschlossen wird. Wenn Sie Probleme mit der Bestätigung Ihrer Transaktion haben, wenden Sie sich bitte an [dieses Formular] (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) und die Entwickler des Kerns werden Ihnen helfen. +Dies ist der so genannte „Bestätigungsschritt“ in allen Übertragungswerkzeugen - er wird in den meisten Fällen automatisch ausgeführt, da die automatische Ausführung meistens erfolgreich ist, aber es ist wichtig, dass Sie sich vergewissern, dass die Übertragung erfolgreich war. 
Wenn dies nicht gelingt und es innerhalb von 7 Tagen keine erfolgreichen Wiederholungsversuche gibt, verwirft die Arbitrum-Brücke das Ticket, und Ihre Assets (Subgraph, Anteil, Delegation oder Kuration) gehen verloren und können nicht wiederhergestellt werden. Die Kernentwickler von The Graph haben ein Überwachungssystem eingerichtet, um diese Situationen zu erkennen und zu versuchen, die Tickets einzulösen, bevor es zu spät ist, aber es liegt letztendlich in Ihrer Verantwortung, sicherzustellen, dass Ihr Transfer rechtzeitig abgeschlossen wird. Wenn Sie Probleme mit der Bestätigung Ihrer Transaktion haben, wenden Sie sich bitte an [dieses Formular](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) und die Kernentwickler werden Ihnen helfen. ### Ich habe mit der Übertragung meiner Delegation/des Einsatzes/der Kuration begonnen und bin mir nicht sicher, ob sie an L2 weitergeleitet wurde. Wie kann ich bestätigen, dass sie korrekt übertragen wurde? -If you don't see a banner on your profile asking you to finish the transfer, then it's likely the transaction made it safely to L2 and no more action is needed. If in doubt, you can check if Explorer shows your delegation, stake or curation on Arbitrum One. +Wenn Sie in Ihrem Profil kein Banner sehen, das Sie auffordert, den Transfer abzuschließen, dann ist die Transaktion wahrscheinlich sicher auf L2 angekommen und es sind keine weiteren Maßnahmen erforderlich. Im Zweifelsfall können Sie überprüfen, ob der Explorer Ihre Delegation, Ihren Einsatz oder Ihre Kuration auf Arbitrum One anzeigt. -If you have the L1 transaction hash (which you can find by looking at the recent transactions in your wallet), you can also confirm if the "retryable ticket" that carried the message to L2 was redeemed here: https://retryable-dashboard.arbitrum.io/ - if the auto-redeem failed, you can also connect your wallet there and redeem it.
Rest assured that core devs are also monitoring for messages that get stuck, and will attempt to redeem them before they expire. +Wenn Sie den L1-Transaktionshash haben (den Sie durch einen Blick auf die letzten Transaktionen in Ihrer Wallet finden können), können Sie auch überprüfen, ob das „retryable ticket“, das die Nachricht nach L2 transportiert hat, hier eingelöst wurde: https://retryable-dashboard.arbitrum.io/ - wenn die automatische Einlösung fehlgeschlagen ist, können Sie Ihre Wallet auch dort verbinden und es einlösen. Seien Sie versichert, dass die Kernentwickler auch Nachrichten überwachen, die stecken bleiben, und versuchen werden, sie einzulösen, bevor sie ablaufen. ## Subgraph-Transfer -### Wie übertrage ich meinen Subgraphen +### Wie übertrage ich meinen Subgraphen? @@ -48,15 +48,15 @@ Um Ihren Subgraphen zu übertragen, müssen Sie die folgenden Schritte ausführe 3. Bestätigung der Übertragung von Subgraphen auf Arbitrum\* -4. Veröffentlichung des Subgraphen auf Arbitrum beenden +4. Veröffentlichung des Subgraphen auf Arbitrum abschließen 5. Abfrage-URL aktualisieren (empfohlen) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\* Beachten Sie, dass Sie die Übertragung innerhalb von 7 Tagen bestätigen müssen, da sonst Ihr Subgraph verloren gehen kann. In den meisten Fällen wird dieser Schritt automatisch ausgeführt, aber eine manuelle Bestätigung kann erforderlich sein, wenn es einen Gaspreisanstieg auf Arbitrum gibt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die helfen können: kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).
### Von wo aus soll ich meine Übertragung veranlassen? -Sie können die Übertragung vom [Subgraph Studio] (https://thegraph.com/studio/), vom [Explorer] (https://thegraph.com/explorer) oder von einer beliebigen Subgraph-Detailseite aus starten. Klicken Sie auf die Schaltfläche "Subgraph übertragen" auf der Detailseite des Subgraphen, um die Übertragung zu starten. +Sie können die Übertragung vom [Subgraph Studio](https://thegraph.com/studio/), vom [Explorer](https://thegraph.com/explorer) oder von einer beliebigen Subgraph-Detailseite aus starten. Klicken Sie auf die Schaltfläche „Subgraph übertragen“ auf der Detailseite des Subgraphen, um die Übertragung zu starten. ### Wie lange muss ich warten, bis mein Subgraph übertragen wird? @@ -66,35 +66,35 @@ Die Übertragungszeit beträgt etwa 20 Minuten. Die Arbitrum-Brücke arbeitet im Ihr Subgraph ist nur in dem Netzwerk auffindbar, in dem er veröffentlicht ist. Wenn Ihr Subgraph zum Beispiel auf Arbitrum One ist, können Sie ihn nur im Explorer auf Arbitrum One finden und nicht auf Ethereum. Bitte vergewissern Sie sich, dass Sie Arbitrum One in der Netzwerkumschaltung oben auf der Seite ausgewählt haben, um sicherzustellen, dass Sie sich im richtigen Netzwerk befinden. Nach der Übertragung wird der L1-Subgraph als veraltet angezeigt. -### Muss mein Subgraph ( Teilgraph ) veröffentlicht werden, um ihn zu übertragen? +### Muss mein Subgraph veröffentlicht werden, um ihn zu übertragen? -Um das Subgraph-Transfer-Tool nutzen zu können, muss Ihr Subgraph bereits im Ethereum-Mainnet veröffentlicht sein und über ein Kurationssignal verfügen, das der Wallet gehört, die den Subgraph besitzt. Wenn Ihr Subgraph nicht veröffentlicht ist, empfehlen wir Ihnen, ihn einfach direkt auf Arbitrum One zu veröffentlichen - die damit verbundenen Gasgebühren sind erheblich niedriger.
Wenn Sie einen veröffentlichten Subgraphen übertragen wollen, aber das Konto des Eigentümers kein Signal darauf kuratiert hat, können Sie einen kleinen Betrag (z.B. 1 GRT) von diesem Konto signalisieren; stellen Sie sicher, dass Sie ein "auto-migrating" Signal wählen. +Um die Vorteile des Subgraph-Transfer-Tools zu nutzen, muss Ihr Subgraph bereits im Ethereum-Mainnet veröffentlicht sein und über ein Kurationssignal verfügen, das der Wallet gehört, die den Subgraph besitzt. Wenn Ihr Subgraph noch nicht veröffentlicht ist, empfehlen wir Ihnen, ihn einfach direkt auf Arbitrum One zu veröffentlichen - die damit verbundenen Gasgebühren sind erheblich niedriger. Wenn Sie einen veröffentlichten Subgraph transferieren wollen, aber das Konto des Besitzers kein Signal darauf kuratiert hat, können Sie einen kleinen Betrag (z.B. 1 GRT) von diesem Konto signalisieren; stellen Sie sicher, dass Sie das „auto-migrating“ Signal wählen. -### Was passiert mit der Ethereum-Mainnet-Version meines Subgraphen, nachdem ich zu Arbitrum übergehe? +### Was passiert mit der Ethereum-Hauptnetz-Version meines Subgraphen, nachdem ich zu Arbitrum gewechselt bin? -Nach der Übertragung Ihres Subgraphen auf Arbitrum wird die Ethereum-Hauptnetzversion veraltet sein. Wir empfehlen Ihnen, Ihre Abfrage-URL innerhalb von 48 Stunden zu aktualisieren. Es gibt jedoch eine Schonfrist, die Ihre Mainnet-URL funktionsfähig hält, so dass jede Drittanbieter-Dapp-Unterstützung aktualisiert werden kann. +Nach dem Transfer Ihres Subgraphen zu Arbitrum wird die Ethereum-Hauptnetzversion veraltet sein. Wir empfehlen Ihnen, Ihre Abfrage-URL innerhalb von 48 Stunden zu aktualisieren. Es gibt jedoch eine Schonfrist, die Ihre Mainnet-URL funktionsfähig hält, so dass jede Drittanbieter-Dapp-Unterstützung aktualisiert werden kann. ### Muss ich nach der Übertragung auch auf Arbitrum neu veröffentlichen? 
Nach Ablauf des 20-minütigen Übertragungsfensters müssen Sie die Übertragung mit einer Transaktion in der Benutzeroberfläche bestätigen, um die Übertragung abzuschließen. Ihr L1-Endpunkt wird während des Übertragungsfensters und einer Schonfrist danach weiterhin unterstützt. Es wird empfohlen, dass Sie Ihren Endpunkt aktualisieren, wenn es Ihnen passt. -### Will my endpoint experience downtime while re-publishing? +### Kommt es während der Neuveröffentlichung zu Ausfallzeiten an meinem Endpunkt? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +Es ist unwahrscheinlich, aber möglich, dass es zu einer kurzen Ausfallzeit kommt, je nachdem, welche Indexer den Subgraphen auf L1 unterstützen und ob sie ihn weiter indizieren, bis der Subgraph auf L2 vollständig unterstützt wird. ### Ist die Veröffentlichung und Versionierung auf L2 die gleiche wie im Ethereum-Mainnet? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Ja. Wählen Sie Arbitrum One als Ihr veröffentlichtes Netzwerk, wenn Sie in Subgraph Studio veröffentlichen. Im Studio wird der neueste Endpunkt verfügbar sein, der auf die letzte aktualisierte Version des Subgraphen verweist. -### Bewegt sich die Kuration meines Untergraphen ( Subgraphen ) mit meinem Untergraphen? +### Wird die Kuration meines Subgraphen mit meinem Subgraphen umziehen? Wenn Sie die automatische Signalmigration gewählt haben, werden 100 % Ihrer eigenen Kuration mit Ihrem Subgraphen zu Arbitrum One übertragen. Alle Kurationssignale des Subgraphen werden zum Zeitpunkt des Transfers in GRT umgewandelt, und die GRT, die Ihrem Kurationssignal entsprechen, werden zum Prägen von Signalen auf dem L2-Subgraphen verwendet. 
-Andere Kuratoren können wählen, ob sie ihren Anteil an GRT zurückziehen oder ihn ebenfalls auf L2 übertragen, um das Signal auf demselben Untergraphen zu prägen. +Andere Kuratoren können wählen, ob sie ihren Anteil an GRT zurückziehen oder ihn ebenfalls auf L2 übertragen, um das Signal auf demselben Subgraphen zu prägen. ### Kann ich meinen Subgraph nach dem Transfer zurück ins Ethereum Mainnet verschieben? -Nach der Übertragung wird Ihre Ethereum-Mainnet-Version dieses Untergraphen veraltet sein. Wenn Sie zum Mainnet zurückkehren möchten, müssen Sie Ihre Version neu bereitstellen und zurück zum Mainnet veröffentlichen. Es wird jedoch dringend davon abgeraten, zurück ins Ethereum Mainnet zu wechseln, da die Indexierungsbelohnungen schließlich vollständig auf Arbitrum One verteilt werden. +Nach der Übertragung wird Ihre Ethereum Mainnet-Version dieses Subgraphen veraltet sein. Wenn Sie zum Mainnet zurückkehren möchten, müssen Sie den Subgraph erneut bereitstellen und im Mainnet veröffentlichen. Es wird jedoch dringend davon abgeraten, zurück zum Ethereum Mainnet zu wechseln, da die Indexierungsbelohnungen schließlich vollständig auf Arbitrum One verteilt werden. ### Warum brauche ich überbrückte ETH, um meine Überweisung abzuschließen? @@ -112,11 +112,11 @@ Um Ihre Delegation zu übertragen, müssen Sie die folgenden Schritte ausführen 2. 20 Minuten auf Bestätigung warten 3. Bestätigung der Delegationsübertragung auf Arbitrum -\*\*\*\*You must confirm the transaction to complete the delegation transfer on Arbitrum. This step must be completed within 7 days or the delegation could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). 
+\*\*\*\*Sie müssen die Transaktion bestätigen, um die Übertragung der Delegation auf Arbitrum abzuschließen. Dieser Schritt muss innerhalb von 7 Tagen abgeschlossen werden, da die Delegation sonst verloren gehen kann. In den meisten Fällen läuft dieser Schritt automatisch ab, aber eine manuelle Bestätigung kann erforderlich sein, wenn es auf Arbitrum zu einer Gaspreiserhöhung kommt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die helfen können: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol). ### Was passiert mit meinen Rewards, wenn ich einen Transfer mit einer offenen Zuteilung im Ethereum Mainnet initiiere? -If the Indexer to whom you're delegating is still operating on L1, when you transfer to Arbitrum you will forfeit any delegation rewards from open allocations on Ethereum mainnet. This means that you will lose the rewards from, at most, the last 28-day period. If you time the transfer right after the Indexer has closed allocations you can make sure this is the least amount possible. If you have a communication channel with your Indexer(s), consider discussing with them to find the best time to do your transfer. +Wenn der Indexer, an den Sie delegieren, noch auf L1 arbeitet, verlieren Sie beim Wechsel zu Arbitrum alle Delegationsbelohnungen aus offenen Zuteilungen im Ethereum Mainnet. Das bedeutet, dass Sie höchstens die Rewards aus dem letzten 28-Tage-Zeitraum verlieren. Wenn Sie den Transfer direkt nach der Schließung der Zuteilungen durch den Indexer durchführen, können Sie sicherstellen, dass der Betrag so gering wie möglich ist. Wenn Sie einen Kommunikationskanal mit Ihrem Indexer haben, sollten Sie mit ihm über den besten Zeitpunkt für den Transfer sprechen. ### Was passiert, wenn der Indexer, an den ich derzeit delegiere, nicht auf Arbitrum One ist?
@@ -124,7 +124,7 @@ Das L2-Transfer-Tool wird nur aktiviert, wenn der Indexer, den Sie delegiert hab ### Haben Delegatoren die Möglichkeit, an einen anderen Indexierer zu delegieren? -If you wish to delegate to another Indexer, you can transfer to the same Indexer on Arbitrum, then undelegate and wait for the thawing period. After this, you can select another active Indexer to delegate to. +Wenn Sie an einen anderen Indexer delegieren möchten, können Sie auf denselben Indexer auf Arbitrum übertragen, dann die Delegation aufheben und die Auftau-Phase abwarten. Danach können Sie einen anderen aktiven Indexer auswählen, an den Sie delegieren möchten. ### Was ist, wenn ich den Indexer, an den ich delegiere, auf L2 nicht finden kann? @@ -144,53 +144,53 @@ Es wird davon ausgegangen, dass die gesamte Netzbeteiligung in Zukunft zu Arbitr ### Wie lange dauert es, bis die Übertragung meiner Delegation auf L2 abgeschlossen ist? -A 20-minute confirmation is required for delegation transfer. Please note that after the 20-minute period, you must come back and complete step 3 of the transfer process within 7 days. If you fail to do this, then your delegation may be lost. Note that in most cases the transfer tool will complete this step for you automatically. In case of a failed auto-attempt, you will need to complete it manually. If any issues arise during this process, don't worry, we'll be here to help: contact us at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +Für die Übertragung von Delegationen ist eine 20-minütige Bestätigung erforderlich. Bitte beachten Sie, dass Sie nach Ablauf der 20-Minuten-Frist innerhalb von 7 Tagen zurückkommen und Schritt 3 des Übertragungsverfahrens abschließen müssen. Wenn Sie dies versäumen, kann Ihre Delegation verloren gehen. Beachten Sie bitte, dass das Übertragungstool diesen Schritt in den meisten Fällen automatisch für Sie ausführt. Falls der automatische Versuch fehlschlägt, müssen Sie ihn manuell ausführen. 
Sollten während dieses Vorgangs Probleme auftreten, sind wir für Sie da: Kontaktieren Sie uns unter support@thegraph.com oder auf [Discord](https://discord.gg/vtvv7FP). ### Kann ich meine Delegation übertragen, wenn ich eine GRT Vesting Contract/Token Lock Wallet verwende? Ja! Der Prozess ist ein wenig anders, weil Vesting-Verträge die ETH, die für die Bezahlung des L2-Gases benötigt werden, nicht weiterleiten können, also müssen Sie sie vorher einzahlen. Wenn Ihr Berechtigungsvertrag nicht vollständig freigeschaltet ist, müssen Sie außerdem zuerst einen Gegenkontrakt auf L2 initialisieren und können die Delegation dann nur auf diesen L2-Berechtigungsvertrag übertragen. Die Benutzeroberfläche des Explorers kann Sie durch diesen Prozess leiten, wenn Sie sich über die Vesting Lock Wallet mit dem Explorer verbunden haben. -### Does my Arbitrum vesting contract allow releasing GRT just like on mainnet? +### Erlaubt mein Arbitrum-„Vesting“-Vertrag die Freigabe von GRT genau wie im Mainnet? -No, the vesting contract that is created on Arbitrum will not allow releasing any GRT until the end of the vesting timeline, i.e. until your contract is fully vested. This is to prevent double spending, as otherwise it would be possible to release the same amounts on both layers. +Nein, der Vesting-Vertrag, der auf Arbitrum erstellt wird, erlaubt keine Freigabe von GRT bis zum Ende des Vesting-Zeitraums, d.h. bis Ihr Vertrag vollständig freigegeben ist. Damit sollen Doppelausgaben verhindert werden, da es sonst möglich wäre, die gleichen Beträge auf beiden Ebenen freizugeben. -If you'd like to release GRT from the vesting contract, you can transfer them back to the L1 vesting contract using Explorer: in your Arbitrum One profile, you will see a banner saying you can transfer GRT back to the mainnet vesting contract. This requires a transaction on Arbitrum One, waiting 7 days, and a final transaction on mainnet, as it uses the same native bridging mechanism from the GRT bridge.
+Wenn Sie GRT aus dem Vesting-Vertrag freigeben möchten, können Sie sie mit dem Explorer zurück in den L1-Vesting-Vertrag übertragen: In Ihrem Arbitrum One-Profil wird ein Banner angezeigt, das besagt, dass Sie GRT zurück in den Mainnet-Vesting-Vertrag übertragen können. Dies erfordert eine Transaktion auf Arbitrum One, eine Wartezeit von 7 Tagen und eine abschließende Transaktion auf dem Mainnet, da es denselben nativen Überbrückungsmechanismus der GRT-Bridge verwendet. ### Fällt eine Delegationssteuer an? -Nein. Auf L2 erhaltene Token werden im Namen des angegebenen Delegators an den angegebenen Indexierer delegiert, ohne dass eine Delegationssteuer erhoben wird. +Nein. Auf L2 erhaltene Token werden im Namen des angegebenen Delegators an den angegebenen Indexer delegiert, ohne dass eine Delegationssteuer erhoben wird. -### Will my unrealized rewards be transferred when I transfer my delegation? +### Werden meine nicht realisierten Rewards übertragen, wenn ich meine Delegation übertrage? -​Yes! The only rewards that can't be transferred are the ones for open allocations, as those won't exist until the Indexer closes the allocations (usually every 28 days). If you've been delegating for a while, this is likely only a small fraction of rewards. +Ja! Die einzigen Rewards, die nicht übertragen werden können, sind die für offene Zuteilungen, da diese erst existieren, wenn der Indexer die Zuteilungen schließt (normalerweise alle 28 Tage). Wenn Sie schon eine Weile delegieren, ist dies wahrscheinlich nur ein kleiner Teil der Rewards. -At the smart contract level, unrealized rewards are already part of your delegation balance, so they will be transferred when you transfer your delegation to L2. ​ +Auf der Smart-Contract-Ebene sind nicht realisierte Rewards bereits Teil Ihres Delegationsguthabens, so dass sie übertragen werden, wenn Sie Ihre Delegation auf L2 übertragen. -### Is moving delegations to L2 mandatory? Is there a deadline?
+### Ist die Verlegung von Delegationen nach L2 obligatorisch? Gibt es eine Frist? -​Moving delegation to L2 is not mandatory, but indexing rewards are increasing on L2 following the timeline described in [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193). Eventually, if the Council keeps approving the increases, all rewards will be distributed in L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​ +Die Verlagerung der Delegation nach L2 ist nicht zwingend erforderlich, aber die Rewards für die Indexierung steigen auf L2 entsprechend dem in [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193) beschriebenen Zeitplan. Wenn das Council die Erhöhungen weiterhin genehmigt, werden schließlich alle Rewards in L2 verteilt und es wird keine Indexierungs-Rewards für Indexer und Delegatoren in L1 geben. -### If I am delegating to an Indexer that has already transferred stake to L2, do I stop receiving rewards on L1? +### Wenn ich an einen Indexer delegiere, der bereits Anteile auf L2 übertragen hat, erhalte ich dann keine Rewards mehr auf L1? -​Many Indexers are transferring stake gradually so Indexers on L1 will still be earning rewards and fees on L1, which are then shared with Delegators. Once an Indexer has transferred all of their stake, then they will stop operating on L1, so Delegators will not receive any more rewards unless they transfer to L2. +Viele Indexer übertragen ihre Anteile nach und nach, so dass Indexer auf L1 immer noch Rewards und Gebühren auf L1 verdienen, die dann mit den Delegatoren geteilt werden. Sobald ein Indexer seinen gesamten Anteil übertragen hat, wird er seine Tätigkeit auf L1 einstellen, so dass die Delegatoren keine Rewards mehr erhalten, es sei denn, sie wechseln zu L2.
-Eventually, if the Council keeps approving the indexing rewards increases in L2, all rewards will be distributed on L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​ +Wenn das Council die Erhöhungen der Rewards für die Indexierung in L2 weiterhin genehmigt, werden schließlich alle Rewards in L2 verteilt und es wird keine Rewards für Indexer und Delegatoren in L1 geben. -### I don't see a button to transfer my delegation. Why is that? +### Ich sehe keine Schaltfläche zum Übertragen meiner Delegation. Woran liegt das? -​Your Indexer has probably not used the L2 transfer tools to transfer stake yet. +Ihr Indexer hat wahrscheinlich noch nicht die L2-Transfer-Tools zur Übertragung von Anteilen verwendet. -If you can contact the Indexer, you can encourage them to use the L2 Transfer Tools so that Delegators can transfer delegations to their L2 Indexer address. ​ +Wenn Sie sich mit dem Indexer in Verbindung setzen können, können Sie ihn ermutigen, die L2-Transfer-Tools zu verwenden, damit die Delegatoren Delegationen an ihre L2-Indexer-Adresse übertragen können. -### My Indexer is also on Arbitrum, but I don't see a button to transfer the delegation in my profile. Why is that? +### Mein Indexer ist auch auf Arbitrum, aber ich sehe in meinem Profil keine Schaltfläche zum Übertragen der Delegation. Warum ist das so? -​It is possible that the Indexer has set up operations on L2, but hasn't used the L2 transfer tools to transfer stake. The L1 smart contracts will therefore not know about the Indexer's L2 address. If you can contact the Indexer, you can encourage them to use the transfer tool so that Delegators can transfer delegations to their L2 Indexer address. ​ +Es ist möglich, dass der Indexer Operationen auf L2 eingerichtet hat, aber nicht die L2-Transfer-Tools zur Übertragung von Einsätzen verwendet hat. Die L1-Smart Contracts kennen daher die L2-Adresse des Indexers nicht.
Wenn Sie sich mit dem Indexer in Verbindung setzen können, können Sie ihn ermutigen, das Übertragungswerkzeug zu verwenden, damit Delegatoren Delegationen an seine L2-Indexer-Adresse übertragen können. -### Can I transfer my delegation to L2 if I have started the undelegating process and haven't withdrawn it yet? +### Kann ich meine Delegation auf L2 übertragen, wenn ich den Prozess der Undelegation eingeleitet habe und sie noch nicht zurückgezogen habe? -​No. If your delegation is thawing, you have to wait the 28 days and withdraw it. +Nein. Wenn Ihre Delegation auftaut, müssen Sie die 28 Tage abwarten und sie zurückziehen. -The tokens that are being undelegated are "locked" and therefore cannot be transferred to L2. +Die Token, deren Delegation gerade aufgehoben wird, sind „gesperrt“ und können daher nicht auf L2 übertragen werden. ## Kurationssignal @@ -206,9 +206,9 @@ Um Ihre Kuration zu übertragen, müssen Sie die folgenden Schritte ausführen: \* Falls erforderlich - d.h. wenn Sie eine Vertragsadresse verwenden. -### Wie erfahre ich, ob der von mir kuratierte Subgraph nach L2 umgezogen ist? +### Wie erfahre ich, ob der von mir kuratierte Subgraph nach L2 verschoben wurde? -Auf der Seite mit den Details der Subgraphen werden Sie durch ein Banner darauf hingewiesen, dass dieser Subgraph übertragen wurde. Sie können der Aufforderung folgen, um Ihre Kuration zu übertragen. Diese Information finden Sie auch auf der Seite mit den Details zu jedem verschobenen Subgraphen. +Wenn Sie die Detailseite des Subgraphen aufrufen, werden Sie durch ein Banner darauf hingewiesen, dass dieser Subgraph übertragen wurde. Sie können der Aufforderung folgen, um Ihre Kuration zu übertragen. Sie finden diese Information auch auf der Seite mit den Details zu jedem verschobenen Subgraphen. ### Was ist, wenn ich meine Kuration nicht auf L2 verschieben möchte? @@ -226,7 +226,7 @@ Zurzeit gibt es keine Option für Massenübertragungen. ### Wie übertrage ich meine Anteile auf Arbitrum?
-> Disclaimer: If you are currently unstaking any portion of your GRT on your Indexer, you will not be able to use L2 Transfer Tools. +> Haftungsausschluss: Wenn Sie derzeit einen Teil Ihres GRT bei Ihrem Indexer entstaken, können Sie die L2 Transfer Tools nicht verwenden. @@ -238,7 +238,7 @@ Um Ihren Einsatz zu übertragen, müssen Sie die folgenden Schritte ausführen: 3. Bestätigen Sie die Übertragung von Anteilen auf Arbitrum -\*Note that you must confirm the transfer within 7 days otherwise your stake may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Beachten Sie, dass Sie den Transfer innerhalb von 7 Tagen bestätigen müssen, sonst kann Ihr Einsatz verloren gehen. In den meisten Fällen wird dieser Schritt automatisch ausgeführt, aber eine manuelle Bestätigung kann erforderlich sein, wenn es einen Gaspreisanstieg auf Arbitrum gibt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die Ihnen helfen: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol). ### Wird mein gesamter Einsatz übertragen? @@ -276,13 +276,13 @@ Nein, damit Delegatoren ihre delegierten GRT an Arbitrum übertragen können, mu Ja! Der Prozess ist ein wenig anders, weil Vesting-Verträge die ETH, die für die Bezahlung des L2-Gases benötigt werden, nicht weiterleiten können, so dass Sie sie vorher einzahlen müssen. Wenn Ihr Freizügigkeitsvertrag nicht vollständig freigeschaltet ist, müssen Sie außerdem zuerst einen Gegenkontrakt auf L2 initialisieren und können den Anteil nur auf diesen L2-Freizügigkeitsvertrag übertragen.
Die Benutzeroberfläche des Explorers kann Sie durch diesen Prozess führen, wenn Sie sich mit dem Explorer über die Vesting Lock Wallet verbunden haben. -### I already have stake on L2. Do I still need to send 100k GRT when I use the transfer tools the first time? +### Ich habe bereits einen Einsatz auf L2. Muss ich immer noch 100k GRT senden, wenn ich die Transfer-Tools zum ersten Mal benutze? -​Yes. The L1 smart contracts will not be aware of your L2 stake, so they will require you to transfer at least 100k GRT when you transfer for the first time. ​ +Ja. Die L1-Smart-Contracts kennen Ihren L2-Einsatz nicht und verlangen daher, dass Sie beim ersten Transfer mindestens 100k GRT übertragen. -### Can I transfer my stake to L2 if I am in the process of unstaking GRT? +### Kann ich meinen Anteil auf L2 übertragen, wenn ich gerade dabei bin, GRT zu entstaken? -​No. If any fraction of your stake is thawing, you have to wait the 28 days and withdraw it before you can transfer stake. The tokens that are being staked are "locked" and will prevent any transfers or stake to L2. +Nein. Wenn ein Teil Ihres Einsatzes auftaut, müssen Sie die 28 Tage warten und ihn abheben, bevor Sie den Einsatz übertragen können. Die Token, die eingesetzt werden, sind „gesperrt“ und verhindern jede Übertragung oder jeden Einsatz auf L2. ## Unverfallbare Vertragsübertragung @@ -377,25 +377,25 @@ Um Ihren Vesting-Vertrag auf L2 zu übertragen, senden Sie ein eventuelles GRT-G \* Falls erforderlich - d.h. wenn Sie eine Vertragsadresse verwenden. -\*\*\*\*You must confirm your transaction to complete the balance transfer on Arbitrum. This step must be completed within 7 days or the balance could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*\*\*\*Sie müssen Ihre Transaktion bestätigen, um die Übertragung des Guthabens auf Arbitrum abzuschließen. Dieser Schritt muss innerhalb von 7 Tagen abgeschlossen werden, da sonst das Guthaben verloren gehen kann. In den meisten Fällen wird dieser Schritt automatisch ausgeführt, aber eine manuelle Bestätigung kann erforderlich sein, wenn es auf Arbitrum eine Gaspreisspitze gibt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die helfen können: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol). -### My vesting contract shows 0 GRT so I cannot transfer it, why is this and how do I fix it? +### Mein Vesting-Vertrag zeigt 0 GRT an, so dass ich ihn nicht übertragen kann. Warum ist das so und wie kann ich das ändern? -To initialize your L2 vesting contract, you need to transfer a nonzero amount of GRT to L2. This is required by the Arbitrum GRT bridge that is used by the L2 Transfer Tools. The GRT must come from the vesting contract's balance, so it does not include staked or delegated GRT. +Um Ihren L2 Vesting-Vertrag zu initialisieren, müssen Sie einen GRT-Betrag, der nicht Null ist, auf L2 übertragen. Dies ist für die Arbitrum GRT-Brücke erforderlich, die von den L2-Transfer-Tools verwendet wird. Die GRT müssen aus dem Guthaben des Vesting-Vertrags stammen, d. h. sie umfassen keine eingesetzten oder delegierten GRT. -If you've staked or delegated all your GRT from the vesting contract, you can manually send a small amount like 1 GRT to the vesting contract address from anywhere else (e.g. from another wallet, or an exchange). +Wenn Sie alle Ihre GRT aus dem Vesting-Vertrag eingesetzt oder delegiert haben, können Sie manuell einen kleinen Betrag wie 1 GRT an die Adresse des Vesting-Vertrags von einem anderen Ort aus senden (z. B. von einer anderen Wallet oder einer Börse).
-### I am using a vesting contract to stake or delegate, but I don't see a button to transfer my stake or delegation to L2, what do I do? +### Ich verwende einen Vesting-Vertrag, um zu staken oder zu delegieren, aber ich sehe keine Schaltfläche, um meinen Anteil oder meine Delegation auf L2 zu übertragen. Was soll ich tun? -If your vesting contract hasn't finished vesting, you need to first create an L2 vesting contract that will receive your stake or delegation on L2. This vesting contract will not allow releasing tokens in L2 until the end of the vesting timeline, but will allow you to transfer GRT back to the L1 vesting contract to be released there. +Wenn Ihr Vesting-Vertrag noch nicht abgeschlossen ist, müssen Sie zunächst einen L2-Vesting-Vertrag erstellen, der Ihren Anteil oder Ihre Delegation auf L2 erhält. Dieser Vesting-Vertrag erlaubt keine Freigabe von Token in L2 bis zum Ende des Vesting-Zeitraums, aber er erlaubt Ihnen, GRT zurück zum L1-Vesting-Vertrag zu übertragen, um dort freigegeben zu werden. -When connected with the vesting contract on Explorer, you should see a button to initialize your L2 vesting contract. Follow that process first, and you will then see the buttons to transfer your stake or delegation in your profile. +Wenn Sie mit dem Vesting-Vertrag im Explorer verbunden sind, sollten Sie eine Schaltfläche zur Initialisierung Ihres L2-Vesting-Vertrags sehen. Befolgen Sie zunächst diesen Prozess, und Sie werden dann die Schaltflächen zur Übertragung Ihres Anteils oder Ihrer Delegation in Ihrem Profil sehen. -### If I initialize my L2 vesting contract, will this also transfer my delegation to L2 automatically? +### Wenn ich meinen L2-Vesting-Vertrag initialisiere, wird dann auch meine Delegation automatisch auf L2 übertragen? -No, initializing your L2 vesting contract is a prerequisite for transferring stake or delegation from the vesting contract, but you still need to transfer these separately.
+Nein, die Initialisierung Ihres L2 Vesting-Vertrags ist eine Voraussetzung für die Übertragung von Anteilen oder Delegationen aus dem Vesting-Vertrag, aber Sie müssen diese trotzdem separat übertragen. -You will see a banner on your profile prompting you to transfer your stake or delegation after you have initialized your L2 vesting contract. +Nachdem Sie Ihren L2 Vesting-Vertrag initialisiert haben, erscheint in Ihrem Profil ein Banner, das Sie auffordert, Ihren Anteil oder Ihre Delegation zu übertragen. ### Kann ich meinen Vertrag mit unverfallbarer Anwartschaft zurück nach L1 verschieben? diff --git a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx index 6a5b13da53d7..1be2386aedba 100644 --- a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -1,60 +1,60 @@ --- -title: L2 Transfer Tools Guide +title: L2 Transfer Tools Anleitung --- The Graph hat den Wechsel zu L2 auf Arbitrum One leicht gemacht. Für jeden Protokollteilnehmer gibt es eine Reihe von L2-Transfer-Tools, um den Transfer zu L2 für alle Netzwerkteilnehmer nahtlos zu gestalten. Je nachdem, was Sie übertragen möchten, müssen Sie eine bestimmte Anzahl von Schritten befolgen. Einige häufig gestellte Fragen zu diesen Tools werden in den [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/) beantwortet. Die FAQs enthalten ausführliche Erklärungen zur Verwendung der Tools, zu ihrer Funktionsweise und zu den Dingen, die bei ihrer Verwendung zu beachten sind. 
-## So übertragen Sie Ihren Subgraphen auf Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Vorteile der Übertragung Ihrer Untergraphen +## Benefits of transferring your Subgraphs The Graph's Community und die Kernentwickler haben im letzten Jahr den Wechsel zu Arbitrum [vorbereitet](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). Arbitrum, eine Layer-2- oder "L2"-Blockchain, erbt die Sicherheit von Ethereum, bietet aber drastisch niedrigere Gasgebühren. -Wenn Sie Ihren Subgraphen auf The Graph Network veröffentlichen oder aktualisieren, interagieren Sie mit intelligenten Verträgen auf dem Protokoll, und dies erfordert die Bezahlung von Gas mit ETH. Indem Sie Ihre Subgraphen zu Arbitrum verschieben, werden alle zukünftigen Aktualisierungen Ihres Subgraphen viel niedrigere Gasgebühren erfordern. Die niedrigeren Gebühren und die Tatsache, dass die Kurationsbindungskurven auf L2 flach sind, machen es auch für andere Kuratoren einfacher, auf Ihrem Subgraphen zu kuratieren, was die Belohnungen für Indexer auf Ihrem Subgraphen erhöht. Diese kostengünstigere Umgebung macht es auch für Indexer preiswerter, Ihren Subgraphen zu indizieren und zu bedienen. Die Belohnungen für die Indexierung werden in den kommenden Monaten auf Arbitrum steigen und auf dem Ethereum-Mainnet sinken, so dass immer mehr Indexer ihren Einsatz transferieren und ihre Operationen auf L2 einrichten werden. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. 
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Verstehen, was mit dem Signal, Ihrem L1-Subgraphen und den Abfrage-URLs geschieht +## Understanding what happens with signal, your L1 Subgraph and query URLs -Die Übertragung eines Subgraphen nach Arbitrum verwendet die Arbitrum GRT-Brücke, die wiederum die native Arbitrum-Brücke verwendet, um den Subgraphen nach L2 zu senden. Der "Transfer" löscht den Subgraphen im Mainnet und sendet die Informationen, um den Subgraphen auf L2 mit Hilfe der Brücke neu zu erstellen. Sie enthält auch die vom Eigentümer des Subgraphen signalisierte GRT, die größer als Null sein muss, damit die Brücke die Übertragung akzeptiert. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Wenn Sie sich für die Übertragung des Untergraphen entscheiden, wird das gesamte Kurationssignal des Untergraphen in GRT umgewandelt. Dies ist gleichbedeutend mit dem "Verwerfen" des Subgraphen im Mainnet. Die GRT, die Ihrer Kuration entsprechen, werden zusammen mit dem Subgraphen an L2 gesendet, wo sie für die Prägung von Signalen in Ihrem Namen verwendet werden. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. 
-Andere Kuratoren können wählen, ob sie ihren Anteil an GRT zurückziehen oder ihn ebenfalls an L2 übertragen, um das Signal auf demselben Untergraphen zu prägen. Wenn ein Subgraph-Eigentümer seinen Subgraph nicht an L2 überträgt und ihn manuell über einen Vertragsaufruf abmeldet, werden die Kuratoren benachrichtigt und können ihre Kuration zurückziehen. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Sobald der Subgraph übertragen wurde, erhalten die Indexer keine Belohnungen mehr für die Indizierung des Subgraphen, da die gesamte Kuration in GRT umgewandelt wird. Es wird jedoch Indexer geben, die 1) übertragene Untergraphen für 24 Stunden weiter bedienen und 2) sofort mit der Indizierung des Untergraphen auf L2 beginnen. Da diese Indexer den Untergraphen bereits indiziert haben, sollte es nicht nötig sein, auf die Synchronisierung des Untergraphen zu warten, und es wird möglich sein, den L2-Untergraphen fast sofort abzufragen. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Anfragen an den L2-Subgraphen müssen an eine andere URL gerichtet werden (an `arbitrum-gateway.thegraph.com`), aber die L1-URL wird noch mindestens 48 Stunden lang funktionieren. 
Danach wird das L1-Gateway (für eine gewisse Zeit) Anfragen an das L2-Gateway weiterleiten, was jedoch zu zusätzlichen Latenzzeiten führt. Es wird daher empfohlen, alle Anfragen so bald wie möglich auf die neue URL umzustellen. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Ein Teil dieser GRT, der dem Inhaber des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. -Als Sie Ihren Subgraphen im Mainnet veröffentlicht haben, haben Sie eine angeschlossene Wallet benutzt, um den Subgraphen zu erstellen, und diese Wallet besitzt die NFT, die diesen Subgraphen repräsentiert und Ihnen erlaubt, Updates zu veröffentlichen. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Wenn man den Subgraphen zu Arbitrum überträgt, kann man eine andere Wallet wählen, die diesen Subgraphen NFT auf L2 besitzen wird. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Wenn Sie eine "normale" Wallet wie MetaMask verwenden (ein Externally Owned Account oder EOA, d.h. eine Wallet, die kein Smart Contract ist), dann ist dies optional und es wird empfohlen, die gleiche Eigentümeradresse wie in L1 beizubehalten. -Wenn Sie eine Smart-Contract-Wallet, wie z.B. eine Multisig (z.B. Safe), verwenden, dann ist die Wahl einer anderen L2-Wallet-Adresse zwingend erforderlich, da es sehr wahrscheinlich ist, dass dieses Konto nur im Mainnet existiert und Sie mit dieser Wallet keine Transaktionen auf Arbitrum durchführen können. 
Wenn Sie weiterhin eine Smart Contract Wallet oder Multisig verwenden möchten, erstellen Sie eine neue Wallet auf Arbitrum und verwenden Sie deren Adresse als L2-Besitzer Ihres Subgraphen. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Es ist sehr wichtig, eine Wallet-Adresse zu verwenden, die Sie kontrollieren und die Transaktionen auf Arbitrum durchführen kann. Andernfalls geht der Subgraph verloren und kann nicht wiederhergestellt werden.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Vorbereitung der Übertragung: Überbrückung einiger ETH -Die Übertragung des Subgraphen beinhaltet das Senden einer Transaktion über die Brücke und das Ausführen einer weiteren Transaktion auf Arbitrum. Die erste Transaktion verwendet ETH im Mainnet und enthält einige ETH, um das Gas zu bezahlen, wenn die Nachricht auf L2 empfangen wird. Wenn dieses Gas jedoch nicht ausreicht, müssen Sie die Transaktion wiederholen und das Gas direkt auf L2 bezahlen (dies ist "Schritt 3: Bestätigen des Transfers" unten). Dieser Schritt **muss innerhalb von 7 Tagen nach Beginn der Überweisung** ausgeführt werden. Außerdem wird die zweite Transaktion ("Schritt 4: Beenden der Übertragung auf L2") direkt auf Arbitrum durchgeführt. Aus diesen Gründen benötigen Sie etwas ETH auf einer Arbitrum-Wallet. Wenn Sie ein Multisig- oder Smart-Contract-Konto verwenden, muss sich die ETH in der regulären (EOA-) Wallet befinden, die Sie zum Ausführen der Transaktionen verwenden, nicht in der Multisig-Wallet selbst. 
+Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. Sie können ETH auf einigen Börsen kaufen und direkt auf Arbitrum abheben, oder Sie können die Arbitrum-Brücke verwenden, um ETH von einer Mainnet-Wallet zu L2 zu senden: [bridge.arbitrum.io](http://bridge.arbitrum.io). Da die Gasgebühren auf Arbitrum niedriger sind, sollten Sie nur eine kleine Menge benötigen. Es wird empfohlen, mit einem niedrigen Schwellenwert (z.B. 0,01 ETH) zu beginnen, damit Ihre Transaktion genehmigt wird. 
-## Suche nach dem Untergraphen Transfer Tool +## Finding the Subgraph Transfer Tool -Sie finden das L2 Transfer Tool, wenn Sie die Seite Ihres Subgraphen in Subgraph Studio ansehen: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -Sie ist auch im Explorer verfügbar, wenn Sie mit der Wallet verbunden sind, die einen Untergraphen besitzt, und auf der Seite dieses Untergraphen im Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: -![Transferring to L2](/img/transferToL2.png) +![Übertragung auf L2](/img/transferToL2.png) Wenn Sie auf die Schaltfläche auf L2 übertragen klicken, wird das Übertragungstool geöffnet, mit dem Sie den Übertragungsvorgang starten können. @@ -64,15 +64,15 @@ Bevor Sie mit dem Transfer beginnen, müssen Sie entscheiden, welche Adresse den Bitte beachten Sie auch, dass die Übertragung des Untergraphen ein Signal ungleich Null auf dem Untergraphen mit demselben Konto erfordert, das den Untergraphen besitzt; wenn Sie kein Signal auf dem Untergraphen haben, müssen Sie ein wenig Kuration hinzufügen (das Hinzufügen eines kleinen Betrags wie 1 GRT würde ausreichen). -Nachdem Sie das Transfer-Tool geöffnet haben, können Sie die L2-Wallet-Adresse in das Feld "Empfänger-Wallet-Adresse" eingeben - **vergewissern Sie sich, dass Sie hier die richtige Adresse eingegeben haben**. Wenn Sie auf "Transfer Subgraph" klicken, werden Sie aufgefordert, die Transaktion auf Ihrer Wallet auszuführen (beachten Sie, dass ein gewisser ETH-Wert enthalten ist, um das L2-Gas zu bezahlen); dadurch wird der Transfer eingeleitet und Ihr L1-Subgraph außer Kraft gesetzt (siehe "Verstehen, was mit Signal, Ihrem L1-Subgraph und Abfrage-URLs passiert" weiter oben für weitere Details darüber, was hinter den Kulissen passiert). 
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). Wenn Sie diesen Schritt ausführen, **vergewissern Sie sich, dass Sie bis zum Abschluss von Schritt 3 in weniger als 7 Tagen fortfahren, sonst gehen der Subgraph und Ihr Signal GRT verloren.** Dies liegt daran, wie L1-L2-Nachrichten auf Arbitrum funktionieren: Nachrichten, die über die Brücke gesendet werden, sind "wiederholbare Tickets", die innerhalb von 7 Tagen ausgeführt werden müssen, und die erste Ausführung muss möglicherweise wiederholt werden, wenn es Spitzen im Gaspreis auf Arbitrum gibt. -![Start the transfer to L2](/img/startTransferL2.png) +![Start der Übertragung auf L2](/img/startTransferL2.png) -## Schritt 2: Warten, bis der Untergraph L2 erreicht hat +## Step 2: Waiting for the Subgraph to get to L2 -Nachdem Sie die Übertragung gestartet haben, muss die Nachricht, die Ihren L1-Subgraphen an L2 sendet, die Arbitrum-Brücke durchlaufen. Dies dauert etwa 20 Minuten (die Brücke wartet darauf, dass der Mainnet-Block, der die Transaktion enthält, vor potenziellen Reorgs der Kette "sicher" ist). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Sobald diese Wartezeit abgelaufen ist, versucht Arbitrum, die Übertragung auf den L2-Verträgen automatisch auszuführen. 
@@ -92,74 +92,74 @@ Zu diesem Zeitpunkt wurden Ihr Subgraph und GRT auf Arbitrum empfangen, aber der ![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Dadurch wird der Untergraph veröffentlicht, so dass Indexer, die auf Arbitrum arbeiten, damit beginnen können, ihn zu bedienen. Es wird auch ein Kurationssignal unter Verwendung der GRT, die von L1 übertragen wurden, eingeleitet. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Schritt 5: Aktualisierung der Abfrage-URL -Ihr Subgraph wurde erfolgreich zu Arbitrum übertragen! Um den Subgraphen abzufragen, wird die neue URL lauten: +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Beachten Sie, dass die ID des Subgraphen auf Arbitrum eine andere sein wird als die, die Sie im Mainnet hatten, aber Sie können sie immer im Explorer oder Studio finden. Wie oben erwähnt (siehe "Verstehen, was mit Signal, Ihrem L1-Subgraphen und Abfrage-URLs passiert"), wird die alte L1-URL noch eine kurze Zeit lang unterstützt, aber Sie sollten Ihre Abfragen auf die neue Adresse umstellen, sobald der Subgraph auf L2 synchronisiert worden ist. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. 
## Wie Sie Ihre Kuration auf Arbitrum übertragen (L2) -## Verstehen, was mit der Kuration bei der Übertragung von Untergraphen auf L2 geschieht +## Understanding what happens to curation on Subgraph transfers to L2 -Wenn der Eigentümer eines Untergraphen einen Untergraphen an Arbitrum überträgt, werden alle Signale des Untergraphen gleichzeitig in GRT konvertiert. Dies gilt für "automatisch migrierte" Signale, d.h. Signale, die nicht spezifisch für eine Subgraphenversion oder einen Einsatz sind, sondern der neuesten Version eines Subgraphen folgen. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Diese Umwandlung von Signal in GRT entspricht dem, was passieren würde, wenn der Eigentümer des Subgraphen den Subgraphen in L1 verwerfen würde. Wenn der Subgraph veraltet oder übertragen wird, werden alle Kurationssignale gleichzeitig "verbrannt" (unter Verwendung der Kurationsbindungskurve) und das resultierende GRT wird vom GNS-Smart-Contract gehalten (das ist der Vertrag, der Subgraph-Upgrades und automatisch migrierte Signale handhabt). Jeder Kurator auf diesem Subgraphen hat daher einen Anspruch auf dieses GRT proportional zu der Menge an Anteilen, die er für den Subgraphen hatte. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. 
-Ein Teil dieser GRT, der dem Eigentümer des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -Ein Teil dieser GRT, der dem Eigentümer des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Ein Teil dieser GRT, der dem Inhaber des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. Ein Teil dieser GRT, der dem Eigentümer des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. -If you're using a "regular" wallet like Metamask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same Curator address as in L1. +Wenn Sie eine "normale" Wallet wie MetaMask verwenden (ein Externally Owned Account oder EOA, d.h. eine Wallet, die kein Smart Contract ist), dann ist dies optional und es wird empfohlen, die gleiche Kurator-Adresse wie in L1 beizubehalten. -If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 receiving wallet address. +Wenn Sie eine Smart-Contract-Wallet wie eine Multisig (z.B. 
einen Safe) verwenden, dann ist die Wahl einer anderen L2-Wallet-Adresse zwingend erforderlich, da es sehr wahrscheinlich ist, dass dieses Konto nur im Mainnet existiert und Sie mit dieser Wallet keine Transaktionen auf Arbitrum durchführen können. Wenn Sie weiterhin eine Smart Contract Wallet oder Multisig verwenden möchten, erstellen Sie eine neue Wallet auf Arbitrum und verwenden Sie deren Adresse als L2-Empfangs-Wallet-Adresse. -**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum, as otherwise the curation will be lost and cannot be recovered.** +**Es ist äußerst wichtig, eine Wallet-Adresse zu verwenden, die Sie kontrollieren und mit der Sie Transaktionen auf Arbitrum durchführen können, da sonst die Kuration verloren geht und nicht wiederhergestellt werden kann.** -## Sending curation to L2: Step 1 +## Senden der Kuration an L2: Schritt 1 -Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. +Bevor Sie mit dem Transfer beginnen, müssen Sie entscheiden, welche Adresse die Kuration auf L2 besitzen wird (siehe „Auswahl Ihrer L2 Wallet“ oben), und es wird empfohlen, einige ETH für Gas bereits auf Arbitrum überbrückt zu haben, falls Sie die Ausführung der Nachricht auf L2 wiederholen müssen. 
Sie können ETH auf einigen Börsen kaufen und sie direkt auf Arbitrum abheben, oder Sie können die Arbitrum-Bridge benutzen, um ETH von einer Mainnet Wallet zu L2 zu senden: [bridge.arbitrum.io](http://bridge.arbitrum.io) - da die Gasgebühren auf Arbitrum so niedrig sind, sollten Sie nur eine kleine Menge benötigen, z.B. 0,01 ETH wird wahrscheinlich mehr als genug sein. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +Wenn ein Subgraph, den Sie kuratieren, auf L2 übertragen wurde, wird im Explorer eine Meldung angezeigt, dass Sie einen übertragenen Subgraph kuratieren. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +Auf der Subgraph-Seite können Sie wählen, ob Sie die Kuration zurückziehen oder übertragen wollen. Ein Klick auf „Signal nach Arbitrum übertragen“ öffnet das Übertragungstool. ![Transfer signal](/img/transferSignalL2TransferTools.png) -After opening the Transfer Tool, you may be prompted to add some ETH to your wallet if you don't have any. Then you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Signal will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer. +Nachdem Sie das Transfer-Tool geöffnet haben, werden Sie möglicherweise aufgefordert, Ihrer Wallet ETH hinzuzufügen, falls Sie keine haben. Dann können Sie die Adresse der L2-Wallet in das Feld „Receiving wallet address“ (Adresse der empfangenden Wallet) eingeben - **vergewissern Sie sich, dass Sie hier die richtige Adresse eingegeben haben**. 
Wenn Sie auf „Transfer Signal“ klicken, werden Sie aufgefordert, die Transaktion auf Ihrer Wallet auszuführen (beachten Sie, dass ein gewisser ETH-Wert enthalten ist, um das L2-Gas zu bezahlen); dadurch wird der Transfer eingeleitet. -If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +Wenn Sie diesen Schritt ausführen, **vergewissern Sie sich, dass Sie bis zum Abschluss von Schritt 3 in weniger als 7 Tagen fortfahren, sonst geht Ihr Signal GRT verloren.** Das liegt daran, wie der L1-L2-Nachrichtenaustausch auf Arbitrum funktioniert: Nachrichten, die über die Bridge gesendet werden, sind „wiederholbare Tickets“, die innerhalb von 7 Tagen ausgeführt werden müssen, und die anfängliche Ausführung muss möglicherweise wiederholt werden, wenn es Spitzen im Gaspreis auf Arbitrum gibt. -## Sending curation to L2: step 2 +## Senden der Kuration an L2: Schritt 2 -Starting the transfer: +Starten Sie den Transfer: ![Send signal to L2](/img/sendingCurationToL2Step2First.png) -After you start the transfer, the message that sends your L1 curation to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +Nachdem Sie die Übertragung gestartet haben, muss die Nachricht, die Ihre L1-Kuration an L2 sendet, die Arbitrum-Bridge durchlaufen. Dies dauert etwa 20 Minuten (die Bridge wartet darauf, dass der Mainnet-Block, der die Transaktion enthält, vor potenziellen Chain Reorgs „sicher“ ist). Sobald diese Wartezeit abgelaufen ist, versucht Arbitrum, die Übertragung auf den L2-Verträgen automatisch auszuführen. 
![Sending curation signal to L2](/img/sendingCurationToL2Step2Second.png) -## Sending curation to L2: step 3 +## Senden der Kuration an L2: Schritt 3 -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the curation on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your curation to L2 will be pending and require a retry within 7 days. +In den meisten Fällen wird dieser Schritt automatisch ausgeführt, da das in Schritt 1 enthaltene L2-Gas ausreichen sollte, um die Transaktion auszuführen, die die Kuration auf den Arbitrum-Verträgen erhält. In einigen Fällen ist es jedoch möglich, dass ein Anstieg der Gaspreise auf Arbitrum dazu führt, dass diese automatische Ausführung fehlschlägt. In diesem Fall wird das „Ticket“, das Ihre Kuration an L2 sendet, ausstehend sein und einen erneuten Versuch innerhalb von 7 Tagen erfordern. Wenn dies der Fall ist, müssen Sie sich mit einer L2-Wallet verbinden, die etwas ETH auf Arbitrum hat, Ihr Wallet-Netzwerk auf Arbitrum umstellen und auf "Confirm Transfer" klicken, um die Transaktion zu wiederholen. ![Send signal to L2](/img/L2TransferToolsFinalCurationImage.png) -## Withdrawing your curation on L1 +## Zurückziehen Ihrer Kuration auf L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. +Wenn Sie es vorziehen, Ihre GRT nicht an L2 zu senden, oder wenn Sie die GRT lieber manuell überbrücken möchten, können Sie Ihre kuratierten GRT auf L1 abheben. Wählen Sie auf dem Banner auf der Subgraph-Seite „Signal zurückziehen“ und bestätigen Sie die Transaktion; die GRT werden an Ihre Kurator-Adresse gesendet. 
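For client applications, the query-URL change described in this guide boils down to building the new gateway URL from the template `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`. A minimal sketch of that step follows; the helper name and the placeholder values are illustrative, and only the URL template itself comes from the guide. Remember that the L2 Subgraph ID differs from the L1 ID and must be looked up in Explorer or Studio:

```typescript
// Build the post-transfer L2 query URL from the template in the guide above.
// Assumption: you have already looked up the new L2 subgraph ID in Explorer
// or Studio, since it is NOT the same as the L1 subgraph ID.
function l2QueryUrl(apiKey: string, l2SubgraphId: string): string {
  return `https://arbitrum-gateway.thegraph.com/api/${apiKey}/subgraphs/id/${l2SubgraphId}`;
}

// Example with placeholder values (not a real key or deployment):
console.log(l2QueryUrl("my-api-key", "ExampleL2SubgraphId"));
```

Switching clients to this URL as soon as the Subgraph has synced on L2 avoids the extra latency of the temporary L1-to-L2 gateway forwarding mentioned earlier in the guide.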
diff --git a/website/src/pages/de/archived/sunrise.mdx b/website/src/pages/de/archived/sunrise.mdx index 398fe1ca72f7..5b521b176ffc 100644 --- a/website/src/pages/de/archived/sunrise.mdx +++ b/website/src/pages/de/archived/sunrise.mdx @@ -1,13 +1,13 @@ --- title: Post-Sunrise + Upgrade auf The Graph Network FAQ -sidebarTitle: Post-Sunrise Upgrade FAQ +sidebarTitle: FAQ zum Post-Sunrise-Upgrade --- > Hinweis: Die Sunrise der dezentralisierten Daten endete am 12. Juni 2024. ## Was war die Sunrise der dezentralisierten Daten? -Die Sunrise of Decentralized Data war eine Initiative, die von Edge & Node angeführt wurde. Diese Initiative ermöglichte es Subgraph-Entwicklern, nahtlos auf das dezentrale Netzwerk von The Graph zu wechseln. +Die Sunrise der dezentralisierten Daten war eine Initiative, die von Edge & Node angeführt wurde. Diese Initiative ermöglichte es Subgraph-Entwicklern, nahtlos auf das dezentrale Netzwerk von The Graph zu wechseln. Dieser Plan stützt sich auf frühere Entwicklungen des Graph-Ökosystems, einschließlich eines aktualisierten Indexers, der Abfragen auf neu veröffentlichte Subgraphen ermöglicht.
diff --git a/website/src/pages/de/contracts.json b/website/src/pages/de/contracts.json index b33760446ae8..6b94c57a82a5 100644 --- a/website/src/pages/de/contracts.json +++ b/website/src/pages/de/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "Vertrag", "address": "Adress" } diff --git a/website/src/pages/de/global.json b/website/src/pages/de/global.json index 424bff2965bc..99f5545ec43c 100644 --- a/website/src/pages/de/global.json +++ b/website/src/pages/de/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "Hauptmenü", - "show": "Show navigation", - "hide": "Hide navigation", - "subgraphs": "Subgraphs", + "show": "Navigation anzeigen", + "hide": "Navigation ausblenden", + "subgraphs": "Subgraphen", "substreams": "Substreams", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", + "sps": "Substreams-getriebene Subgraphen", + "tokenApi": "Token API", + "indexing": "Indizierung", "resources": "Ressourcen", - "archived": "Archived" + "archived": "Archiviert" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "Zuletzt aktualisiert", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "Lesedauer", + "minutes": "Minuten" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "Vorherige Seite", + "next": "Nächste Seite", + "edit": "Auf GitHub bearbeiten", + "onThisPage": "Auf dieser Seite", + "tableOfContents": "Inhaltsübersicht", + "linkToThisSection": "Link zu diesem Abschnitt" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Abfrage-Parameter", + "headerParameters": "Header Parameters", + 
"cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Beschreibung", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Beschreibung", + "liveResponse": "Live Response", + "example": "Beispiel" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "Ups! 
Diese Seite ist im Space verloren gegangen...", + "subtitle": "Überprüfen Sie, ob Sie die richtige Adresse verwenden, oder besuchen Sie unsere Website, indem Sie auf den unten stehenden Link klicken.", + "back": "Zurück zur Startseite" } } diff --git a/website/src/pages/de/index.json b/website/src/pages/de/index.json index fccfa5cf2a6c..b56ea56c5897 100644 --- a/website/src/pages/de/index.json +++ b/website/src/pages/de/index.json @@ -2,41 +2,41 @@ "title": "Home", "hero": { "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", + "description": "Starten Sie Ihr Web3-Projekt mit den Tools zum Extrahieren, Transformieren und Laden von Blockchain-Daten.", + "cta1": "Funktionsweise von The Graph", "cta2": "Erstellen Sie Ihren ersten Subgraphen" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "Wählen Sie eine Lösung, die Ihren Anforderungen entspricht, und interagieren Sie auf Ihre Weise mit Blockchain-Daten.", "subgraphs": { "title": "Subgraphs", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "Extrahieren, Verarbeiten und Abfragen von Blockchain-Daten mit offenen APIs.", + "cta": "Entwickeln Sie einen Subgraphen" }, "substreams": { "title": "Substreams", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "Abrufen und Konsumieren von Blockchain-Daten mit paralleler Ausführung.", + "cta": "Entwickeln mit Substreams" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "Substreams-getriebene 
Subgraphen", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", + "cta": "Einrichten eines Substreams-powered Subgraphen" }, "graphNode": { - "title": "Graph Node", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "title": "Graph-Knoten", + "description": "Indexieren Sie Blockchain-Daten und stellen Sie sie über GraphQL-Abfragen bereit.", + "cta": "Lokalen Graph-Knoten einrichten" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "Extrahieren Sie Blockchain-Daten in flache Dateien, um die Synchronisierungszeiten und Streaming-Funktionen zu verbessern.", + "cta": "Erste Schritte mit Firehose" } }, "supportedNetworks": { - "title": "Supported Networks", + "title": "Unterstützte Netzwerke", "details": "Network Details", "services": "Services", "type": "Type", @@ -44,7 +44,7 @@ "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Dokumente", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -54,9 +54,9 @@ "infoText": "Boost your developer experience by enabling The Graph's indexing network.", "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph unterstützt {0}. Um ein neues Netzwerk hinzuzufügen, {1}", + "networks": "Netzwerke", + "completeThisForm": "füllen Sie dieses Formular aus" }, "emptySearch": { "title": "No networks found", @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "Abrechnung", "description": "Optimize costs and manage billing efficiently." 
} }, @@ -123,53 +123,53 @@ "title": "Guides", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "Daten im Graph Explorer finden", + "description": "Nutzen Sie Hunderte von öffentlichen Subgraphen für bestehende Blockchain-Daten." }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Veröffentlichen eines Subgraphen", + "description": "Fügen Sie Ihren Subgraphen dem dezentralen Netzwerk hinzu." }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "Substreams veröffentlichen", + "description": "Starten Sie Ihr Substreams-Paket in der Substreams-Registry." }, "queryingBestPractices": { - "title": "Querying Best Practices", - "description": "Optimize your subgraph queries for faster, better results." + "title": "Best Practices für Abfragen", + "description": "Optimieren Sie Ihre Subgraphenabfragen für schnellere und bessere Ergebnisse." }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "Optimierte Zeitreihen & Aggregationen", + "description": "Optimieren Sie Ihren Subgraphen für mehr Effizienz." }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "API-Schlüssel-Management", + "description": "Einfaches Erstellen, Verwalten und Sichern von API-Schlüsseln für Ihre Subgraphen." }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "Übertragung auf The Graph", + "description": "Aktualisieren Sie Ihren Subgraphen nahtlos von jeder Plattform aus."
} }, "videos": { "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "watchOnYouTube": "Auf YouTube ansehen", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "Was ist Delegieren?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Indizierung von Solana mit einem Substreams-powered Subgraph", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." 
} }, "time": { - "reading": "Reading time", - "duration": "Duration", + "reading": "Lesedauer", + "duration": "Laufzeit", "minutes": "min" } } diff --git a/website/src/pages/de/indexing/_meta-titles.json b/website/src/pages/de/indexing/_meta-titles.json index 42f4de188fd4..ccfae2db5e84 100644 --- a/website/src/pages/de/indexing/_meta-titles.json +++ b/website/src/pages/de/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Indexierer-Tools" } diff --git a/website/src/pages/de/indexing/new-chain-integration.mdx b/website/src/pages/de/indexing/new-chain-integration.mdx index 54d9b95d5a24..eed49796a99f 100644 --- a/website/src/pages/de/indexing/new-chain-integration.mdx +++ b/website/src/pages/de/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: Integration neuer Ketten --- -Ketten können die Unterstützung von Subgraphen in ihr Ökosystem einbringen, indem sie eine neue `graph-node` Integration starten. Subgraphen sind ein leistungsfähiges Indizierungswerkzeug, das Entwicklern eine Welt voller Möglichkeiten eröffnet. Graph Node indiziert bereits Daten von den hier aufgeführten Ketten. Wenn Sie an einer neuen Integration interessiert sind, gibt es 2 Integrationsstrategien: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: Alle Firehose-Integrationslösungen umfassen Substreams, eine groß angelegte Streaming-Engine auf der Grundlage von Firehose mit nativer `graph-node`-Unterstützung, die parallelisierte Transformationen ermöglicht.
@@ -51,7 +51,7 @@ Während JSON-RPC und Firehose beide für Subgraphen geeignet sind, ist für Ent - All diese `getLogs`-Aufrufe und Roundtrips werden durch einen einzigen Stream ersetzt, der im Herzen von `graph-node` ankommt; ein einziges Blockmodell für alle Subgraphen, die es verarbeitet. -> HINWEIS: Bei einer Firehose-basierten Integration für EVM-Ketten müssen Indexer weiterhin den Archiv-RPC-Knoten der Kette ausführen, um Subgraphen ordnungsgemäß zu indizieren. Dies liegt daran, dass der Firehose nicht in der Lage ist, den Smart-Contract-Status bereitzustellen, der normalerweise über die RPC-Methode „eth_call“ zugänglich ist. (Es ist erwähnenswert, dass `eth_calls` keine gute Praxis für Entwickler sind) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph-Node Konfiguration diff --git a/website/src/pages/de/indexing/overview.mdx b/website/src/pages/de/indexing/overview.mdx index 05530cbff93a..4635fbb7f2b9 100644 --- a/website/src/pages/de/indexing/overview.mdx +++ b/website/src/pages/de/indexing/overview.mdx @@ -5,43 +5,43 @@ sidebarTitle: Überblick Indexer sind Knotenbetreiber im Graph Network, die Graph Tokens (GRT) einsetzen, um Indizierungs- und Abfrageverarbeitungsdienste anzubieten. Indexer verdienen Abfragegebühren und Indexing Rewards für ihre Dienste. Sie verdienen auch Abfragegebühren, die gemäß einer exponentiellen Rabattfunktion zurückerstattet werden. -GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network. 
+Die im Protokoll eingesetzte GRT unterliegt einer Auftauphase (thawing period) und kann gekürzt werden, wenn Indexierer böswillig sind und Anwendungen falsche Daten liefern oder wenn sie falsch indizieren. Indexierer erhalten auch Belohnungen für delegierte Einsätze von Delegatoren, die damit zum Netzwerk beitragen. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Die Indexierer wählen die zu indexierenden Subgraphen auf der Grundlage des Kurationssignals des Subgraphen aus, wobei die Kuratoren GRT einsetzen, um anzugeben, welche Subgraphen von hoher Qualität sind und priorisiert werden sollten. Verbraucher (z. B. Anwendungen) können auch Parameter dafür festlegen, welche Indexierer Abfragen für ihre Subgraphen verarbeiten, und Präferenzen für die Preisgestaltung für Abfragen festlegen. ## FAQ -### What is the minimum stake required to be an Indexer on the network? +### Wie hoch ist der Mindesteinsatz, der erforderlich ist, um ein Indexierer im Netzwerk zu sein? -The minimum stake for an Indexer is currently set to 100K GRT. +Der Mindesteinsatz für einen Indexer ist derzeit auf 100.000 GRT festgelegt. -### What are the revenue streams for an Indexer? +### Welche Einnahmequellen gibt es für einen Indexierer? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**Query fee rebates** - Zahlungen für die Bedienung von Abfragen im Netz. Diese Zahlungen werden über Statuskanäle zwischen einem Indexer und einem Gateway vermittelt.
Jede Abfrageanfrage eines Gateways enthält eine Zahlung und die entsprechende Antwort einen Nachweis für die Gültigkeit des Abfrageergebnisses. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexierungsbelohnungen** - Die Indexierungsbelohnungen werden über eine jährliche protokollweite Inflation von 3% an Indexer verteilt, die Subgraph-Deployments für das Netzwerk indexieren. -### How are indexing rewards distributed? +### Wie werden die Indexierungsprämien verteilt? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexierungsbelohnungen stammen aus der Protokollinflation, die auf 3 % pro Jahr festgelegt ist. Sie werden auf der Grundlage des Anteils aller Kurationssignale auf jedem Subgraphen verteilt und dann anteilig an die Indexierer auf der Grundlage ihres zugewiesenen Anteils an diesem Subgraphen verteilt. **Eine Zuteilung muss mit einem gültigen Indizierungsnachweis (POI) abgeschlossen werden, der die in der Schlichtungscharta festgelegten Standards erfüllt, um für Belohnungen in Frage zu kommen.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol).
Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +Die Community hat zahlreiche Tools zur Berechnung von Rewards erstellt, die in der [Community-Guides-Sammlung](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c) zusammengefasst sind. Eine aktuelle Liste von Tools finden Sie auch in den Channels #Delegators und #Indexers auf dem [Discord-Server](https://discord.gg/graphprotocol). Hier verlinken wir einen [empfohlenen Allokationsoptimierer](https://github.com/graphprotocol/allocation-optimizer), der in den Indexer-Software-Stack integriert ist. -### What is a proof of indexing (POI)? +### Was ist ein Indizierungsnachweis (proof of indexing - POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs werden im Netzwerk verwendet, um zu überprüfen, ob ein Indexierer die von ihm zugewiesenen Subgraphen indexiert. Ein POI für den ersten Block der aktuellen Epoche muss beim Schließen einer Zuweisung eingereicht werden, damit diese Zuweisung für Indizierungsbelohnungen in Frage kommt. Ein POI für einen Block ist eine Zusammenfassung aller Entity-Store-Transaktionen für ein bestimmtes Subgraph-Deployment bis einschließlich dieses Blocks. -### When are indexing rewards distributed? +### Wann werden Indizierungsprämien verteilt? -Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed.
That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +Zuteilungen sammeln kontinuierlich Belohnungen an, solange sie aktiv sind und innerhalb von 28 Epochen zugeteilt wurden. Belohnungen werden von den Indexierern gesammelt und verteilt, sobald ihre Zuteilungen geschlossen sind. Das geschieht entweder manuell, wenn der Indexierer das Schließen erzwingen möchte, oder nach 28 Epochen kann ein Delegator die Zuteilung für den Indexierer schließen, aber dies führt zu keinen Belohnungen. 28 Epochen ist die maximale Zuweisungslebensdauer (im Moment dauert eine Epoche etwa 24 Stunden). -### Can pending indexing rewards be monitored? +### Können ausstehende Indizierungsprämien überwacht werden? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +Der RewardsManager-Vertrag verfügt über eine schreibgeschützte Funktion [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316), mit der die ausstehenden Rewards für eine bestimmte Zuweisung überprüft werden können. -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +Viele der von der Community erstellten Dashboards enthalten ausstehende Prämienwerte und können einfach manuell überprüft werden, indem Sie diesen Schritten folgen: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1.
Abfrage des [Mainnet-Subgraphen](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one), um die IDs für alle aktiven Zuweisungen zu erhalten: ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Verwenden Sie Etherscan, um `getRewards()` aufzurufen: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +- Navigieren Sie zur [Etherscan-Schnittstelle zum Rewards-Vertrag](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Zum Aufrufen von `getRewards()`: + - Erweitern Sie das Dropdown-Menü **9. getRewards**. + - Geben Sie die **allocationID** in die Eingabe ein. + - Klicken Sie auf die Schaltfläche **Query**. -### What are disputes and where can I view them? +### Was sind Streitfälle und wo kann ich sie einsehen? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +Sowohl die Abfragen als auch die Zuordnungen des Indexierers können während des Streitzeitraums auf The Graph angefochten werden. Die Streitdauer variiert je nach Streitfall. Abfragen/Bescheinigungen haben ein 7-Epochen-Streitfenster, während Zuweisungen 56 Epochen haben.
Nach Ablauf dieser Fristen können weder Zuweisungen noch Abfragen angefochten werden. Wenn eine Streitigkeit eröffnet wird, wird von den Fischern eine Kaution von mindestens 10.000 GRT verlangt, die gesperrt wird, bis die Streitigkeit abgeschlossen ist und eine Lösung gefunden wurde. Fischer sind alle Netzwerkteilnehmer, die Streitigkeiten eröffnen. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +Bei Streitigkeiten gibt es **drei** mögliche Ergebnisse, so auch bei der Kaution der Fischer. -If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- Wird die Anfechtung zurückgewiesen, werden die von den Fischern hinterlegten GRT verbrannt, und der angefochtene Indexierer wird nicht gekürzt. +- Wird der Streitfall durch ein Unentschieden entschieden, wird die Kaution der Fischer zurückerstattet und der strittige Indexierer wird nicht gekürzt. +- Wird dem Einspruch stattgegeben, werden die von den Fischern eingezahlten GRT zurückerstattet, der strittige Indexer wird gekürzt und die Fischer erhalten 50 % der gekürzten GRT. -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +Streitfälle können in der Benutzeroberfläche auf der Profilseite eines Indexierers unter der Registerkarte `Disputes` angezeigt werden. -### What are query fee rebates and when are they distributed? +### Was sind Rückerstattungen von Abfragegebühren und wann werden sie ausgeschüttet?
-Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. +Die Abfragegebühren werden vom Gateway eingezogen und gemäß der exponentiellen Rabattfunktion an die Indexierer verteilt (siehe GIP [hier](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). Die exponentielle Rabattfunktion wird vorgeschlagen, um sicherzustellen, dass die Indexierer das beste Ergebnis erzielen, indem sie Abfragen zuverlässig bedienen. Sie bietet den Indexierern einen Anreiz, einen hohen Einsatz (der bei Fehlern bei der Bedienung einer Anfrage gekürzt werden kann) im Verhältnis zur Höhe der Abfragegebühren, die sie einnehmen können, zu leisten. -Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +Sobald eine Zuteilung abgeschlossen ist, können die Rabatte vom Indexierer beansprucht werden. Nach der Beantragung werden die Abfragegebührenrabatte auf der Grundlage der Abfragegebührenkürzung und der exponentiellen Rabattfunktion an den Indexer und seine Delegatoren verteilt. -### What is query fee cut and indexing reward cut? +### Was ist die Kürzung der Abfragegebühr und die Kürzung der Indizierungsprämie? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators.
See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. +Die Werte `queryFeeCut` und `indexingRewardCut` sind Delegationsparameter, die der Indexer zusammen mit cooldownBlocks setzen kann, um die Verteilung von GRT zwischen dem Indexer und seinen Delegatoren zu kontrollieren. Siehe die letzten Schritte in [Staking im Protokoll](/indexing/overview/#stake-in-the-protocol) für Anweisungen zur Einstellung der Delegationsparameter. -- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** - der Prozentsatz der Rückerstattungen von Abfragegebühren, der an den Indexer verteilt wird. Wenn dieser Wert auf 95 % gesetzt ist, erhält der Indexer 95 % der Abfragegebühren, die beim Abschluss einer Zuteilung anfallen, während die restlichen 5 % an die Delegatoren gehen. -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. +- **indexingRewardCut** - der Prozentsatz der Indizierungsbelohnungen, der an den Indexer verteilt wird. Wenn dieser Wert auf 95 % gesetzt ist, erhält der Indexierer 95 % der Indizierungsbelohnungen, wenn eine Zuweisung abgeschlossen wird, und die Delegatoren teilen sich die restlichen 5 %. -### How do Indexers know which subgraphs to index? +### Woher wissen die Indexierer, welche Subgraphen indexiert werden sollen?
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexierer können sich durch die Anwendung fortgeschrittener Techniken für die Indizierung von Subgraphen unterscheiden, aber um eine allgemeine Vorstellung zu vermitteln, werden wir einige Schlüsselmetriken diskutieren, die zur Bewertung von Subgraphen im Netzwerk verwendet werden: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Kurationssignal** - Der Anteil des Netzwerkkurationssignals, der auf einen bestimmten Subgraphen angewandt wird, ist ein guter Indikator für das Interesse an diesem Subgraphen, insbesondere während der Bootstrap-Phase, wenn das Abfragevolumen ansteigt. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Eingezogene Abfragegebühren** - Die historischen Daten zum Volumen der für einen bestimmten Subgraphen eingezogenen Abfragegebühren sind ein guter Indikator für die zukünftige Nachfrage. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. 
+- **Einsatzhöhe** - Die Beobachtung des Verhaltens anderer Indexierer oder die Betrachtung des Anteils am Gesamteinsatz, der bestimmten Subgraphen zugewiesen wird, kann es einem Indexierer ermöglichen, die Angebotsseite für Subgraphenabfragen zu überwachen, um Subgraphen zu identifizieren, in die das Netzwerk Vertrauen zeigt, oder Subgraphen, die möglicherweise einen Bedarf an mehr Angebot aufweisen. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphen ohne Indizierungsbelohnungen** - Einige Subgraphen erzeugen keine Indizierungsbelohnungen, hauptsächlich weil sie nicht unterstützte Funktionen wie IPFS verwenden oder weil sie ein anderes Netzwerk außerhalb des Hauptnetzes abfragen. Wenn ein Subgraph keine Indizierungsbelohnungen erzeugt, wird eine entsprechende Meldung angezeigt. -### What are the hardware requirements? +### Welche Hardware-Anforderungen gibt es? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Small** - Ausreichend, um mit der Indizierung mehrerer Subgraphen zu beginnen, wird wahrscheinlich erweitert werden müssen. +- **Standard** - Standardeinstellung, wie sie in den k8s/terraform-Beispielmanifesten verwendet wird. +- **Medium** - Produktionsindexer, der 100 Subgraphen und 200-500 Anfragen pro Sekunde unterstützt. 
+- **Large** - Vorbereitet, um alle derzeit verwendeten Subgraphen zu indizieren und Anfragen für den entsprechenden Verkehr zu bedienen. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| Konfiguration | Postgres
(CPUs) | Postgres
(Speicher in GB) | Postgres
(Festplatte in TB) | VMs
(CPUs) | VMs
(Speicher in GB) | | --- | :-: | :-: | :-: | :-: | :-: | | Small | 4 | 8 | 1 | 4 | 16 | | Standard | 8 | 30 | 1 | 12 | 48 | | Medium | 16 | 64 | 2 | 32 | 64 | | Large | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an Indexer should take? +### Was sind einige grundlegende Sicherheitsvorkehrungen, die ein Indexierer treffen sollte? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions. +- **Operator Wallet** - Die Einrichtung einer Operator Wallet ist eine wichtige Vorsichtsmaßnahme, da sie es einem Indexierer ermöglicht, eine Trennung zwischen seinen Schlüsseln, die den Einsatz kontrollieren, und den Schlüsseln, die für den täglichen Betrieb zuständig sind, aufrechtzuerhalten. Siehe [Staking im Protokoll](/indexing/overview/#stake-in-the-protocol) für Anweisungen. -- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **Firewall** - Nur der Indexierer-Dienst muss öffentlich zugänglich gemacht werden, und es sollte besonders darauf geachtet werden, dass die Admin-Ports und der Datenbankzugriff gesperrt werden: der Graph Node JSON-RPC-Endpunkt (Standard-Port: 8030), der Indexer-Management-API-Endpunkt (Standard-Port: 18000) und der Postgres-Datenbank-Endpunkt (Standard-Port: 5432) sollten nicht öffentlich zugänglich sein.
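Eine mögliche Umsetzung der Firewall-Empfehlung, hier als Skizze mit `ufw` unter Ubuntu (die Wahl von `ufw` ist eine Annahme; die Portnummern entsprechen den oben genannten Standardwerten, Port 7600 ist der weiter unten dokumentierte Standard-Port des Indexierer-Dienstes):

```shell
# Skizze: nur den Indexierer-Dienst öffentlich freigeben,
# Admin-Ports und Datenbankzugriff bleiben gesperrt
sudo ufw default deny incoming
sudo ufw allow 7600/tcp    # Indexierer-Dienst (öffentliche Abfragen)
sudo ufw deny 8030/tcp     # Graph Node JSON-RPC (Admin)
sudo ufw deny 18000/tcp    # Indexer-Management-API (Admin)
sudo ufw deny 5432/tcp     # Postgres-Datenbank
sudo ufw enable
```

Je nach Setup (z. B. Kubernetes mit eigenen Network Policies) ist stattdessen eine entsprechende Regel auf Cluster- oder Cloud-Ebene sinnvoll.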
-## Infrastructure +## Infrastruktur -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +Im Zentrum der Infrastruktur eines Indexierers steht der Graph Node, der die indizierten Netzwerke überwacht, Daten gemäß einer Subgraph-Definition extrahiert und lädt und sie als [GraphQL API](/about/#how-the-graph-works) bereitstellt. Der Graph Node benötigt eine Verbindung zu einem Endpunkt, der Daten aus jedem indizierten Netzwerk bereitstellt, einem IPFS-Knoten für die Datenbeschaffung, einer PostgreSQL-Datenbank als Speicher sowie Indexer-Komponenten, die seine Interaktionen mit dem Netzwerk erleichtern. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL-Datenbank** - Der Hauptspeicher für den Graph Node, in dem die Subgraphen-Daten gespeichert werden. Der Indexer-Dienst und der Agent verwenden die Datenbank auch zum Speichern von Statuskanaldaten, Kostenmodellen, Indizierungsregeln und Zuordnungsaktionen. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Datenendpunkt** - Bei EVM-kompatiblen Netzwerken muss der Graph Node mit einem Endpunkt verbunden sein, der eine EVM-kompatible JSON-RPC-API bereitstellt. Dabei kann es sich um einen einzelnen Client handeln oder um ein komplexeres Setup, das die Last auf mehrere Clients verteilt. Es ist wichtig, sich darüber im Klaren zu sein, dass bestimmte Subgraphen besondere Client-Fähigkeiten erfordern, wie z. B. den Archivmodus und/oder die Parity-Tracing-API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS-Knoten (Version kleiner als 5)** - Die Metadaten für die Subgraph-Bereitstellung werden im IPFS-Netzwerk gespeichert. Der Graph Node greift in erster Linie während der Bereitstellung des Subgraphen auf den IPFS-Knoten zu, um das Subgraphen-Manifest und alle verknüpften Dateien abzurufen. Netzwerk-Indexierer müssen keinen eigenen IPFS-Knoten hosten; ein IPFS-Knoten für das Netzwerk wird unter https://ipfs.network.thegraph.com gehostet. -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Indexierer-Dienst** - Wickelt die gesamte erforderliche externe Kommunikation mit dem Netzwerk ab. Teilt Kostenmodelle und Indizierungsstatus, leitet Abfrageanfragen von Gateways an einen Graph Node weiter und verwaltet die Abfragezahlungen über Statuskanäle mit dem Gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexierer-Agent** - Erleichtert die Interaktionen des Indexierers in der Kette, einschließlich der Registrierung im Netzwerk, der Verwaltung von Subgraph-Einsätzen in seine(n) Graph-Knoten und der Verwaltung von Zuweisungen. -- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. +- **Prometheus Metrics Server** - Die Komponenten Graph Node und Indexierer protokollieren ihre Metriken auf dem Metrics Server. -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +Hinweis: Um eine flexible Skalierung zu unterstützen, wird empfohlen, Abfrage- und Indizierungsbelange auf verschiedene Knotengruppen zu verteilen: Abfrageknoten und Indexknoten. -### Ports overview +### Übersicht über Ports -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below. +> **Wichtig**: Seien Sie vorsichtig damit, Ports öffentlich zugänglich zu machen - **Verwaltungsports** sollten unter Verschluss gehalten werden. Dies gilt auch für den Graph Node JSON-RPC und die Indexierer-Verwaltungsendpunkte, die im Folgenden beschrieben werden. -#### Graph Node +#### Graph-Knoten -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | GraphQL HTTP Server
(für Subgraph-Abfragen) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(für Subgraphen-Abonnements) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(zum Verwalten von Deployments) | / | \--admin-port | - | +| 8030 | API für den Indizierungsstatus von Subgraphen | /graphql | \--index-node-port | - | +| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - | -#### Indexer Service +#### Indexer-Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| 7600 | GraphQL HTTP Server
(für bezahlte Subgraph-Abfragen) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus-Metriken | /metrics | \--metrics-port | - | -#### Indexer Agent +#### Indexierer-Agent -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable | +| ---- | ----------------------- | ------ | -------------------------- | --------------------------------------- | +| 8000 | Indexer-Verwaltungs-API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Einrichten einer Server-Infrastruktur mit Terraform auf Google Cloud -> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. +> Hinweis: Indexierer können alternativ AWS, Microsoft Azure oder Alibaba nutzen. -#### Install prerequisites +#### Installieren Sie die Voraussetzungen -- Google Cloud SDK -- Kubectl command line tool +- Google Cloud-SDK +- Kubectl-Befehlszeilentool - Terraform -#### Create a Google Cloud Project +#### Erstellen Sie ein Google Cloud-Projekt -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- Klonen oder navigieren Sie zum [Indexierer-Repository](https://github.com/graphprotocol/indexer). -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- Navigieren Sie zum Verzeichnis `./terraform`, in dem alle Befehle ausgeführt werden sollen. ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Authentifizieren Sie sich bei Google Cloud und erstellen Sie ein neues Projekt.
```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Verwenden Sie die Abrechnungsseite der Google Cloud Console, um die Abrechnung für das neue Projekt zu aktivieren. -- Create a Google Cloud configuration. +- Erstellen Sie eine Google Cloud-Konfiguration. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Aktivieren Sie die erforderlichen Google Cloud-APIs. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Erstellen Sie ein Service-Konto. ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Aktivieren Sie das Peering zwischen der Datenbank und dem Kubernetes-Cluster, der im nächsten Schritt erstellt wird. ```sh gcloud compute addresses create google-managed-services-default \ @@ -243,41 +243,41 @@ gcloud compute addresses create google-managed-services-default \ --purpose=VPC_PEERING \ --network default \ --global \ - --description 'IP Range for peer networks.' + --description 'IP Range for peer networks.' gcloud services vpc-peerings connect \ --network=default \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- Erstellen Sie eine minimale Terraform-Konfigurationsdatei (aktualisieren Sie sie nach Bedarf).
```sh indexer= cat > terraform.tfvars < **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> HINWEIS: Alle Laufzeit-Konfigurationsvariablen können entweder beim Start als Befehlsparameter oder mithilfe von Umgebungsvariablen im Format `COMPONENT_NAME_VARIABLE_NAME` (z. B. `INDEXER_AGENT_ETHEREUM`) angewandt werden. -#### Indexer agent +#### Indexierer-Agent ```sh graph-indexer-agent start \ @@ -488,7 +488,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Indexierer-Service ```sh SERVER_HOST=localhost \ @@ -514,58 +514,58 @@ graph-indexer-service start \ | pino-pretty ``` -#### Indexer CLI +#### Indexierer-CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +Die Indexierer-CLI ist ein Plugin für [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli), das im Terminal unter `graph indexer` erreichbar ist. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### Indexierer-Verwaltung mit Indexierer-CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API.
Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +Das vorgeschlagene Werkzeug für die Interaktion mit der **Indexierer-Management-API** ist die **Indexierer-CLI**, eine Erweiterung der **Graph CLI**. Der Indexierer-Agent benötigt Input von einem Indexierer, um im Namen des Indexierers autonom mit dem Netzwerk zu interagieren. Die Mechanismen zur Definition des Verhaltens des Indexierer-Agenten sind der **Zuweisungsmanagement**-Modus und **Indizierungsregeln**. Im automatischen Modus kann ein Indexierer **Indizierungsregeln** verwenden, um seine spezifische Strategie für die Auswahl von Subgraphen anzuwenden, die er indizieren und für die er Abfragen liefern soll. Die Regeln werden über eine GraphQL-API verwaltet, die vom Agenten bereitgestellt wird und als Indexer Management API bekannt ist. Im manuellen Modus kann ein Indexierer Zuordnungsaktionen über die **Aktionswarteschlange** erstellen und sie explizit genehmigen, bevor sie ausgeführt werden. Im Überwachungsmodus werden **Indizierungsregeln** verwendet, um die **Aktionswarteschlange** zu füllen, und erfordern ebenfalls eine ausdrückliche Genehmigung für die Ausführung. -#### Usage +#### Verwendung -The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +Die **Indexierer-CLI** verbindet sich mit dem Indexierer-Agenten, in der Regel über Port-Forwarding, so dass die CLI nicht auf demselben Server oder Cluster laufen muss. Um Ihnen den Einstieg zu erleichtern und etwas Kontext zu liefern, wird die CLI hier kurz beschrieben. -- `graph indexer connect ` - Connect to the Indexer management API.
Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Verbindet mit der Indexierer-Verwaltungs-API. Typischerweise wird die Verbindung zum Server über Port-Forwarding geöffnet, so dass die CLI einfach aus der Ferne bedient werden kann. (Beispiel: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent. +- `graph indexer rules get [options] [ ...]` - Holt eine oder mehrere Indizierungsregeln unter Verwendung von `all` als ``, um alle Regeln zu erhalten, oder `global`, um die globalen Standardwerte zu erhalten. Ein zusätzliches Argument `--merged` kann verwendet werden, um anzugeben, dass einsatzspezifische Regeln mit der globalen Regel zusammengeführt werden. Auf diese Weise werden sie im Indexierer-Agenten angewendet. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - Eine oder mehrere Indizierungsregeln setzen. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Startet die Indizierung eines Subgraph-Einsatzes, wenn dieser verfügbar ist, und setzt seine `decisionBasis` auf `always`, so dass der Indexierer-Agent immer die Indizierung dieses Einsatzes wählt. Wenn die globale Regel auf `always` gesetzt ist, werden alle verfügbaren Subgraphen im Netzwerk indiziert.
-- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - Stoppt die Indizierung eines Einsatzes und setzt seine `decisionBasis` auf never, so dass er diesen Einsatz bei der Entscheidung über die zu indizierenden Einsätze überspringt. -- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` - Setzt die `decisionBasis` für ein Deployment auf `rules`, so dass der Indexierer-Agent Indizierungsregeln verwendet, um zu entscheiden, ob dieses Deployment indiziert werden soll. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Holt eine oder mehrere Aktionen mit `all` oder lässt `action-id` leer, um alle Aktionen zu erhalten. Ein zusätzliches Argument `--status` kann verwendet werden, um alle Aktionen mit einem bestimmten Status auszugeben. 
-- `graph indexer action queue allocate ` - Queue allocation action +- `graph indexer action queue allocate ` - Stellt eine Allocate-Aktion in die Warteschlange -- `graph indexer action queue reallocate ` - Queue reallocate action +- `graph indexer action queue reallocate ` - Stellt eine Reallocate-Aktion in die Warteschlange -- `graph indexer action queue unallocate ` - Queue unallocate action +- `graph indexer action queue unallocate ` - Stellt eine Unallocate-Aktion in die Warteschlange -- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator +- `graph indexer actions cancel [ ...]` - Bricht alle Aktionen in der Warteschlange ab, wenn keine id angegeben ist; andernfalls wird das durch Leerzeichen getrennte Array von ids abgebrochen -- `graph indexer actions approve [ ...]` - Approve multiple actions for execution +- `graph indexer actions approve [ ...]` - Mehrere Aktionen zur Ausführung freigeben -- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately +- `graph indexer actions execute approve` - Erzwingt die sofortige Ausführung genehmigter Aktionen durch den Worker -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +Alle Befehle, die Regeln in der Ausgabe anzeigen, können zwischen den unterstützten Ausgabeformaten (`table`, `yaml` und `json`) mit dem Argument `-output` wählen. -#### Indexing rules +#### Indizierungsregeln -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment.
If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indizierungsregeln können entweder als globale Standardwerte oder für bestimmte Subgraph-Einsätze unter Verwendung ihrer IDs angewendet werden. Die Felder `deployment` und `decisionBasis` sind obligatorisch, während alle anderen Felder optional sind. Wenn eine Indizierungsregel `rules` als `decisionBasis` hat, dann vergleicht der Indexierer-Agent die Schwellenwerte dieser Regel, die nicht Null sind, mit den Werten, die aus dem Netzwerk für den entsprechenden Einsatz geholt wurden. Wenn der Subgraph-Einsatz Werte über (oder unter) einem der Schwellenwerte hat, wird er für die Indizierung ausgewählt. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +Wenn zum Beispiel die globale Regel einen `minStake` von **5** (GRT) hat, wird jeder Einsatz von Subgraphen, dem mehr als 5 (GRT) zugewiesen wurden, indiziert. Zu den Schwellenwertregeln gehören `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, und `minAverageQueryFees`. -Data model: +Datenmodell: ```graphql type IndexingRule { @@ -599,7 +599,7 @@ IndexingDecisionBasis { } ``` -Example usage of indexing rule: +Beispiel für die Verwendung der Indizierungsregel: ``` graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK @@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK ``` -#### Actions queue CLI +#### Befehlszeilenschnittstelle (CLI) für die Aktionswarteschlange -The indexer-cli provides an `actions` module for manually working with the action queue. 
It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue. +Die indexer-cli bietet ein `actions`-Modul für die manuelle Arbeit mit der Aktionswarteschlange. Sie verwendet die **GraphQL-API**, die vom Indexierer-Verwaltungsserver gehostet wird, um mit der Aktionswarteschlange zu interagieren. -The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like: +Der Action Execution Worker holt sich nur dann Elemente aus der Warteschlange, um sie auszuführen, wenn sie den Status `ActionStatus = approved` haben. Im empfohlenen Ablauf werden Aktionen der Warteschlange mit `ActionStatus = queued` hinzugefügt, so dass sie dann genehmigt werden müssen, um in der Kette ausgeführt zu werden. Der allgemeine Ablauf sieht dann wie folgt aus: -- Action added to the queue by the 3rd party optimizer tool or indexer-cli user -- Indexer can use the `indexer-cli` to view all queued actions -- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input. -- The execution worker regularly polls the queue for approved actions. It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`. -- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode. -- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution.
The action queue provides a history of all actions queued and taken. +- Eine Aktion wird vom Drittanbieter-Optimierungstool oder vom indexer-cli-Benutzer zur Warteschlange hinzugefügt +- Der Indexierer kann die `indexer-cli` verwenden, um alle in der Warteschlange stehenden Aktionen zu sehen +- Der Indexierer (oder andere Software) kann Aktionen in der Warteschlange mithilfe der `indexer-cli` genehmigen oder abbrechen. Die Befehle `approve` und `cancel` nehmen ein Array von Aktions-Ids als Eingabe. +- Der Ausführungsworker fragt die Warteschlange regelmäßig nach genehmigten Aktionen ab. Er holt die `approved`-Aktionen aus der Warteschlange, versucht, sie auszuführen, und aktualisiert die Werte in der Datenbank je nach Ausführungsstatus auf `success` oder `failed`. +- Ist eine Aktion erfolgreich, stellt der Worker sicher, dass eine Indizierungsregel vorhanden ist, die dem Agenten mitteilt, wie er die Zuweisung in Zukunft verwalten soll. Dies ist nützlich, wenn manuelle Aktionen durchgeführt werden, während sich der Agent im `auto`- oder `oversight`-Modus befindet. +- Der Indexierer kann die Aktionswarteschlange überwachen, um einen Überblick über die Ausführung von Aktionen zu erhalten und bei Bedarf Aktionen, deren Ausführung fehlgeschlagen ist, erneut zu genehmigen und zu aktualisieren. Die Aktionswarteschlange bietet einen Überblick über alle in der Warteschlange stehenden und ausgeführten Aktionen.
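Der beschriebene Ablauf lässt sich mit den in diesem Dokument dokumentierten CLI-Befehlen skizzieren (Skizze; die Aktions-Ids 1 und 2 sowie der Port 18000 sind nur Platzhalter bzw. Standardwerte):

```shell
# Skizze des empfohlenen Ablaufs mit der indexer-cli
graph indexer connect http://localhost:18000   # mit der Indexierer-Verwaltungs-API verbinden
graph indexer actions get --status queued all  # wartende Aktionen ansehen
graph indexer actions approve 1 2              # ausgewählte Aktionen freigeben
graph indexer actions execute approve          # Worker führt freigegebene Aktionen sofort aus
```

Nach der Ausführung lässt sich der Status der Aktionen erneut mit `graph indexer actions get all` prüfen.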
-Data model: +Datenmodell: ```graphql Type ActionInput { @@ -657,7 +657,7 @@ ActionType { } ``` -Example usage from source: +Verwendungsbeispiel aus dem Quellcode: ```bash graph indexer actions get all @@ -677,141 +677,141 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +Beachten Sie, dass unterstützte Aktionstypen für das Allokationsmanagement unterschiedliche Eingabeanforderungen haben: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - Zuweisung von Einsatz (Stake) zu einem bestimmten Subgraph-Deployment - - required action params: + - erforderliche Aktionsparameter: - deploymentID - amount -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `Unallocate` - Beendigung der Zuweisung, wodurch der Einsatz für eine andere Zuweisung frei wird - - required action params: + - erforderliche Aktionsparameter: - allocationID - deploymentID - - optional action params: + - optionale Aktionsparameter: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (erzwingt die Verwendung des bereitgestellten POI, auch wenn er nicht mit dem übereinstimmt, was der Graph-Knoten bereitstellt) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - Zuordnung atomar schließen und eine neue Zuordnung für dasselbe Subgraph-Deployment öffnen - - required action params: + - erforderliche Aktionsparameter: - allocationID - deploymentID - amount - - optional action params: + - optionale Aktionsparameter: - poi - - force (erzwingt die Verwendung des bereitgestellten POI, auch wenn er nicht mit dem übereinstimmt, was der Graph-Knoten bereitstellt) -#### Cost models +#### Kostenmodelle
-Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Kostenmodelle ermöglichen eine dynamische Preisgestaltung für Abfragen auf der Grundlage von Markt- und Abfrageattributen. Der Indexierer-Service teilt ein Kostenmodell mit den Gateways für jeden Subgraphen, für den er Abfragen beantworten möchte. Die Gateways wiederum nutzen das Kostenmodell, um Entscheidungen über die Auswahl der Indexer pro Anfrage zu treffen und die Bezahlung mit den ausgewählten Indexern auszuhandeln. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +Die Agora-Sprache bietet ein flexibles Format zur Deklaration von Kostenmodellen für Abfragen. Ein Agora-Preismodell ist eine Folge von Anweisungen, die für jede Top-Level-Abfrage in einer GraphQL-Abfrage nacheinander ausgeführt werden. Für jede Top-Level-Abfrage bestimmt die erste Anweisung, die ihr entspricht, den Preis für diese Abfrage. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +Eine Anweisung besteht aus einem Prädikat, das zum Abgleich von GraphQL-Abfragen verwendet wird, und einem Kostenausdruck, der bei der Auswertung die Kosten in dezimalen GRT ausgibt.
Werte in der benannten Argumentposition einer Abfrage können im Prädikat erfasst und im Ausdruck verwendet werden. Globale Werte können auch gesetzt und durch Platzhalter in einem Ausdruck ersetzt werden. -Example cost model: +Beispielkostenmodell: ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# Diese Anweisung erfasst den Wert „skip“, +# verwendet einen booleschen Ausdruck im Prädikat, um mit bestimmten Abfragen übereinzustimmen, die `skip` verwenden +# und einen Kostenausdruck, um die Kosten auf der Grundlage des `skip`-Wertes und des globalen SYSTEM_LOAD-Wertes zu berechnen query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression. -# It uses a Global substituted into the expression to calculate cost +# Diese Vorgabe passt auf jeden GraphQL-Ausdruck. +# Sie verwendet ein Global, das in den Ausdruck eingesetzt wird, um die Kosten zu berechnen default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +Beispiel für eine Abfragekostenberechnung unter Verwendung des obigen Modells: -| Query | Price | +| Abfrage | Preis | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### Anwendung des Kostenmodells -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. 
+Kostenmodelle werden über die Indexierer-CLI angewendet, die sie zum Speichern in der Datenbank an die Indexierer-Verwaltungs-API des Indexierer-Agenten übergibt. Der Indexierer-Service holt sie dann ab und stellt Gateways die Kostenmodelle zur Verfügung, jedes Mal, wenn sie danach fragen. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## Interaktion mit dem Netzwerk -### Stake in the protocol +### Einsatz im Protokoll -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. +Die ersten Schritte zur Teilnahme am Netzwerk als Indexierer sind die Genehmigung des Protokolls, der Einsatz von Geldern und (optional) die Einrichtung einer Betreiberadresse für die täglichen Interaktionen mit dem Protokoll. -> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> Hinweis: In dieser Anleitung wird Remix für die Interaktion mit dem Vertrag verwendet, aber Sie können auch das Tool Ihrer Wahl verwenden ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) und [MyCrypto](https://www.mycrypto.com/account) sind einige andere bekannte Tools). -Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network. +Sobald ein Indexer GRT im Protokoll eingesetzt hat, können die [Indexierer-Komponenten](/indexing/overview/#indexer-components) gestartet werden und ihre Interaktionen mit dem Netzwerk beginnen. -#### Approve tokens +#### Genehmigen Sie Token -1.
Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Öffnen Sie die [Remix-App](https://remix.ethereum.org/) in einem Browser -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. Erstellen Sie im `File Explorer` eine Datei mit dem Namen **GraphToken.abi** mit dem [Token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Wählen Sie die Datei `GraphToken.abi` aus und öffnen Sie sie im Editor. Wechseln Sie in der Remix-Benutzeroberfläche zum Abschnitt `Deploy and run transactions`. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Wählen Sie unter environment die Option `Injected Web3` und unter `Account` die Adresse Ihres Indexierers aus. -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. Legen Sie die GraphToken-Vertragsadresse fest - Fügen Sie die GraphToken-Vertragsadresse (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) neben `At Address` ein und klicken Sie zum Anwenden auf die Schaltfläche `At address`. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. Rufen Sie die Funktion `approve(spender, amount)` auf, um den Einsatzvertrag zu genehmigen. Geben Sie in `spender` die Adresse des Einsatzvertrags (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) und in `amount` die zu setzenden Token (in wei) ein. -#### Stake tokens +#### Stake-Token -1.
Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Öffnen Sie die [Remix-App](https://remix.ethereum.org/) in einem Browser -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. Erstellen Sie im `File Explorer` eine Datei mit dem Namen **Staking.abi** mit dem Staking-ABI. -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Wählen Sie die Datei `Staking.abi` aus und öffnen Sie sie im Editor. Wechseln Sie in der Remix-Benutzeroberfläche zum Abschnitt `Deploy and run transactions`. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Wählen Sie unter environment die Option `Injected Web3` und unter `Account` die Adresse Ihres Indexierers aus. -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. Legen Sie die Adresse des Staking-Vertrags fest - Fügen Sie die Adresse des Staking-Vertrags (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) neben `At Address` ein und klicken Sie auf die Schaltfläche `At address`, um sie anzuwenden. -6. Call `stake()` to stake GRT in the protocol. +6. Rufen Sie `stake()` auf, um GRT im Protokoll einzusetzen. -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7.
(Optional) Indexierer können eine andere Adresse als Operator für ihre Indexer-Infrastruktur genehmigen, um die Schlüssel, die die Gelder kontrollieren, von denen zu trennen, die alltägliche Aktionen wie die Zuweisung auf Subgraphen und die Bedienung (bezahlter) Abfragen durchführen. Um den Betreiber zu setzen, rufen Sie `setOperator()` mit der Betreiberadresse auf. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) Um die Verteilung von Belohnungen zu kontrollieren und Delegatoren strategisch anzulocken, können Indexierer ihre Delegationsparameter aktualisieren, indem sie ihren `indexingRewardCut` (Teile pro Million), `queryFeeCut` (Teile pro Million) und `cooldownBlocks` (Anzahl der Blöcke) aktualisieren. Dazu rufen Sie `setDelegationParameters()` auf. Das folgende Beispiel stellt den `queryFeeCut` so ein, dass 95% der Abfragerabatte an den Indexierer und 5% an die Delegatoren verteilt werden, stellt den `indexingRewardCut` so ein, dass 60% der Indexierungsbelohnungen an den Indexierer und 40% an die Delegatoren verteilt werden, und stellt die `cooldownBlocks`-Periode auf 500 Blöcke ein.
``` setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### Einstellung der Delegationsparameter -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. +Die Funktion `setDelegationParameters()` im [Staking Contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) ist für Indexierer von entscheidender Bedeutung, da sie es ihnen ermöglicht, Parameter zu setzen, die ihre Interaktion mit Delegatoren definieren und ihre Reward-Aufteilung und Delegationskapazität beeinflussen. -### How to set delegation parameters +### Festlegen der Delegationsparameter -To set the delegation parameters using Graph Explorer interface, follow these steps: +Gehen Sie wie folgt vor, um die Delegationsparameter über die Graph Explorer-Schnittstelle einzustellen: -1. Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. Connect the wallet you have as a signer. -4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. Navigieren Sie zu [Graph Explorer](https://thegraph.com/explorer/). +2. Verbinden Sie Ihre Wallet. Wählen Sie Multisig (z. B. Gnosis Safe) und dann Mainnet aus.
Hinweis: Sie müssen diesen Vorgang für Arbitrum One wiederholen. +3. Verbinden Sie die Wallet, die Sie als Unterzeichner haben. +4. Navigieren Sie zum Abschnitt 'Settings' und wählen Sie 'Delegation Parameters'. Diese Parameter sollten so konfiguriert werden, dass ein effektiver Anteil innerhalb des gewünschten Bereichs erreicht wird. Nach Eingabe der Werte in die vorgesehenen Eingabefelder berechnet die Schnittstelle automatisch den effektiven Anteil. Passen Sie diese Werte nach Bedarf an, um den gewünschten Prozentsatz des effektiven Anteils zu erreichen. +5. Übermitteln Sie die Transaktion an das Netzwerk. -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> Hinweis: Diese Transaktion muss von den Unterzeichnern der Multisig-Wallets bestätigt werden. -### The life of an allocation +### Die Lebensdauer einer Zuweisung -After being created by an Indexer a healthy allocation goes through two states. +Nachdem sie von einem Indexer erstellt wurde, durchläuft eine ordnungsgemäße Zuweisung zwei Zustände. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Aktiv** - Sobald eine Zuweisung Onchain erstellt wurde ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)), wird sie als **aktiv** betrachtet. Ein Teil des eigenen und/oder delegierten Einsatzes des Indexierers wird einem Subgraph-Einsatz zugewiesen, was ihm erlaubt, Rewards für die Indizierung zu beanspruchen und Abfragen für diesen Subgraph-Einsatz zu bedienen.
Der Indexierer-Agent verwaltet die Erstellung von Zuweisungen basierend auf den Indexierer-Regeln. -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). +- **Geschlossen** - Ein Indexierer kann eine Zuweisung schließen, sobald 1 Epoche vergangen ist ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) oder sein Indexierer-Agent schließt die Zuweisung automatisch nach der **maxAllocationEpochs** (derzeit 28 Tage). Wenn eine Zuweisung mit einem gültigen Indizierungsnachweis (POI) geschlossen wird, werden die Rewards für die Indizierung an den Indexierer und seine Delegatoren verteilt ([weitere Informationen](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexierern wird empfohlen, die Offchain-Synchronisierungsfunktionalität zu nutzen, um den Einsatz von Subgraphen mit dem Chainhead zu synchronisieren, bevor die Zuweisung Onchain erstellt wird. Diese Funktion ist besonders nützlich für Subgraphen, bei denen die Synchronisierung länger als 28 Epochen dauert oder die Gefahr eines unbestimmten Fehlers besteht.
diff --git a/website/src/pages/de/indexing/supported-network-requirements.mdx b/website/src/pages/de/indexing/supported-network-requirements.mdx index 72e36248f68c..a5f663f3db4a 100644 --- a/website/src/pages/de/indexing/supported-network-requirements.mdx +++ b/website/src/pages/de/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Unterstützte Netzwerkanforderungen | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Höhere Taktfrequenz im Vergleich zur Kernanzahl
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/de/indexing/tap.mdx b/website/src/pages/de/indexing/tap.mdx index 13fa3c754e0d..a3eec839d931 100644 --- a/website/src/pages/de/indexing/tap.mdx +++ b/website/src/pages/de/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP-Migrationsleitfaden +title: GraphTally Guide --- -Erfahren Sie mehr über das neue Zahlungssystem von The Graph, **Timeline Aggregation Protocol, TAP**. Dieses System bietet schnelle, effiziente Mikrotransaktionen mit minimiertem Vertrauen. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Überblick -[TAP] (https://docs.rs/tap_core/latest/tap_core/index.html) ist ein direkter Ersatz für das derzeitige Scalar-Zahlungssystem. Es bietet die folgenden Hauptfunktionen: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Effiziente Abwicklung von Mikrozahlungen. - Fügt den Onchain-Transaktionen und -Kosten eine weitere Ebene der Konsolidierung hinzu. - Ermöglicht den Indexern die Kontrolle über Eingänge und Zahlungen und garantiert die Bezahlung von Abfragen. - Es ermöglicht dezentralisierte, vertrauenslose Gateways und verbessert die Leistung des `indexer-service` für mehrere Absender. -## Besonderheiten +### Besonderheiten -TAP ermöglicht es einem Sender, mehrere Zahlungen an einen Empfänger zu leisten, **TAP Receipts**, der diese Zahlungen zu einer einzigen Zahlung zusammenfasst, einem **Receipt Aggregate Voucher**, auch bekannt als **RAV**. Diese aggregierte Zahlung kann dann auf der Blockchain verifiziert werden, wodurch sich die Anzahl der Transaktionen verringert und der Zahlungsvorgang vereinfacht wird. 
+GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. Für jede Abfrage sendet Ihnen das Gateway eine „signierte Quittung“, die in Ihrer Datenbank gespeichert wird. Dann werden diese Abfragen von einem „Tap-Agent“ durch eine Anfrage aggregiert. Anschließend erhalten Sie ein RAV. Sie können ein RAV aktualisieren, indem Sie es mit neueren Quittungen senden, wodurch ein neues RAV mit einem höheren Wert erzeugt wird. @@ -59,14 +59,14 @@ Solange Sie `tap-agent` und `indexer-agent` ausführen, wird alles automatisch a | Unterzeichner | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Anforderungen +### Voraussetzungen -Zusätzlich zu den typischen Anforderungen für den Betrieb eines Indexers benötigen Sie einen `tap-escrow-subgraph`-Endpunkt, um TAP-Aktualisierungen abzufragen. Sie können The Graph Network zur Abfrage verwenden oder sich selbst auf Ihrem `graph-node` hosten. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. 
-- [Graph TAP Arbitrum Sepolia subgraph (für The Graph Testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (für The Graph Mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (für The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (für The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Hinweis: `indexer-agent` übernimmt derzeit nicht die Indizierung dieses Subgraphen, wie es bei der Bereitstellung von Netzwerk-Subgraphen der Fall ist. Daher müssen Sie ihn manuell indizieren. +> Hinweis: `indexer-agent` übernimmt derzeit nicht die Indizierung dieses Subgraphen, wie es beim Einsatz von Subgraphen im Netzwerk der Fall ist. Infolgedessen müssen Sie ihn manuell indizieren. ## Migrationsleitfaden @@ -79,7 +79,7 @@ Die erforderliche Softwareversion finden Sie [hier](https://github.com/graphprot 1. **Indexer-Agent** - Folgen Sie dem [gleichen Prozess](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Geben Sie das neue Argument `--tap-subgraph-endpoint` an, um die neuen TAP-Codepfade zu aktivieren und die Einlösung von TAP-RAVs zu ermöglichen. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer-Service** @@ -104,8 +104,8 @@ Für eine minimale Konfiguration verwenden Sie die folgende Vorlage: # Einige der nachstehenden Konfigurationswerte sind globale Graphnetzwerkwerte, die Sie hier finden können: # # -# Pro-Tipp: Wenn Sie einige Werte aus der Umgebung in diese Konfiguration laden müssen, -# können Sie sie mit Umgebungsvariablen überschreiben. 
Als Datenbeispiel kann folgendes ersetzt werden +# Pro-Tipp: Wenn Sie einige Werte aus der Umgebung in diese Konfiguration laden müssen, können Sie +# sie mit Umgebungsvariablen überschreiben. Zum Beispiel kann das Folgende ersetzt werden # durch [PREFIX]_DATABASE_POSTGRESURL, wobei PREFIX `INDEXER_SERVICE` oder `TAP_AGENT` sein kann: # # [Datenbank] @@ -116,8 +116,8 @@ indexer_address = „0x1111111111111111111111111111111111111111“ operator_mnemonic = „celery smart tip orange scare van steel radio dragon joy alarm crane“ [database] -# Die URL der Postgres-Datenbank, die für die Indexer-Komponenten verwendet wird. Die gleiche Datenbank, -# die auch vom `indexer-agent` verwendet wird. Es wird erwartet, dass `indexer-agent` +# Die URL der Postgres-Datenbank, die für die Indexer-Komponenten verwendet wird. Die gleiche Datenbank, +# die vom `indexer-agent` verwendet wird. Es wird erwartet, dass `indexer-agent` # die notwendigen Tabellen erstellt. postgres_url = „postgres://postgres@postgres:5432/postgres“ @@ -128,18 +128,18 @@ query_url = „“ status_url = „“ [subgraphs.network] -# Abfrage-URL für den Graph Network Subgraph. +# Abfrage-URL für den Graph-Netzwerk-Subgraphen. query_url = „“ -# Optional, Einsatz, nach dem im lokalen `graph-node` gesucht wird, falls er lokal indiziert ist. -# Es wird empfohlen, den Subgraphen lokal zu indizieren. +# Optional, Einsatz, der im lokalen `graph-node` zu suchen ist, falls lokal indiziert. +# Die lokale Indizierung des Subgraphen wird empfohlen. # HINWEIS: Verwenden Sie nur `query_url` oder `deployment_id`. deployment_id = „Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa“ [subgraphs.escrow] -# Abfrage-URL für den Subgraphen „Escrow“. +# Abfrage-URL für den Escrow-Subgraphen. query_url = „“ -# Optional, Einsatz, nach dem im lokalen `graph-node` gesucht wird, falls er lokal indiziert ist. -# Es wird empfohlen, den Subgraphen lokal zu indizieren.
+# Optional, Einsatz für die Suche im lokalen `graph-node`, falls lokal indiziert. +# Die lokale Indizierung des Subgraphen wird empfohlen. # HINWEIS: Verwenden Sie nur `query_url` oder `deployment_id`. deployment_id = „Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa“ @@ -153,9 +153,9 @@ receipts_verifier_address = „0x2222222222222222222222222222222222222222“ # Spezifische Konfigurationen für tap-agent # ######################################## [tap] -# Dies ist die Höhe der Gebühren, die Sie bereit sind, zu einem bestimmten Zeitpunkt zu riskieren. Zum Beispiel, +# Dies ist die Höhe der Gebühren, die Sie bereit sind, zu einem bestimmten Zeitpunkt zu riskieren. Zum Beispiel, # wenn der Sender lange genug keine RAVs mehr liefert und die Gebühren diesen Betrag -# übersteigt, wird der Indexer-Service keine Anfragen mehr vom Absender annehmen +# übersteigen, wird der Indexer-Service keine Anfragen mehr vom Absender annehmen # bis die Gebühren aggregiert sind. # HINWEIS: Verwenden Sie Strings für dezimale Werte, um Rundungsfehler zu vermeiden. # z.B.: @@ -164,7 +164,7 @@ max_Betrag_willig_zu_verlieren_grt = 20 [tap.sender_aggregator_endpoints] # Key-Value aller Absender und ihrer Aggregator-Endpunkte -# Das folgende Datenbeispiel gilt für das E&N Testnet-Gateway. +# Dieses Beispiel gilt für das E&N Testnetz-Gateway. 0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = „https://tap-aggregator.network.thegraph.com“ ``` diff --git a/website/src/pages/de/indexing/tooling/graph-node.mdx b/website/src/pages/de/indexing/tooling/graph-node.mdx index ad1242d7c2b7..3c4cb903b165 100644 --- a/website/src/pages/de/indexing/tooling/graph-node.mdx +++ b/website/src/pages/de/indexing/tooling/graph-node.mdx @@ -1,40 +1,40 @@ --- -title: Graph Node +title: Graph-Knoten --- -Graph Node ist die Komponente, die Subgrafen indiziert und die resultierenden Daten zur Abfrage über eine GraphQL-API verfügbar macht.
Als solches ist es für den Indexer-Stack von zentraler Bedeutung, und der korrekte Betrieb des Graph-Knotens ist entscheidend für den Betrieb eines erfolgreichen Indexers. +Graph Node ist die Komponente, die Subgraphen indiziert und die daraus resultierenden Daten zur Abfrage über eine GraphQL-API bereitstellt. Als solche ist sie ein zentraler Bestandteil des Indexer-Stacks, und der korrekte Betrieb von Graph Node ist entscheidend für den erfolgreichen Betrieb eines Indexers. -This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). +Dies bietet einen kontextbezogenen Überblick über Graph Node und einige der erweiterten Optionen, die Indexern zur Verfügung stehen. Ausführliche Dokumentation und Anleitungen finden Sie im [Graph Node repository](https://github.com/graphprotocol/graph-node). -## Graph Node +## Graph-Knoten -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) ist die Referenzimplementierung für die Indizierung von Subgraphen auf The Graph Network, die Verbindung zu Blockchain-Clients, die Indizierung von Subgraphen und die Bereitstellung indizierter Daten für Abfragen. -Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).
+Graph Node (und der gesamte Indexer-Stack) kann sowohl auf Bare Metal als auch in einer Cloud-Umgebung betrieben werden. Diese Flexibilität der zentralen Indexer-Komponente ist entscheidend für die Robustheit von The Graph Protocol. Ebenso kann Graph Node [aus dem Quellcode gebaut](https://github.com/graphprotocol/graph-node) werden, oder Indexer können eines der [bereitgestellten Docker Images](https://hub.docker.com/r/graphprotocol/graph-node) verwenden. ### PostgreSQL-Datenbank -Der Hauptspeicher für den Graph-Knoten, hier werden Subgraf-Daten sowie Metadaten zu Subgrafen und Subgraf-unabhängige Netzwerkdaten wie Block-Cache und eth_call-Cache gespeichert. +Der Hauptspeicher für den Graph Node. Hier werden die Subgraph-Daten, Metadaten über Subgraphs und Subgraph-agnostische Netzwerkdaten wie der Block-Cache und der eth_call-Cache gespeichert. ### Netzwerk-Clients -In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. +Um ein Netzwerk zu indizieren, benötigt Graph Node Zugriff auf einen Netzwerk-Client über eine EVM-kompatible JSON-RPC-API. Dieser RPC kann sich mit einem einzelnen Client verbinden oder es könnte sich um ein komplexeres Setup handeln, das die Last auf mehrere verteilt. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+Während einige Subgraphen nur einen vollständigen Knoten benötigen, können einige Indizierungsfunktionen haben, die zusätzliche RPC-Funktionalität erfordern. Insbesondere Subgraphen, die `eth_calls` als Teil der Indizierung machen, benötigen einen Archivknoten, der [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898) unterstützt, und Subgraphen mit `callHandlers` oder `blockHandlers` mit einem `call`-Filter benötigen `trace_filter`-Unterstützung ([siehe Trace-Modul-Dokumentation hier](https://openethereum.github.io/JSONRPC-trace-module)). -**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). +**Network Firehoses** - ein Firehose ist ein gRPC-Dienst, der einen geordneten, aber forkfähigen Strom von Blöcken bereitstellt, der von den Kernentwicklern von The Graph entwickelt wurde, um eine performante Indexierung in großem Umfang zu unterstützen. Dies ist derzeit keine Voraussetzung für Indexer, aber Indexer werden ermutigt, sich mit dieser Technologie vertraut zu machen, bevor die volle Netzwerkunterstützung zur Verfügung steht. Erfahren Sie mehr über den Firehose [hier](https://firehose.streamingfast.io/). ### IPFS-Knoten -Subgraf-Bereitstellungsmetadaten werden im IPFS-Netzwerk gespeichert. Der Graph-Knoten greift hauptsächlich während der Subgraf-Bereitstellung auf den IPFS-Knoten zu, um das Subgraf-Manifest und alle verknüpften Dateien abzurufen. Netzwerk-Indexierer müssen keinen eigenen IPFS-Knoten hosten, ein IPFS-Knoten für das Netzwerk wird unter https://ipfs.network.thegraph.com gehostet. +Die Metadaten für den Einsatz von Subgraphen werden im IPFS-Netzwerk gespeichert. 
Der Graph Node greift während des Einsatzes von Subgraphen primär auf den IPFS-Knoten zu, um das Subgraphen-Manifest und alle verknüpften Dateien abzurufen. Netzwerkindizierer müssen keinen eigenen IPFS-Knoten hosten. Ein IPFS-Knoten für das Netzwerk wird auf https://ipfs.network.thegraph.com gehostet. ### Prometheus-Metrikserver Um Überwachung und Berichterstellung zu ermöglichen, kann Graph Node optional Metriken auf einem Prometheus-Metrikserver protokollieren. -### Getting started from source +### Erste Schritte mit dem Quellcode -#### Install prerequisites +#### Installieren Sie die Voraussetzungen - **Rust** @@ -42,15 +42,15 @@ Um Überwachung und Berichterstellung zu ermöglichen, kann Graph Node optional - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Zusätzliche Anforderungen für Ubuntu-Benutzer** - Um einen Graph Node unter Ubuntu zu betreiben, sind möglicherweise einige zusätzliche Pakete erforderlich. ```sh -sudo apt-get install -y clang libpq-dev libssl-dev pkg-config +sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### Konfiguration -1. Start a PostgreSQL database server +1. Starten Sie einen PostgreSQL-Datenbankserver ```sh initdb -D .postgres @@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Klonen Sie das [Graph Node](https://github.com/graphprotocol/graph-node)-Repo und bauen Sie den Quellcode mit `cargo build` -3. Now that all the dependencies are setup, start the Graph Node: +3.
Nachdem alle Abhängigkeiten eingerichtet sind, starten Sie den Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -71,35 +71,35 @@ cargo run -p graph-node --release -- \ ### Erste Schritte mit Kubernetes -A complete Kubernetes example configuration can be found in the [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s). +Eine vollständige Kubernetes-Beispielkonfiguration ist im [Indexer-Repository](https://github.com/graphprotocol/indexer/tree/main/k8s) zu finden. ### Ports Wenn es ausgeführt wird, stellt Graph Node die folgenden Ports zur Verfügung: -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | GraphQL HTTP Server
(für Subgraph-Abfragen) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(für Subgraphen-Abonnements) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(zum Verwalten von Deployments) | / | \--admin-port | - | +| 8030 | Subgraph-Indizierungsstatus-API | /graphql | \--index-node-port | - | +| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - | -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. +> **Wichtig**: Seien Sie vorsichtig damit, Ports öffentlich zugänglich zu machen - **Verwaltungsports** sollten unter Verschluss gehalten werden. Dies gilt auch für den JSON-RPC-Endpunkt von Graph Node. ## Erweiterte Graph-Knoten-Konfiguration -In seiner einfachsten Form kann Graph Node mit einer einzelnen Instanz von Graph Node, einer einzelnen PostgreSQL-Datenbank, einem IPFS-Knoten und den Netzwerk-Clients betrieben werden, die von den zu indizierenden Subgrafen benötigt werden. +In seiner einfachsten Form kann Graph Node mit einer einzelnen Instanz von Graph Node, einer einzelnen PostgreSQL-Datenbank, einem IPFS-Knoten und den Netzwerk-Clients betrieben werden, die für die zu indizierenden Subgraphen erforderlich sind. -This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. +Dieses Setup kann horizontal skaliert werden, indem mehrere Graph Nodes und mehrere Datenbanken zur Unterstützung dieser Graph Nodes hinzugefügt werden. Fortgeschrittene Benutzer möchten vielleicht einige der horizontalen Skalierungsmöglichkeiten von Graph Node sowie einige der erweiterten Konfigurationsoptionen über die Datei `config.toml` und die Umgebungsvariablen von Graph Node nutzen.
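Die Ports aus der obigen Tabelle lassen sich bei einem lokal laufenden Graph Node schnell überprüfen. Die folgende Skizze baut eine Beispielabfrage für die Indizierungsstatus-API auf Port 8030 zusammen; Host, Ports und die gewählten Felder sind Annahmen und müssen an das eigene Setup bzw. das tatsächliche Schema angepasst werden:

```sh
# Skizze: Beispielabfrage gegen die Indizierungsstatus-API (Port 8030).
# Die Feldauswahl ist eine Annahme; das vollständige Schema liegt im
# graph-node-Repository unter server/index-node/src/schema.graphql.
STATUS_QUERY='{ indexingStatuses { subgraph synced health chains { network latestBlock { number } } } }'

# Gegen einen laufenden Graph Node ausführen (hier auskommentiert):
# curl -s http://localhost:8030/graphql \
#   -H 'content-type: application/json' \
#   -d "{\"query\": \"$STATUS_QUERY\"}"

# Prometheus-Metriken auf Port 8040 (ebenfalls auskommentiert):
# curl -s http://localhost:8040/metrics

echo "$STATUS_QUERY"
```

Die Verwaltungsports (8020, 8030, 8040) sollten dabei, wie oben beschrieben, nicht öffentlich erreichbar sein.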
### `config.toml` -A [TOML](https://toml.io/en/) configuration file can be used to set more complex configurations than those exposed in the CLI. The location of the file is passed with the --config command line switch. +Eine [TOML](https://toml.io/en/)-Konfigurationsdatei kann verwendet werden, um komplexere Konfigurationen festzulegen, als sie über die CLI möglich sind. Der Speicherort der Datei wird mit dem Befehlszeilenschalter --config übergeben. > Bei Verwendung einer Konfigurationsdatei ist es nicht möglich, die Optionen --postgres-url, --postgres-secondary-hosts und --postgres-host-weights zu verwenden. -A minimal `config.toml` file can be provided; the following file is equivalent to using the --postgres-url command line option: +Eine minimale `config.toml`-Datei kann angegeben werden; die folgende Datei entspricht der Verwendung der Befehlszeilenoption --postgres-url: ```toml [store] @@ -110,47 +110,47 @@ connection="<.. postgres-url argument ..>" indexers = [ "<.. list of all indexing nodes ..>" ] ``` -Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). +Eine vollständige Dokumentation von `config.toml` findet sich in den [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). #### Mehrere Graph-Knoten -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Die Indizierung von Graph Node kann horizontal skaliert werden, indem mehrere Instanzen von Graph Node ausgeführt werden, um die Indizierung und Abfrage auf verschiedene Knoten aufzuteilen. Dies kann einfach durch die Ausführung von Graph Nodes erfolgen, die beim Start mit einer anderen `node_id` konfiguriert werden (z. B. in der Docker Compose-Datei). Diese kann dann in der Datei `config.toml` verwendet werden, um [dedizierte Abfrageknoten](#dedicated-query-nodes), [Block-Ingestoren](#dedicated-block-ingestion) und die Aufteilung von Subgraphen über Knoten mit [Einsatzregeln](#deployment-rules) zu spezifizieren. > Beachten Sie, dass mehrere Graph-Knoten so konfiguriert werden können, dass sie dieselbe Datenbank verwenden, die ihrerseits durch Sharding horizontal skaliert werden kann. #### Bereitstellungsregeln -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Bei mehreren Graph-Knoten ist es notwendig, den Einsatz von neuen Subgraphen zu verwalten, damit derselbe Subgraph nicht von zwei verschiedenen Knoten indiziert wird, was zu Kollisionen führen würde. Dies kann durch die Verwendung von Einsatzregeln geschehen, die auch angeben können, in welchem `shard` die Daten eines Subgraphen gespeichert werden sollen, wenn Datenbank-Sharding verwendet wird. Einsatzregeln können den Namen des Subgraphen und das Netzwerk, das der Einsatz indiziert, abgleichen, um eine Entscheidung zu treffen.
-Beispielkonfiguration für Bereitstellungsregeln: +Beispielkonfiguration für Einsatzregeln: ```toml [deployment] [[deployment.rule]] -match = { name = "(vip|important)/.*" } -shard = "vip" -indexers = [ "index_node_vip_0", "index_node_vip_1" ] +match = { name = "(vip|important)/.*" } +shard = "vip" +indexers = [ "index_node_vip_0", "index_node_vip_1" ] [[deployment.rule]] -match = { network = "kovan" } -# No shard, so we use the default shard called 'primary' -indexers = [ "index_node_kovan_0" ] +match = { network = "kovan" } +# Kein Shard, also verwenden wir den Standard-Shard namens 'primary' +indexers = [ "index_node_kovan_0" ] [[deployment.rule]] -match = { network = [ "xdai", "poa-core" ] } -indexers = [ "index_node_other_0" ] +match = { network = [ "xdai", "poa-core" ] } +indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches -shards = [ "sharda", "shardb" ] +# Es gibt kein 'match', also passt jeder Subgraph +shards = [ "sharda", "shardb" ] indexers = [ - "index_node_community_0", - "index_node_community_1", - "index_node_community_2", - "index_node_community_3", - "index_node_community_4", - "index_node_community_5" + "index_node_community_0", + "index_node_community_1", + "index_node_community_2", + "index_node_community_3", + "index_node_community_4", + "index_node_community_5" ] ``` -Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). +Lesen Sie mehr über die Einsatzregeln [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). #### Dedizierte Abfrageknoten @@ -167,11 +167,11 @@ Jeder Knoten, dessen --node-id mit dem regulären Ausdruck übereinstimmt, wird Für die meisten Anwendungsfälle reicht eine einzelne Postgres-Datenbank aus, um eine Graph-Node-Instanz zu unterstützen.
Wenn eine Graph-Node-Instanz aus einer einzelnen Postgres-Datenbank herauswächst, ist es möglich, die Speicherung der Daten des Graph-Nodes auf mehrere Postgres-Datenbanken aufzuteilen. Alle Datenbanken zusammen bilden den Speicher der Graph-Node-Instanz. Jede einzelne Datenbank wird als Shard bezeichnet. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards können verwendet werden, um Subgraph-Einsätze auf mehrere Datenbanken aufzuteilen, und mithilfe von Replikaten kann die Abfragelast auf mehrere Datenbanken verteilt werden. Dazu gehört auch die Konfiguration der Anzahl der verfügbaren Datenbankverbindungen, die jeder `graph-node` in seinem Verbindungspool für jede Datenbank vorhalten soll, was zunehmend wichtiger wird, je mehr Subgraphen indiziert werden. Sharding wird nützlich, wenn Ihre vorhandene Datenbank nicht mit der Last Schritt halten kann, die Graph Node ihr auferlegt, und wenn es nicht mehr möglich ist, die Datenbankgröße zu erhöhen. -> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> Im Allgemeinen ist es besser, eine einzelne Datenbank so groß wie möglich zu machen, bevor man mit Shards beginnt.
Eine Ausnahme ist, wenn der Abfrageverkehr sehr ungleichmäßig auf die Subgraphen verteilt ist; in solchen Situationen kann es sehr hilfreich sein, wenn die hochvolumigen Subgraphen in einem Shard und alles andere in einem anderen aufbewahrt wird, weil es dann wahrscheinlicher ist, dass die Daten für die hochvolumigen Subgraphen im db-internen Cache verbleiben und nicht durch Daten ersetzt werden, die von den niedrigvolumigen Subgraphen nicht so häufig benötigt werden. Was das Konfigurieren von Verbindungen betrifft, beginnen Sie mit max_connections in postgresql.conf, das auf 400 (oder vielleicht sogar 200) eingestellt ist, und sehen Sie sich die Prometheus-Metriken store_connection_wait_time_ms und store_connection_checkout_count an. Spürbare Wartezeiten (alles über 5 ms) sind ein Hinweis darauf, dass zu wenige Verbindungen verfügbar sind; hohe Wartezeiten werden auch dadurch verursacht, dass die Datenbank sehr ausgelastet ist (z. B. hohe CPU-Last). Wenn die Datenbank jedoch ansonsten stabil erscheint, weisen hohe Wartezeiten darauf hin, dass die Anzahl der Verbindungen erhöht werden muss. In der Konfiguration ist die Anzahl der Verbindungen, die jede Graph-Knoten-Instanz verwenden kann, eine Obergrenze, und der Graph-Knoten hält Verbindungen nicht offen, wenn er sie nicht benötigt. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Unterstützung mehrerer Netzwerke -Das Graph-Protokoll erhöht die Anzahl der Netzwerke, die für die Indizierung von Belohnungen unterstützt werden, und es gibt viele Subgraphen, die nicht unterstützte Netzwerke indizieren, die ein Indexer verarbeiten möchte. Die Datei `config.toml` ermöglicht eine ausdrucksstarke und flexible Konfiguration von: +Das Graph Protocol erhöht die Anzahl der Netzwerke, die für Indexing Rewards unterstützt werden, und es gibt viele Subgraphen, die noch nicht unterstützte Netzwerke indizieren und die ein Indexer gerne verarbeiten würde.
Die Datei `config.toml` ermöglicht eine ausdrucksstarke und flexible Konfiguration von: - Mehrere Netzwerke - Mehrere Anbieter pro Netzwerk (dies kann eine Aufteilung der Last auf Anbieter ermöglichen und kann auch die Konfiguration von vollständigen Knoten sowie Archivknoten ermöglichen, wobei Graph Node günstigere Anbieter bevorzugt, wenn eine bestimmte Arbeitslast dies zulässt). @@ -223,13 +223,13 @@ Benutzer, die ein skaliertes Indizierungs-Setup mit erweiterter Konfiguration be - Das Indexer-Repository hat eine [Beispiel-Kubernetes-Referenz](https://github.com/graphprotocol/indexer/tree/main/k8s) - [Launchpad](https://docs.graphops.xyz/launchpad/intro) ist ein Toolkit für den Betrieb eines Graph Protocol Indexer auf Kubernetes, das von GraphOps gepflegt wird. Es bietet eine Reihe von Helm-Diagrammen und eine CLI zur Verwaltung eines Graph-Node-Deployments. -### Managing Graph Node +### Verwaltung von Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Bei einem laufenden Graph Node (oder Graph Nodes!) besteht die Herausforderung darin, die eingesetzten Subgraphen über diese Nodes hinweg zu verwalten. Graph Node bietet eine Reihe von Tools, die bei der Verwaltung von Subgraphen helfen. #### Protokollierung -Die Protokolle von Graph Node können nützliche Informationen für die Debuggen und Optimierung von Graph Node und bestimmten Subgraphen liefern. Graph Node unterstützt verschiedene Log-Ebenen über die Umgebungsvariable `GRAPH_LOG`, mit den folgenden Ebenen: Fehler, Warnung, Info, Debug oder Trace. +Die Protokolle von Graph Node können nützliche Informationen zur Fehlersuche und Optimierung von Graph Node und bestimmten Subgraphen liefern. Graph Node unterstützt verschiedene Log-Ebenen über die Umgebungsvariable `GRAPH_LOG`, mit den folgenden Ebenen: error, warn, info, debug oder trace.
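Die Log-Ebenen werden als Umgebungsvariablen gesetzt, bevor Graph Node startet. Eine kleine Skizze (die auskommentierte cargo-Befehlszeile und die Platzhalter in spitzen Klammern sind Annahmen und müssen an das eigene Setup angepasst werden):

```sh
# Skizze: ausführliche Protokollierung für einen lokal gestarteten Graph Node.
# GRAPH_LOG akzeptiert: error, warn, info, debug oder trace.
export GRAPH_LOG=debug

# Anschließend Graph Node wie gewohnt starten, z. B.:
# cargo run -p graph-node --release -- \
#   --postgres-url postgresql://<USER>:<PASSWORD>@localhost:5432/graph-node \
#   --ethereum-rpc mainnet:<ETH_RPC_URL> \
#   --ipfs https://ipfs.network.thegraph.com

echo "GRAPH_LOG=$GRAPH_LOG"
```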
Wenn Sie außerdem `GRAPH_LOG_QUERY_TIMING` auf `gql` setzen, erhalten Sie mehr Details darüber, wie GraphQL-Abfragen ausgeführt werden (allerdings wird dadurch eine große Menge an Protokollen erzeugt). @@ -247,86 +247,86 @@ Der Befehl graphman ist in den offiziellen Containern enthalten, und Sie können Eine vollständige Dokumentation der `graphman`-Befehle ist im Graph Node Repository verfügbar. Siehe [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) im Graph Node `/docs` -### Working with subgraphs +### Arbeiten mit Subgraphen #### Indizierungsstatus-API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. +Die API für den Indizierungsstatus ist standardmäßig auf Port 8030/graphql verfügbar und bietet eine Reihe von Methoden zur Überprüfung des Indizierungsstatus für verschiedene Subgraphen, zur Überprüfung von Indizierungsnachweisen, zur Inspektion von Subgraphen-Features und mehr. Das vollständige Schema ist [hier](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql) verfügbar. -#### Indexing performance +#### Indizierungsleistung -There are three separate parts of the indexing process: +Es gibt drei separate Teile des Indizierungsprozesses: -- Fetching events of interest from the provider -- Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) -- Writing the resulting data to the store +- Abrufen relevanter Ereignisse vom Anbieter +- Verarbeiten der Ereignisse in ihrer Reihenfolge mit den entsprechenden Handlern (dies kann das Abfragen von Zustandsdaten über die Chain und das Abrufen von Daten aus dem Speicher beinhalten) +- Schreiben der Ergebnisdaten in den Speicher -These stages are pipelined (i.e.
they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +Diese Phasen sind in einer Pipeline angeordnet (d.h. sie können parallel ausgeführt werden), aber sie sind voneinander abhängig. Wenn die Indizierung von Subgraphen langsam ist, hängt die Ursache dafür von dem jeweiligen Subgraphen ab. -Common causes of indexing slowness: +Häufige Ursachen für eine langsame Indizierung: - Zeit, die benötigt wird, um relevante Ereignisse aus der Kette zu finden (insbesondere Call-Handler können langsam sein, da sie auf `trace_filter` angewiesen sind) - Durchführen einer großen Anzahl von „eth_calls“ als Teil von Handlern -- A large amount of store interaction during execution -- A large amount of data to save to the store -- A large number of events to process -- Slow database connection time, for crowded nodes -- The provider itself falling behind the chain head -- Slowness in fetching new receipts at the chain head from the provider +- Eine große Anzahl von Store-Interaktionen während der Ausführung +- Eine große Datenmenge, die im Speicher gespeichert werden soll +- Eine große Anzahl von Ereignissen, die verarbeitet werden müssen +- Lange Datenbankverbindungszeiten bei stark ausgelasteten Knoten +- Der Anbieter selbst fällt hinter den Chain Head zurück +- Langsames Abrufen neuer Transaktionsbelege (Receipts) am Chain Head vom Anbieter -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +Metriken zur Indizierung von Subgraphen können dabei helfen, die Ursache für die Langsamkeit der Indizierung zu ermitteln.
In einigen Fällen liegt das Problem am Subgraph selbst, in anderen Fällen können verbesserte Netzwerkanbieter, geringere Datenbankkonflikte und andere Konfigurationsverbesserungen die Indizierungsleistung deutlich verbessern. -#### Failed subgraphs +#### Fehlerhafte Subgraphen -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +Während der Indizierung können Subgraphen fehlschlagen, wenn sie auf unerwartete Daten stoßen, wenn eine Komponente nicht wie erwartet funktioniert oder wenn es einen Fehler in den Event-Handlern oder der Konfiguration gibt. Es gibt zwei allgemeine Arten von Fehlern: -- Deterministic failures: these are failures which will not be resolved with retries -- Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. +- Deterministische Fehler: Dies sind Fehler, die nicht durch Wiederholungsversuche behoben werden können +- Nicht deterministische Fehler: Diese können auf Probleme mit dem Anbieter oder auf einen unerwarteten Graph-Node-Fehler zurückzuführen sein. Wenn ein nicht deterministischer Fehler auftritt, wiederholt Graph Node die fehlgeschlagenen Handler mit im Laufe der Zeit wachsenden Wartezeiten (Backoff). -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In einigen Fällen kann ein Fehler durch den Indexer behoben werden (z. B.
wenn der Fehler darauf zurückzuführen ist, dass nicht die richtige Art von Anbieter vorhanden ist, kann durch Hinzufügen des erforderlichen Anbieters die Indizierung fortgesetzt werden). In anderen Fällen ist jedoch eine Änderung des Subgraph-Codes erforderlich. -> Deterministische Fehler werden als „endgültig“ betrachtet, wobei für den fehlgeschlagenen Block ein Indizierungsnachweis generiert wird, während nicht-deterministische Fehler nicht als solche betrachtet werden, da es dem Subgraph gelingen kann, „auszufallen“ und die Indizierung fortzusetzen. In einigen Fällen ist das nicht-deterministische Label falsch und der Subgraph wird den Fehler nie überwinden; solche Fehler sollten als Probleme im Graph Node Repository gemeldet werden. +> Deterministische Fehler werden als „endgültig“ betrachtet, wobei für den fehlgeschlagenen Block ein Indizierungsnachweis generiert wird, während nicht-deterministische Fehler nicht als solche betrachtet werden, da es dem Subgraphen gelingen kann, „nicht zu versagen“ und die Indizierung fortzusetzen. In einigen Fällen ist die nicht-deterministische Kennzeichnung falsch, und der Subgraph wird den Fehler nie überwinden; solche Fehler sollten als Probleme im Graph Node Repository gemeldet werden. -#### Block and call cache +#### Block- und Call-Cache -Graph Node speichert bestimmte Daten im Zwischenspeicher, um ein erneutes Abrufen vom Anbieter zu vermeiden. Blöcke werden zwischengespeichert, ebenso wie die Ergebnisse von `eth_calls` (letztere werden ab einem bestimmten Block zwischengespeichert). Diese Zwischenspeicherung kann die Indizierungsgeschwindigkeit bei der „Neusynchronisierung“ eines geringfügig veränderten Subgraphen drastisch erhöhen. +Graph Node speichert bestimmte Daten im Zwischenspeicher, um ein erneutes Abrufen vom Anbieter zu vermeiden. Blöcke werden zwischengespeichert, ebenso wie die Ergebnisse von `eth_calls` (letztere werden ab einem bestimmten Block zwischengespeichert).
Diese Zwischenspeicherung kann die Indizierungsgeschwindigkeit bei der „Neusynchronisierung“ eines leicht geänderten Subgraphen drastisch erhöhen. -Wenn jedoch ein Ethereum-Knoten über einen bestimmten Zeitraum falsche Daten geliefert hat, können diese in den Cache gelangen und zu falschen Daten oder fehlgeschlagenen Subgraphen führen. In diesem Fall können Indexer `graphman` verwenden, um den vergifteten Cache zu löschen und dann die betroffenen Subgraphen zurückzuspulen, die dann frische Daten von dem (hoffentlich) gesunden Anbieter abrufen. +Wenn jedoch ein Ethereum-Knoten über einen bestimmten Zeitraum falsche Daten geliefert hat, können diese in den Cache gelangen und zu falschen Daten oder fehlgeschlagenen Subgraphen führen. In diesem Fall können Indexer `graphman` verwenden, um den vergifteten Cache zu löschen und dann die betroffenen Subgraphen zurückzuspulen, die dann frische Daten von dem (hoffentlich) gesunden Anbieter abrufen. -If a block cache inconsistency is suspected, such as a tx receipt missing event: +Wenn eine Block-Cache-Inkonsistenz vermutet wird, z. B. weil ein Ereignis in einem Transaktionsbeleg fehlt: 1. `graphman chain list`, um den Namen der Kette zu finden. 2. `graphman chain check-blocks by-number ` prüft, ob der zwischengespeicherte Block mit dem Anbieter übereinstimmt, und löscht den Block aus dem Cache, wenn dies nicht der Fall ist. 1. Wenn es einen Unterschied gibt, kann es sicherer sein, den gesamten Cache mit `graphman chain truncate ` abzuschneiden. - 2. If the block matches the provider, then the issue can be debugged directly against the provider. + 2. Wenn der Block mit dem Anbieter übereinstimmt, kann das Problem direkt beim Anbieter gedebuggt werden. -#### Querying issues and errors +#### Probleme und Fehler bei Abfragen -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint.
If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Sobald ein Subgraph indiziert wurde, können Indexer erwarten, dass Abfragen über den dedizierten Abfrageendpunkt des Subgraphen bedient werden. Wenn der Indexer hofft, ein erhebliches Abfragevolumen zu bedienen, wird ein dedizierter Abfrageknoten empfohlen. Im Falle eines sehr hohen Abfragevolumens möchten Indexer möglicherweise Replikatshards konfigurieren, damit Abfragen den Indizierungsprozess nicht beeinträchtigen. -However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. +Aber selbst mit einem dedizierten Abfrageknoten und Replikaten kann die Ausführung bestimmter Abfragen lange dauern und in einigen Fällen die Speichernutzung erhöhen und die Abfragezeit für andere Benutzer negativ beeinflussen. -There is not one "silver bullet", but a range of tools for preventing, diagnosing and dealing with slow queries. +Es gibt nicht die eine Wunderwaffe, sondern eine Reihe von Tools zur Vorbeugung, Diagnose und Behandlung langsamer Abfragen. -##### Query caching +##### Abfrage-Caching Graph Node zwischenspeichert GraphQL-Abfragen standardmäßig, was die Datenbanklast erheblich reduzieren kann. Dies kann mit den Einstellungen `GRAPH_QUERY_CACHE_BLOCKS` und `GRAPH_QUERY_CACHE_MAX_MEM` weiter konfiguriert werden - lesen Sie mehr [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching). -##### Analysing queries +##### Analysieren von Abfragen -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow.
In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematische Abfragen treten meist auf zwei Arten auf. In einigen Fällen melden die Benutzer selbst, dass eine bestimmte Abfrage langsam ist. In diesem Fall besteht die Herausforderung darin, den Grund für die Langsamkeit zu diagnostizieren - ob es sich um ein allgemeines Problem oder um ein spezifisches Problem für diesen Subgraphen oder diese Abfrage handelt. Und dann natürlich, wenn möglich, das Problem zu beheben. -In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. +In anderen Fällen kann der Auslöser eine hohe Speicherauslastung auf einem Abfrageknoten sein. In diesem Fall besteht die Herausforderung darin, zuerst die Abfrage zu identifizieren, die das Problem verursacht. Indexer können [qlog](https://github.com/graphprotocol/qlog/) verwenden, um die Abfrageprotokolle von Graph Node zu verarbeiten und zusammenzufassen. `GRAPH_LOG_QUERY_TIMING` kann auch aktiviert werden, um langsame Abfragen zu identifizieren und zu debuggen. -Given a slow query, indexers have a few options. Of course they can alter their cost model, to significantly increase the cost of sending the problematic query. This may result in a reduction in the frequency of that query. However this often doesn't resolve the root cause of the issue. +Bei einer langsamen Abfrage haben Indexer einige Optionen. Natürlich können sie ihr Kostenmodell ändern, um die Kosten für das Senden der problematischen Anfrage erheblich zu erhöhen. Dies kann zu einer Verringerung der Häufigkeit dieser Abfrage führen. Dies behebt jedoch häufig nicht die Ursache des Problems.
-##### Account-like optimisation +##### Kontoähnliche Optimierung -Database tables that store entities seem to generally come in two varieties: 'transaction-like', where entities, once created, are never updated, i.e., they store something akin to a list of financial transactions, and 'account-like' where entities are updated very often, i.e., they store something like financial accounts that get modified every time a transaction is recorded. Account-like tables are characterized by the fact that they contain a large number of entity versions, but relatively few distinct entities. Often, in such tables the number of distinct entities is 1% of the total number of rows (entity versions) +Datenbanktabellen, die Entitäten speichern, scheinen im Allgemeinen in zwei Varianten zu existieren: „transaktionsähnlich“, bei denen Entitäten, sobald sie erstellt wurden, nie aktualisiert werden, d. h. sie speichern so etwas wie eine Liste von Finanztransaktionen, und „kontoähnlich“, bei denen Entitäten sehr oft aktualisiert werden, d. h. sie speichern so etwas wie Finanzkonten, die jedes Mal geändert werden, wenn eine Transaktion aufgezeichnet wird. Kontoähnliche Tabellen zeichnen sich dadurch aus, dass sie eine große Anzahl von Entitätsversionen, aber relativ wenige eindeutige Entitäten enthalten. In solchen Tabellen beträgt die Anzahl der unterschiedlichen Entitäten häufig 1 % der Gesamtzahl der Zeilen (Entitätsversionen). Für kontoähnliche Tabellen kann `graph-node` Abfragen generieren, die sich die Details zunutze machen, wie Postgres Daten mit einer so hohen Änderungsrate speichert, nämlich dass alle Versionen für die jüngsten Blöcke in einem kleinen Teil des Gesamtspeichers für eine solche Tabelle liegen. @@ -336,10 +336,10 @@ Im Allgemeinen sind Tabellen, bei denen die Anzahl der unterschiedlichen Entitä Sobald eine Tabelle als „kontoähnlich“ eingestuft wurde, wird durch die Ausführung von `graphman stats account-like .
` die kontoähnliche Optimierung für Abfragen auf diese Tabelle aktiviert. Die Optimierung kann mit `graphman stats account-like --clear .
` wieder ausgeschaltet werden. Es dauert bis zu 5 Minuten, bis die Abfrageknoten merken, dass die Optimierung ein- oder ausgeschaltet wurde. Nach dem Einschalten der Optimierung muss überprüft werden, ob die Abfragen für diese Tabelle durch die Änderung nicht tatsächlich langsamer werden. Wenn Sie Grafana für die Überwachung von Postgres konfiguriert haben, würden langsame Abfragen in `pg_stat_activity` in großer Zahl angezeigt werden und mehrere Sekunden dauern. In diesem Fall muss die Optimierung wieder abgeschaltet werden. -Bei Uniswap-ähnlichen Subgraphen sind die `pair`- und `token`-Tabellen die Hauptkandidaten für diese Optimierung und können die Datenbankauslastung erheblich beeinflussen. +Bei Uniswap-ähnlichen Subgraphen sind die `pair`- und `token`-Tabellen die Hauptkandidaten für diese Optimierung und können die Datenbanklast drastisch beeinflussen. -#### Removing subgraphs +#### Entfernen von Subgraphen > This is new functionality, which will be available in Graph Node 0.29.x -Irgendwann möchte ein Indexer vielleicht einen bestimmten Subgraph entfernen. Das kann einfach mit `graphman drop` gemacht werden, das einen Einsatz und alle indizierten Daten löscht. Der Einsatz kann entweder als Subgraph-Name, als IPFS-Hash `Qm..` oder als Datenbank-Namensraum `sgdNNN` angegeben werden. Weitere Dokumentation ist [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) verfügbar. +Irgendwann möchte ein Indexer vielleicht einen bestimmten Subgraphen entfernen. Dies kann einfach mit `graphman drop` gemacht werden, welches einen Einsatz und alle indizierten Daten löscht. Der Einsatz kann entweder als Name eines Subgraphen, als IPFS-Hash `Qm..` oder als Datenbank-Namensraum `sgdNNN` angegeben werden. Weitere Dokumentation ist [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) verfügbar.
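Ein Aufruf von `graphman drop` kann z. B. so aussehen. Dies ist nur eine Skizze: Der Subgraph-Name, der Namensraum und der Pfad zur Konfigurationsdatei sind frei gewählte Beispiel-Annahmen, und der eigentliche Befehl ist auskommentiert, da er destruktiv ist:

```sh
# Skizze: einen Einsatz samt indizierter Daten mit graphman entfernen.
# graphman verwendet dieselbe Konfigurationsdatei wie graph-node;
# der Einsatz kann als Subgraph-Name, IPFS-Hash (Qm..) oder
# Datenbank-Namensraum (sgdNNN) angegeben werden.
DEPLOYMENT="sgd42"   # Beispiel-Annahme

# Destruktiv – löscht den Einsatz und alle indizierten Daten:
# graphman --config /etc/graph-node/config.toml drop "$DEPLOYMENT"

echo "Würde Einsatz $DEPLOYMENT entfernen"
```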
diff --git a/website/src/pages/de/resources/_meta-titles.json b/website/src/pages/de/resources/_meta-titles.json index f5971e95a8f6..5ef7fded48f6 100644 --- a/website/src/pages/de/resources/_meta-titles.json +++ b/website/src/pages/de/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "Zusätzliche Rollen", + "migration-guides": "Leitfäden zur Migration" } diff --git a/website/src/pages/de/resources/benefits.mdx b/website/src/pages/de/resources/benefits.mdx index 24c816c0784e..414897ac5365 100644 --- a/website/src/pages/de/resources/benefits.mdx +++ b/website/src/pages/de/resources/benefits.mdx @@ -34,7 +34,7 @@ Die Abfragekosten können variieren; die angegebenen Kosten sind der Durchschnit | Entwicklungszeit | $400 pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern | | Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | 100.000 (kostenloser Plan) | | Kosten pro Abfrage | $0 | $0 | -| Infrastructure | Zentralisiert | Dezentralisiert | +| Infrastruktur | Zentralisiert | Dezentralisiert | | Geografische Redundanz | $750+ pro zusätzlichem Knoten | Eingeschlossen | | Betriebszeit | Variiert | 99.9%+ | | Monatliche Gesamtkosten | $750+ | $0 | @@ -48,7 +48,7 @@ Die Abfragekosten können variieren; die angegebenen Kosten sind der Durchschnit | Entwicklungszeit | $800 pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern | | Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | ~3,000,000 | | Kosten pro Abfrage | $0 | $0.00004 | -| Infrastructure | Zentralisiert | Dezentralisiert | +| Infrastruktur | Zentralisiert | Dezentralisiert | | Engineering-Kosten | $200 pro Stunde | Eingeschlossen | | Geografische Redundanz | $1,200 Gesamtkosten pro zusätzlichem Knoten | Eingeschlossen | | Betriebszeit | Variiert | 99.9%+ | @@ -64,7 +64,7 @@ Die Abfragekosten können variieren; die angegebenen Kosten sind der Durchschnit | 
Entwicklungszeit | $6,000 oder mehr pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern | | Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | ~30,000,000 | | Kosten pro Abfrage | $0 | $0.00004 | -| Infrastructure | Zentralisiert | Dezentralisiert | +| Infrastruktur | Zentralisiert | Dezentralisiert | | Geografische Redundanz | $1,200 Gesamtkosten pro zusätzlichem Knoten | Eingeschlossen | | Betriebszeit | Variiert | 99.9%+ | | Monatliche Gesamtkosten | $11,000+ | $1,200 | @@ -90,4 +90,4 @@ Das dezentralisierte Netzwerk von The Graph bietet den Nutzern Zugang zu einer g Unterm Strich: Das The Graph Network ist kostengünstiger, einfacher zu benutzen und liefert bessere Ergebnisse als ein lokaler `graph-node`. -Beginnen Sie noch heute mit der Nutzung von The Graph Network und erfahren Sie, wie Sie [Ihren Subgraphут im dezentralen Netzwerk von The Graph veröffentlichen](/subgraphs/quick-start/). +Beginnen Sie noch heute mit der Nutzung von The Graph Network und erfahren Sie, wie Sie [Ihren Subgraphen im dezentralen Netzwerk von The Graph veröffentlichen](/subgraphs/quick-start/). diff --git a/website/src/pages/de/resources/glossary.mdx b/website/src/pages/de/resources/glossary.mdx index ffcd4bca2eed..921c1f6225ae 100644 --- a/website/src/pages/de/resources/glossary.mdx +++ b/website/src/pages/de/resources/glossary.mdx @@ -1,83 +1,83 @@ --- -title: Glossary +title: Glossar --- -- **The Graph**: A decentralized protocol for indexing and querying data. +- **The Graph**: Ein dezentrales Protokoll zur Indizierung und Abfrage von Daten. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Abfrage**: Eine Anfrage nach Daten. Im Fall von The Graph ist eine Abfrage eine Anfrage nach Daten aus einem Subgraphen, die von einem Indexierer beantwortet wird. 
-- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: Eine Abfragesprache für APIs und eine Laufzeitumgebung, um diese Abfragen mit Ihren vorhandenen Daten zu erfüllen. The Graph verwendet GraphQL, um Subgraphen abzufragen. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpunkt**: Eine URL, die zur Abfrage eines Subgraphen verwendet werden kann. Der Test-Endpunkt für Subgraph Studio ist `https://api.studio.thegraph.com/query///` und der Graph Explorer Endpunkt ist `https://gateway.thegraph.com/api//subgraphs/id/`. Der The Graph Explorer Endpunkt wird verwendet, um Subgraphen im dezentralen Netzwerk von The Graph abzufragen. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: Eine offene API, die Daten aus einer Blockchain extrahiert, verarbeitet und so speichert, dass sie einfach über GraphQL abgefragt werden können. Entwickler können einen Subgraphen erstellen, bereitstellen und auf The Graph Network veröffentlichen. Sobald der Subgraph indiziert ist, kann er von jedem abgefragt werden. -- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexierer**: Netzwerkteilnehmer, die Indexierungsknoten betreiben, um Daten aus Blockchains zu indexieren und GraphQL-Abfragen zu bedienen. 
-- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. +- **Einkommensströme für Indexierer**: Indexierer werden in GRT mit zwei Komponenten belohnt: Rabatte auf Abfragegebühren und Rewards für die Indizierung. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Abfragegebühren-Rabatte**: Zahlungen von Subgraph-Konsumenten für die Bedienung von Anfragen im Netzwerk. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indizierungs-Rewards**: Die Rewards, die Indexierer für die Indizierung von Subgraphen erhalten. Indizierungs-Rewards werden durch die Neuausgabe von 3% GRT jährlich generiert. -- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. +- **Selbstbeteiligung der Indexierer**: Der Betrag an GRT, den Indexierer einsetzen, um am dezentralen Netzwerk teilzunehmen. Das Minimum beträgt 100.000 GRT, eine Obergrenze gibt es nicht. -- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. +- **Delegationskapazität**: Die maximale Menge an GRT, die ein Indexierer von Delegatoren annehmen kann. Indexierer können nur bis zum 16-fachen ihrer Selbstbeteiligung akzeptieren, und zusätzliche Delegationen führen zu verwässerten Rewards. Zum Beispiel: Wenn ein Indexierer eine Selbstbeteiligung von 1 Mio. GRT hat, beträgt seine Delegationskapazität 16 Mio. GRT.
Indexierer können jedoch ihre Delegationskapazität erhöhen, indem sie ihre Selbstbeteiligung erhöhen. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade-Indexierer**: Ein Indexierer, der als Fallback für Subgraph-Abfragen dient, die nicht von anderen Indexierern im Netzwerk bedient werden. Der Upgrade-Indexierer ist nicht konkurrenzfähig mit anderen Indexierern. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Netzwerkteilnehmer, die GRT besitzen und ihre GRT an Indexierer delegieren. Dies erlaubt es Indexierern, ihre Beteiligung an Subgraphen im Netzwerk zu erhöhen. Im Gegenzug erhalten die Delegatoren einen Teil der Indizierungs-Rewards, die Indexierer für die Verarbeitung von Subgraphen erhalten. -- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. +- **Delegationssteuer**: Eine 0,5%ige Gebühr, die von Delegatoren gezahlt wird, wenn sie GRT an Indexierer delegieren. Die GRT, die zur Zahlung der Gebühr verwendet werden, werden verbrannt. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Kurator**: Netzwerkteilnehmer, die hochwertige Subgraphen identifizieren und im Gegenzug für Kurationsanteile GRT auf ihnen signalisieren.
Wenn Indexierer Abfragegebühren für einen Subgraphen beanspruchen, werden 10% an die Kuratoren dieses Subgraphen verteilt. Es gibt eine positive Korrelation zwischen der Menge der signalisierten GRT und der Anzahl der Indexierer, die einen Subgraphen indizieren. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Kuratierungssteuer**: Eine 1%ige Gebühr, die von Kuratoren bezahlt wird, wenn sie GRT auf Subgraphen signalisieren. Die GRT, die zur Zahlung der Gebühr verwendet werden, werden verbrannt. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Datenverbraucher**: Jede Anwendung oder jeder Benutzer, die bzw. der einen Subgraphen abfragt. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: Ein Entwickler, der einen Subgraphen für das dezentrale Netzwerk von The Graph erstellt und bereitstellt. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. -- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. +- **Epoche**: Eine Zeiteinheit innerhalb des Netzes. Derzeit entspricht eine Epoche 6.646 Blöcken oder etwa 1 Tag. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network.
Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Aktiv**: Eine Zuordnung gilt als aktiv, wenn sie onchain erstellt wird. Dies wird als Öffnen einer Zuordnung bezeichnet und zeigt dem Netzwerk an, dass der Indexierer aktiv indiziert und Abfragen für einen bestimmten Subgraphen bedient. Aktive Zuweisungen sammeln Rewards für die Indizierung, die proportional zum Signal auf dem Subgraphen und der Menge des zugewiesenen GRT sind. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Geschlossen**: Ein Indexierer kann die aufgelaufenen Rewards für einen bestimmten Subgraphen beanspruchen, indem er einen aktuellen und gültigen Proof of Indexing (POI) einreicht. Dies wird als Schließen einer Zuordnung bezeichnet. Eine Zuordnung muss mindestens eine Epoche lang offen gewesen sein, bevor sie geschlossen werden kann. Die maximale Zuordnungsdauer beträgt 28 Epochen. Lässt ein Indexierer eine Zuordnung länger als 28 Epochen offen, wird sie als veraltete Zuordnung bezeichnet. 
Wenn sich eine Zuordnung im Zustand **Geschlossen** befindet, kann ein Fischer immer noch einen Disput eröffnen, um einen Indexierer wegen der Bereitstellung falscher Daten anzufechten. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: Eine leistungsstarke dApp zum Erstellen, Bereitstellen und Veröffentlichen von Subgraphen. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fischer**: Eine Rolle innerhalb des The Graph Network, die von Teilnehmern eingenommen wird, die die Genauigkeit und Integrität der von Indexierern gelieferten Daten überwachen. Wenn ein Fischer eine Abfrage-Antwort oder einen POI identifiziert, den er für falsch hält, kann er einen Disput gegen den Indexierer einleiten. Wenn der Streitfall zu Gunsten des Fischers entschieden wird, verliert der Indexierer 2,5 % seines Eigenanteils. Von diesem Betrag erhält der Fischer 50 % als Belohnung für seine Wachsamkeit, und die restlichen 50 % werden aus dem Verkehr gezogen (verbrannt). Dieser Mechanismus soll die Fischer dazu ermutigen, zur Zuverlässigkeit des Netzwerks beizutragen, indem sichergestellt wird, dass die Indexierer für die von ihnen gelieferten Daten verantwortlich gemacht werden. -- **Arbitrators**: Arbitrators are network participants appointed through a governance process.
The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. +- **Schlichter**: Schlichter sind Netzwerkteilnehmer, die im Rahmen eines Governance-Prozesses ernannt werden. Die Rolle des Schlichters besteht darin, über den Ausgang von Streitigkeiten bei Indizierungen und Abfragen zu entscheiden. Ihr Ziel ist es, den Nutzen und die Zuverlässigkeit von The Graph Network zu maximieren. -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. +- **Slashing**: Indexierer können für die Bereitstellung eines falschen POI oder für die Bereitstellung ungenauer Daten um einen Teil ihrer selbst eingesetzten GRT gekürzt werden. Der Prozentsatz des Slashings ist ein Protokollparameter, der derzeit auf 2,5% des Eigenanteils eines Indexierers festgelegt ist. 50 % der gekürzten GRT gehen an den Fischer, der die ungenauen Daten oder den falschen POI bestritten hat. Die anderen 50% werden verbrannt. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. -- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. +- **Delegation Rewards**: Die Rewards, die Delegatoren für die Delegierung von GRT an Indexierer erhalten. Delegations-Rewards werden in GRT verteilt. -- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. +- **GRT**: Der Utility-Token von The Graph.
GRT bietet den Netzwerkteilnehmern wirtschaftliche Anreize für ihren Beitrag zum Netzwerk. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. +- **The Graph Client**: Eine Bibliothek für den Aufbau von GraphQL-basierten Dapps auf dezentralisierte Weise. 
-- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. -- **Graph CLI**: A command line interface tool for building and deploying to The Graph. +- **Graph CLI**: Ein Command-Line-Interface-Tool (CLI) zum Erstellen von Subgraphen und deren Bereitstellung auf The Graph. -- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. +- **Abkühlphase**: Die Zeit, die verbleibt, bis ein Indexierer, der seine Delegationsparameter geändert hat, dies wieder tun kann. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrieren**: Der Prozess, bei dem Kurationsanteile von einer alten Version eines Subgraphen auf eine neue Version eines Subgraphen übertragen werden (z. B. wenn v0.0.1 auf v0.0.2 aktualisiert wird).
diff --git a/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx index d5ffa00d0e1f..0508b5db3baf 100644 --- a/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -1,18 +1,18 @@ --- -title: AssemblyScript Migration Guide +title: AssemblyScript-Migrationsleitfaden --- Bis jetzt haben Subgraphen eine der [ersten Versionen von AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6) verwendet. Endlich haben wir Unterstützung für die [neueste verfügbare Version](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) hinzugefügt! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +Dies ermöglicht es den Entwicklern von Subgrafen, neuere Funktionen der AS-Sprache und der Standardbibliothek zu nutzen. Diese Anleitung gilt für alle, die `graph-cli`/`graph-ts` unter Version `0.22.0` verwenden. Wenn Sie bereits eine höhere (oder gleiche) Version als diese haben, haben Sie bereits Version `0.19.10` von AssemblyScript verwendet 🙂 > Anmerkung: Ab `0.24.0` kann `graph-node` beide Versionen unterstützen, abhängig von der im Subgraph-Manifest angegebenen `apiVersion`. 
-## Features +## Besonderheiten -### New functionality +### Neue Funktionalität - `TypedArray` kann nun aus `ArrayBuffer` mit Hilfe der [neuen statischen Methode `wrap`](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) erstellt werden - Neue Standard-Bibliotheksfunktionen: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`und `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) @@ -30,39 +30,39 @@ Diese Anleitung gilt für alle, die `graph-cli`/`graph-ts` unter Version `0.22.0 - Hinzufügen von `toUTCString` für `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) - Hinzufügen von `nonnull/NonNullable` integrierten Typ ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) -### Optimizations +### Optimierungen - `Math`-Funktionen wie `exp`, `exp2`, `log`, `log2` und `pow` wurden durch schnellere Varianten ersetzt ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - Leicht optimierte `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) - Mehr Feldzugriffe in std Map und Set zwischengespeichert ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) - Optimieren für Zweierpotenzen in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -### Other +### Sonstiges - Der Typ eines Array-Literal kann nun aus seinem Inhalt abgeleitet werden ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - stdlib auf Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) aktualisiert -## How to upgrade? +## Wie kann man upgraden? -1. Ändern Sie Ihre Mappings `apiVersion` in `subgraph.yaml` auf `0.0.6`: +1. 
Ändern Sie die `apiVersion` Ihrer Mappings in `subgraph.yaml` auf `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` 2. Aktualisieren Sie die `graph-cli`, die Sie verwenden, auf die `latest` Version, indem Sie sie ausführen: ```bash -# if you have it globally installed +# wenn es global installiert ist npm install --global @graphprotocol/graph-cli@latest -# or in your subgraph if you have it as a dev dependency +# oder in Ihrem Subgrafen, wenn Sie es als Entwicklerabhängigkeit haben npm install --save-dev @graphprotocol/graph-cli@latest ``` @@ -72,14 +72,14 @@ npm install --save-dev @graphprotocol/graph-cli@latest npm install --save @graphprotocol/graph-ts@latest ``` -4. Follow the rest of the guide to fix the language breaking changes. +4. Befolgen Sie den Rest der Anleitung, um die Breaking Changes der Sprache zu beheben. 5. Führen Sie `codegen` und `deploy` erneut aus. -## Breaking changes +## Breaking Changes -### Nullability +### Nullbarkeit -On the older version of AssemblyScript, you could create code like this: +In der älteren Version von AssemblyScript konnten Sie Code wie diesen erstellen: ```typescript function load(): Value | null { ... } @@ -88,7 +88,7 @@ let maybeValue = load(); maybeValue.aMethod(); ``` -However on the newer version, because the value is nullable, it requires you to check, like this: +Da der Wert in der neueren Version jedoch nullbar ist, müssen Sie dies wie folgt überprüfen: ```typescript let maybeValue = load() @@ -98,17 +98,17 @@ if (maybeValue) { } ``` -Or force it like this: +Oder erzwingen Sie es wie folgt: ```typescript -let maybeValue = load()! // breaks in runtime if value is null +let maybeValue = load()! // bricht zur Laufzeit ab, wenn der Wert null ist maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version.
If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +Wenn Sie unsicher sind, welche Sie wählen sollen, empfehlen wir Ihnen, immer die sichere Variante zu verwenden. Wenn der Wert nicht vorhanden ist, sollten Sie einfach eine frühe if-Anweisung mit einem Return in Ihrem Subgraf-Handler ausführen. -### Variable Shadowing +### Variablen-Shadowing Früher konnte man [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) machen und Code wie dieser würde funktionieren: @@ -118,7 +118,7 @@ let b = 20 let a = a + b ``` -However now this isn't possible anymore, and the compiler returns this error: +Jetzt ist dies jedoch nicht mehr möglich und der Compiler gibt diesen Fehler zurück: ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -128,11 +128,11 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' in assembly/index.ts(4,3) ``` -You'll need to rename your duplicate variables if you had variable shadowing. +Sie müssen Ihre doppelten Variablen umbenennen, wenn Sie Variablen-Shadowing verwendet haben. -### Null Comparisons +### Null-Vergleiche -By doing the upgrade on your subgraph, sometimes you might get errors like these: +Wenn Sie das Upgrade für Ihren Subgrafen durchführen, können manchmal Fehler wie diese auftreten: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -151,7 +151,7 @@ Zur Lösung des Problems können Sie die `if`-Anweisung einfach wie folgt änder if (decimals === null) { ``` -The same applies if you're doing != instead of ==. +Dasselbe gilt, wenn Sie != statt == verwenden.
### Casting @@ -162,15 +162,15 @@ let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` -However this only works in two scenarios: +Dies funktioniert jedoch nur in zwei Szenarien: - Primitives Casting (zwischen Typen wie `u8`, `i32`, `bool`; z. B.: `let b: isize = 10; b as usize`); -- Upcasting on class inheritance (subclass → superclass) +- Upcasting bei der Klassenvererbung (subclass → superclass) Beispiele: ```typescript -// primitive casting +// primitives Casting let a: usize = 10 let b: isize = 5 let c: usize = a + (b as usize) @@ -186,8 +186,8 @@ let bytes = new Bytes(2) Es gibt zwei Szenarien, in denen man casten möchte, aber die Verwendung von `as`/`var` **ist nicht sicher**: -- Downcasting on class inheritance (superclass → subclass) -- Between two types that share a superclass +- Downcasting bei der Klassenvererbung (superclass → subclass) +- Zwischen zwei Typen, die eine gemeinsame Oberklasse haben ```typescript // Downcasting bei Klassenvererbung @@ -228,11 +228,11 @@ changetype(bytes) // funktioniert :) Wenn Sie nur die Nullbarkeit entfernen wollen, können Sie weiterhin den `as`-Operator (oder `variable`) verwenden, aber stellen Sie sicher, dass Sie wissen, dass der Wert nicht Null sein kann, sonst bricht es. 
```typescript -// remove nullability +// die NULL-Zulässigkeit entfernen let previousBalance = AccountBalance.load(balanceId) // AccountBalance | null if (previousBalance != null) { - return previousBalance as AccountBalance // safe remove null + return previousBalance as AccountBalance // die NULL-Zulässigkeit sicher entfernen } let newBalance = new AccountBalance(balanceId) @@ -240,14 +240,14 @@ let newBalance = new AccountBalance(balanceId) Für den Fall der Nullbarkeit empfehlen wir, einen Blick auf die [Nullability-Check-Funktion] (https://www.assemblyscript.org/basics.html#nullability-checks) zu werfen, sie wird Ihren Code sauberer machen 🙂 -Also we've added a few more static methods in some types to ease casting, they are: +Außerdem haben wir ein paar weitere statische Methoden in einigen Typen hinzugefügt, um das Casting zu erleichtern: - Bytes.fromByteArray - Bytes.fromUint8Array - BigInt.fromByteArray - ByteArray.fromBigInt -### Nullability check with property access +### Nullbarkeitsprüfung mit Eigenschaftszugriff Um die [Nullability-Check-Funktion] (https://www.assemblyscript.org/basics.html#nullability-checks) zu verwenden, können Sie entweder `if`-Anweisungen oder den ternären Operator (`?` und `:`) wie folgt verwenden: @@ -277,10 +277,10 @@ class Container { let container = new Container() container.data = 'data' -let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile +let somethingOrElse: string = container.data ? container.data : 'else' // lässt sich nicht kompilieren ``` -Which outputs this error: +Das gibt folgenden Fehler aus: ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. @@ -301,12 +301,12 @@ container.data = 'data' let data = container.data -let somethingOrElse: string = data ? data : 'else' // compiles just fine :) +let somethingOrElse: string = data ? 
data : 'else' // lässt sich prima kompilieren :) ``` -### Operator overloading with property access +### Operator-Überladung mit Eigenschaftszugriff -If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. +Wenn Sie versuchen, (z.B.) einen Typ, der NULL-Werte zulässt (aus einem Eigenschaftszugriff), mit einem Typ zu summieren, der keine NULL-Werte zulässt, gibt der AssemblyScript-Compiler keinen Kompilierzeitfehler aus, der davor warnt, dass einer der Werte NULL sein könnte, sondern kompiliert stillschweigend, so dass der Code zur Laufzeit fehlschlagen kann. ```typescript class BigInt extends Uint8Array { @@ -323,14 +323,14 @@ class Wrapper { let x = BigInt.fromI32(2) let y: BigInt | null = null -x + y // give compile time error about nullability +x + y // gibt Kompilierzeitfehler über die Nullbarkeit let wrapper = new Wrapper(y) -wrapper.n = wrapper.n + x // doesn't give compile time errors as it should +wrapper.n = wrapper.n + x // gibt keine Kompilierzeitfehler, wie es sollte ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +Wir haben dafür ein Issue beim AssemblyScript-Compiler eröffnet. Wenn Sie diese Art von Operationen jedoch in Ihren Subgraf-Mappings ausführen, sollten Sie sie so ändern, dass vorher eine Nullprüfung durchgeführt wird.
```typescript
let wrapper = new Wrapper(y)

@@ -339,12 +339,12 @@ if (!wrapper.n) {
  wrapper.n = BigInt.fromI32(0)
}

-wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt
+wrapper.n = wrapper.n + x // jetzt ist `n` garantiert ein BigInt
```

-### Value initialization
+### Wert-Initialisierung

-If you have any code like this:
+Wenn Sie einen Code wie diesen haben:

```typescript
var value: Type // null
@@ -352,7 +352,7 @@ value.x = 10
value.y = 'content'
```

-It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this:
+Es wird zwar kompiliert, bricht aber zur Laufzeit ab. Dies liegt daran, dass der Wert nicht initialisiert wurde. Stellen Sie daher sicher, dass Ihr Subgraf seine Werte initialisiert hat, etwa so:

```typescript
var value = new Type() // initialized
@@ -360,7 +360,7 @@ value.x = 10
value.y = 'content'
```

-Also if you have nullable properties in a GraphQL entity, like this:
+Wenn Sie außerdem nullbare Eigenschaften in einer GraphQL-Entität haben, wie hier:

```graphql
type Total @entity {
@@ -369,7 +369,7 @@ type Total @entity {
}
```

-And you have code similar to this:
+Und Sie haben einen ähnlichen Code wie diesen:

```typescript
let total = Total.load('latest')

@@ -407,15 +407,15 @@ type Total @entity {
let total = Total.load('latest')

if (total === null) {
-  total = new Total('latest') // already initializes non-nullable properties
+  total = new Total('latest') // initialisiert bereits Eigenschaften, die keine NULL-Werte zulassen
}

total.amount = total.amount + BigInt.fromI32(1)
```

-### Class property initialization
+### Initialisierung von Klasseneigenschaften

-If you export any classes with properties that are other classes (declared by you or by the standard library) like this:
+Wenn Sie Klassen mit Eigenschaften exportieren, die andere Klassen sind (von Ihnen selbst oder von der Standardbibliothek deklariert), etwa so:

```typescript
class Thing {}
@@ -432,7 +432,7 @@ export class Something {
  constructor(public value: Thing) {}
}

-// oder
+// or

export class Something {
  value: Thing

@@ -442,7 +442,7 @@ export class Something {
  }
}

-// oder
+// or

export class Something {
  value!: Thing
@@ -459,7 +459,7 @@ let arr = new Array(5) // ["", "", "", "", ""]

arr.push('something') // ["", "", "", "", "", "something"] // size 6 :(
```

-Depending on the types you're using, eg nullable ones, and how you're accessing them, you might encounter a runtime error like this one:
+Je nach den Typen, die Sie verwenden (z. B. nullbare Typen), und je nachdem, wie Sie darauf zugreifen, kann es zu einem Laufzeitfehler wie diesem kommen:

```
ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type
@@ -473,7 +473,7 @@ let arr = new Array(0) // []

arr.push('something') // ["something"]
```

-Or you should mutate it via index:
+Oder Sie sollten es per Index mutieren:

```typescript
let arr = new Array(5) // ["", "", "", "", ""]
@@ -481,11 +481,11 @@ let arr = new Array(5) // ["", "", "", "", ""]

arr[0] = 'something' // ["something", "", "", "", ""]
```

-### GraphQL schema
+### GraphQL-Schema

Dies ist keine direkte AssemblyScript-Änderung, aber Sie müssen möglicherweise Ihre Datei `schema.graphql` aktualisieren.

-Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this:
+Jetzt können Sie in Ihren Typen keine Felder mehr definieren, die nicht nullbare Listen sind.
Wenn Sie über ein Schema wie dieses verfügen:

```graphql
type Something @entity {
@@ -513,7 +513,7 @@ type MyEntity @entity {

Dies hat sich aufgrund von Unterschieden in der Nullbarkeit zwischen AssemblyScript-Versionen geändert und hängt mit der Datei `src/generated/schema.ts` (Standardpfad, vielleicht haben Sie diesen geändert) zusammen.

-### Other
+### Sonstiges

- `Map#set` und `Set#add` wurden an die Spezifikation angepasst und geben `this` zurück ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2))
- Arrays erben nicht mehr von ArrayBufferView, sondern sind jetzt eigenständig ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0))

diff --git a/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx
index 68c70b711a60..a0b114383280 100644
--- a/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -1,62 +1,62 @@
---
-title: GraphQL Validations Migration Guide
+title: Anleitung zur Migration von GraphQL-Validierungen
---

-Soon `graph-node` will support 100% coverage of the [GraphQL Validations specification](https://spec.graphql.org/June2018/#sec-Validation).
+Bald wird `graph-node` eine 100-prozentige Abdeckung der [GraphQL Validations-Spezifikation](https://spec.graphql.org/June2018/#sec-Validation) unterstützen.

-Previous versions of `graph-node` did not support all validations and provided more graceful responses - so, in cases of ambiguity, `graph-node` was ignoring invalid GraphQL operations components.
+Frühere Versionen von `graph-node` unterstützten nicht alle Validierungen und lieferten nachsichtigere Antworten – daher ignorierte `graph-node` bei Unklarheiten ungültige GraphQL-Operationskomponenten.
-GraphQL Validations support is the pillar for the upcoming new features and the performance at scale of The Graph Network.
+Die Unterstützung von GraphQL-Validierungen ist die Grundlage für die kommenden neuen Funktionen und die skalierbare Leistung von The Graph Network.

-It will also ensure determinism of query responses, a key requirement on The Graph Network.
+Dadurch wird auch der Determinismus der Abfrageantworten sichergestellt, eine wichtige Anforderung für The Graph Network.

-**Enabling the GraphQL Validations will break some existing queries** sent to The Graph API.
+**Durch die Aktivierung der GraphQL-Validierungen funktionieren einige vorhandene Abfragen nicht mehr,** die an die Graph-API gesendet werden.

-To be compliant with those validations, please follow the migration guide.
+Um diese Validierungen einzuhalten, befolgen Sie bitte den Migrationsleitfaden.

-> ⚠️ If you do not migrate your queries before the validations are rolled out, they will return errors and possibly break your frontends/clients.
+> ⚠️ Wenn Sie Ihre Abfragen nicht migrieren, bevor die Validierungen eingeführt werden, werden Fehler zurückgegeben und möglicherweise Ihre Frontends/Clients beschädigt.

-## Migration guide
+## Migrationsleitfaden

-You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries.
+Mit dem CLI-Migrationstool können Sie Probleme in Ihren GraphQL-Vorgängen finden und beheben. Alternativ können Sie den Endpunkt Ihres GraphQL-Clients aktualisieren, um den Endpunkt `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` zu verwenden. Wenn Sie Ihre Abfragen anhand dieses Endpunkts testen, können Sie die Probleme in Ihren Abfragen leichter finden.
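Der Aufbau dieses Endpunkts lässt sich mit einer kleinen Hilfsfunktion skizzieren. Der Funktionsname ist frei gewählt; nur das URL-Muster stammt aus dem Text oben:

```typescript
// Hypothetische Hilfsfunktion: baut die Vorschau-Endpunkt-URL
// nach dem oben genannten Muster zusammen.
function buildPreviewEndpoint(githubUser: string, subgraphName: string): string {
  return `https://api-next.thegraph.com/subgraphs/name/${githubUser}/${subgraphName}`
}

// Datenbeispiel wie in den Anmerkungen weiter unten: artblocks/art-blocks
console.log(buildPreviewEndpoint('artblocks', 'art-blocks'))
// → https://api-next.thegraph.com/subgraphs/name/artblocks/art-blocks
```

Diese URL können Sie anschließend als Endpunkt in Ihrem bevorzugten GraphQL-Client eintragen, um bestehende Abfragen gegen die aktivierten Validierungen zu testen.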
-> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
+> Nicht alle Subgrafen müssen migriert werden: Wenn Sie [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) oder [GraphQL Code Generator](https://the-guild.dev/graphql/codegen) verwenden, stellen diese bereits sicher, dass Ihre Abfragen gültig sind.

-## Migration CLI tool
+## Migrations-CLI-Tool

-**Most of the GraphQL operations errors can be found in your codebase ahead of time.**
+**Die meisten GraphQL-Operationsfehler können im Voraus in Ihrer Codebasis gefunden werden.**

-For this reason, we provide a smooth experience for validating your GraphQL operations during development or in CI.
+Aus diesem Grund bieten wir eine reibungslose Validierung Ihrer GraphQL-Operationen während der Entwicklung oder in der CI.

-[`@graphql-validate/cli`](https://github.com/saihaj/graphql-validate) is a simple CLI tool that helps validate GraphQL operations against a given schema.
+[`@graphql-validate/cli`](https://github.com/saihaj/graphql-validate) ist ein einfaches CLI-Tool, das bei der Validierung von GraphQL-Operationen anhand eines bestimmten Schemas hilft.

-### **Getting started**
+### **Erste Schritte**

-You can run the tool as follows:
+Sie können das Tool wie folgt ausführen:

```bash
npx @graphql-validate/cli -s https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME -o *.graphql
```

-**Notes:**
+**Anmerkungen:**

-- Set or replace $GITHUB_USER, $SUBGRAPH_NAME with the appropriate values. Like: [`artblocks/art-blocks`](https://api.thegraph.com/subgraphs/name/artblocks/art-blocks)
-- The preview schema URL (https://api-next.thegraph.com/) provided is heavily rate-limited and will be sunset once all users have migrated to the new version. **Do not use it in production.**
-- Operations are identified in files with the following extensions [`.graphql`,](https://www.graphql-tools.com/docs/schema-loading#graphql-file-loader)[`.ts`, `.tsx`, `.js`, `jsx`](https://www.graphql-tools.com/docs/schema-loading#code-file-loader) (`-o` option).
+- Setzen oder ersetzen Sie $GITHUB_USER, $SUBGRAPH_NAME durch die entsprechenden Werte. Zum Beispiel: [`artblocks/art-blocks`](https://api.thegraph.com/subgraphs/name/artblocks/art-blocks)
+- Die bereitgestellte Vorschau-Schema-URL (https://api-next.thegraph.com/) ist stark ratenbeschränkt und wird eingestellt, sobald alle Benutzer auf die neue Version migriert sind. **Verwenden Sie sie nicht in der Produktion.**
+- Operationen werden in Dateien mit den folgenden Erweiterungen identifiziert: [`.graphql`,](https://www.graphql-tools.com/docs/schema-loading#graphql-file-loader)[`.ts`, `.tsx`, `.js`, `jsx`](https://www.graphql-tools.com/docs/schema-loading#code-file-loader) (Option `-o`).

-### CLI output
+### CLI-Ausgabe

-The `[@graphql-validate/cli](https://github.com/saihaj/graphql-validate)` CLI tool will output any GraphQL operations errors as follows:
+Das CLI-Tool [`@graphql-validate/cli`](https://github.com/saihaj/graphql-validate) gibt alle GraphQL-Operationsfehler wie folgt aus:

![Error output from CLI](https://i.imgur.com/x1cBdhq.png)

-For each error, you will find a description, file path and position, and a link to a solution example (see the following section).
+Zu jedem Fehler finden Sie eine Beschreibung, Dateipfad und -position sowie einen Link zu einem Lösungsbeispiel (siehe folgenden Abschnitt).

-## Run your local queries against the preview schema
+## Führen Sie Ihre lokalen Abfragen anhand des Vorschauschemas aus

-We provide an endpoint `https://api-next.thegraph.com/` that runs a `graph-node` version that has validations turned on.
+Wir stellen einen Endpunkt `https://api-next.thegraph.com/` bereit, der eine `graph-node`-Version ausführt, bei der Validierungen aktiviert sind.

-You can try out queries by sending them to:
+Sie können Abfragen ausprobieren, indem Sie diese an folgende Adresse senden:

- `https://api-next.thegraph.com/subgraphs/id/`

@@ -64,28 +64,28 @@ oder

- `https://api-next.thegraph.com/subgraphs/name//`

-To work on queries that have been flagged as having validation errors, you can use your favorite GraphQL query tool, like Altair or [GraphiQL](https://cloud.hasura.io/public/graphiql), and try your query out. Those tools will also mark those errors in their UI, even before you run it.
+Um Abfragen zu bearbeiten, bei denen Validierungsfehler gemeldet wurden, können Sie Ihr bevorzugtes GraphQL-Abfragetool wie Altair oder [GraphiQL](https://cloud.hasura.io/public/graphiql) verwenden und Ihre Abfrage ausprobieren. Diese Tools markieren diese Fehler auch in ihrer Benutzeroberfläche, noch bevor Sie sie ausführen.

-## How to solve issues
+## So lösen Sie Probleme

-Below, you will find all the GraphQL validations errors that could occur on your existing GraphQL operations.
+Nachfolgend finden Sie alle GraphQL-Validierungsfehler, die bei Ihren vorhandenen GraphQL-Vorgängen auftreten können.

-### GraphQL variables, operations, fragments, or arguments must be unique
+### GraphQL-Variablen, -Operationen, -Fragmente oder -Argumente müssen eindeutig sein

-We applied rules for ensuring that an operation includes a unique set of GraphQL variables, operations, fragments, and arguments.
+Wir haben Regeln angewendet, um sicherzustellen, dass eine Operation einen eindeutigen Satz von GraphQL-Variablen, -Operationen, -Fragmenten und -Argumenten enthält.

-A GraphQL operation is only valid if it does not contain any ambiguity.
+Eine GraphQL-Operation ist nur dann gültig, wenn sie keine Mehrdeutigkeit enthält.
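Das Grundprinzip solcher Eindeutigkeitsprüfungen (etwa der UniqueOperationNamesRule) lässt sich grob skizzieren. Die folgende, stark vereinfachte Funktion ist hypothetisch und kein Ersatz für einen echten GraphQL-Parser; sie sucht lediglich doppelte Operationsnamen in einem Dokument:

```typescript
// Stark vereinfachte Skizze (kein echter GraphQL-Parser, nur zur Veranschaulichung):
// sammelt Operationsnamen per regulärem Ausdruck und meldet Duplikate.
function findDuplicateOperationNames(document: string): string[] {
  const names = Array.from(
    document.matchAll(/\b(?:query|mutation|subscription)\s+([A-Za-z_]\w*)/g),
    (m) => m[1],
  )
  const seen = new Set<string>()
  const duplicates = new Set<string>()
  for (const name of names) {
    if (seen.has(name)) duplicates.add(name)
    seen.add(name)
  }
  return Array.from(duplicates)
}

// Zwei Abfragen mit demselben Namen in einem Dokument sind mehrdeutig:
console.log(findDuplicateOperationNames('query myData { id } query myData { name }'))
```

Ein echter Validator arbeitet natürlich auf dem geparsten Dokument statt mit regulären Ausdrücken; die Idee, dass doppelte Namen die Operation mehrdeutig machen, ist aber dieselbe.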
-To achieve that, we need to ensure that some components in your GraphQL operation must be unique.
+Um dies zu erreichen, müssen wir sicherstellen, dass einige Komponenten in Ihrer GraphQL-Operation eindeutig sind.

-Here's an example of a few invalid operations that violates these rules:
+Hier ist ein Beispiel für einige ungültige Vorgänge, die gegen diese Regeln verstoßen:

-**Duplicate Query name (#UniqueOperationNamesRule)**
+**Doppelter Abfragename (#UniqueOperationNamesRule)**

```graphql
-# The following operation violated the UniqueOperationName
-# rule, since we have a single operation with 2 queries
-# with the same name
+# Die folgende Operation verstößt gegen die UniqueOperationName-
+# Regel, da wir eine einzige Operation mit 2 Abfragen
+# mit demselben Namen haben
query myData {
  id
}
@@ -108,11 +108,11 @@ query myData2 {
}
```

-**Duplicate Fragment name (#UniqueFragmentNamesRule)**
+**Doppelter Fragmentname (#UniqueFragmentNamesRule)**

```graphql
-# The following operation violated the UniqueFragmentName
-# rule.
+# Die folgende Operation verstößt gegen die
+# UniqueFragmentName-Regel.
query myData {
  id
  ...MyFields
}
@@ -136,19 +136,19 @@ query myData {
  ...MyFieldsMetadata
}

-fragment MyFieldsMetadata { # assign a unique name to fragment
+fragment MyFieldsMetadata { # dem Fragment einen eindeutigen Namen zuweisen
  metadata
}

-fragment MyFieldsName { # assign a unique name to fragment
+fragment MyFieldsName { # dem Fragment einen eindeutigen Namen zuweisen
  name
}
```

-**Duplicate variable name (#UniqueVariableNamesRule)**
+**Doppelter Variablenname (#UniqueVariableNamesRule)**

```graphql
-# The following operation violates the UniqueVariables
+# Die folgende Operation verstößt gegen die UniqueVariables
query myData($id: String, $id: Int) {
  id
  ...MyFields
@@ -159,16 +159,16 @@ _Lösung:_

```graphql
query myData($id: String) {
-  # keep the relevant variable (here: `$id: String`)
+  # die relevante Variable beibehalten (hier: `$id: String`)
  id
  ...MyFields
}
```

-**Duplicate argument name (#UniqueArgument)**
+**Doppelter Argumentname (#UniqueArgument)**

```graphql
-# The following operation violated the UniqueArguments
+# Die folgende Operation verstößt gegen die UniqueArguments
query myData($id: ID!) {
  userById(id: $id, id: "1") {
    id
@@ -186,13 +186,13 @@ query myData($id: ID!) {
}
```

-**Duplicate anonymous query (#LoneAnonymousOperationRule)**
+**Doppelte anonyme Abfrage (#LoneAnonymousOperationRule)**

-Also, using two anonymous operations will violate the `LoneAnonymousOperation` rule due to conflict in the response structure:
+Außerdem verstößt die Verwendung von zwei anonymen Vorgängen aufgrund eines Konflikts in der Antwortstruktur gegen die Regel `LoneAnonymousOperation`:

```graphql
-# This will fail if executed together in
-# a single operation with the following two queries:
+# Dies wird fehlschlagen, wenn es gleichzeitig in
+# einer einzelnen Operation mit den folgenden zwei Abfragen ausgeführt wird:
query {
  someField
}
@@ -211,7 +211,7 @@ query {
}
```

-Or name the two queries:
+Oder benennen Sie die beiden Abfragen:

```graphql
query FirstQuery {
@@ -223,20 +223,20 @@ query SecondQuery {
}
```

-### Overlapping Fields
+### Überlappende Felder

-A GraphQL selection set is considered valid only if it correctly resolves the eventual result set.
+Ein GraphQL-Auswahlsatz wird nur dann als gültig angesehen, wenn er den endgültigen Ergebnissatz korrekt auflöst.

-If a specific selection set, or a field, creates ambiguity either by the selected field or by the arguments used, the GraphQL service will fail to validate the operation.
+Wenn ein bestimmter Auswahlsatz oder ein Feld entweder durch das ausgewählte Feld oder durch die verwendeten Argumente Mehrdeutigkeiten erzeugt, kann der GraphQL-Dienst den Vorgang nicht validieren.

-Here are a few examples of invalid operations that violate this rule:
+Hier sind einige Beispiele für ungültige Vorgänge, die gegen diese Regel verstoßen:

-**Conflicting fields aliases (#OverlappingFieldsCanBeMergedRule)**
+**Widersprüchliche Feldaliase (#OverlappingFieldsCanBeMergedRule)**

```graphql
-# Aliasing fields might cause conflicts, either with
-# other aliases or other fields that exist on the
-# GraphQL schema.
+# Alias-Felder können Konflikte verursachen, entweder mit
+# anderen Aliasen oder anderen Feldern, die im
+# GraphQL-Schema vorhanden sind.
query {
  dogs {
    name: nickname
@@ -256,11 +256,11 @@ query {
}
```

-**Conflicting fields with arguments (#OverlappingFieldsCanBeMergedRule)**
+**Widersprüchliche Felder mit Argumenten (#OverlappingFieldsCanBeMergedRule)**

```graphql
-# Different arguments might lead to different data,
-# so we can't assume the fields will be the same.
+# Unterschiedliche Argumente können zu unterschiedlichen Daten führen,
+# daher können wir nicht davon ausgehen, dass die Felder gleich sind.
query {
  dogs {
    doesKnowCommand(dogCommand: SIT)
@@ -280,12 +280,12 @@ query {
}
```

-Also, in more complex use-cases, you might violate this rule by using two fragments that might cause a conflict in the eventually expected set:
+Außerdem könnten Sie in komplexeren Anwendungsfällen gegen diese Regel verstoßen, indem Sie zwei Fragmente verwenden, die einen Konflikt in der letztendlich erwarteten Menge verursachen könnten:

```graphql
query {
-  # Eventually, we have two "x" definitions, pointing
-  # to different fields!
+  # Letztendlich haben wir zwei „x“-Definitionen, die
+  # auf verschiedene Felder verweisen!
  ...A
  ...B
}
@@ -299,7 +299,7 @@ fragment B on Type {
}
```

-In addition to that, client-side GraphQL directives like `@skip` and `@include` might lead to ambiguity, for example:
+Darüber hinaus können clientseitige GraphQL-Direktiven wie `@skip` und `@include` zu Unklarheiten führen, zum Beispiel:

```graphql
fragment mergeSameFieldsWithSameDirectives on Dog {
@@ -308,18 +308,18 @@ fragment mergeSameFieldsWithSameDirectives on Dog {
}
```

-[You can read more about the algorithm here.](https://spec.graphql.org/June2018/#sec-Field-Selection-Merging)
+[Mehr über den Algorithmus können Sie hier lesen.](https://spec.graphql.org/June2018/#sec-Field-Selection-Merging)

-### Unused Variables or Fragments
+### Unbenutzte Variablen oder Fragmente

-A GraphQL operation is also considered valid only if all operation-defined components (variables, fragments) are used.
+Eine GraphQL-Operation gilt auch nur dann als gültig, wenn alle durch die Operation definierten Komponenten (Variablen, Fragmente) verwendet werden.

-Here are a few examples for GraphQL operations that violates these rules:
+Hier sind einige Beispiele für GraphQL-Operationen, die gegen diese Regeln verstoßen:

-**Unused variable** (#NoUnusedVariablesRule)
+**Unbenutzte Variable** (#NoUnusedVariablesRule)

```graphql
-# Invalid, because $someVar is never used.
+# Ungültig, da $someVar nie verwendet wird.
query something($someVar: String) {
  someData
}
@@ -333,10 +333,10 @@ query something {
}
```

-**Unused Fragment** (#NoUnusedFragmentsRule)
+**Unbenutztes Fragment** (#NoUnusedFragmentsRule)

```graphql
-# Invalid, because fragment AllFields is never used.
+# Ungültig, da das Fragment AllFields nie verwendet wird.
query something {
  someData
}
@@ -350,22 +350,22 @@ fragment AllFields { # unused :(

_Lösung:_

```graphql
-# Invalid, because fragment AllFields is never used.
+# Ungültig, da das Fragment AllFields nie verwendet wird.
query something {
  someData
}

-# remove the `AllFields` fragment
+# das `AllFields`-Fragment entfernen
```

-### Invalid or missing Selection-Set (#ScalarLeafsRule)
+### Ungültiger oder fehlender Auswahlsatz (#ScalarLeafsRule)

-Also, a GraphQL field selection is only valid if the following is validated:
+Außerdem ist eine GraphQL-Feldauswahl nur dann gültig, wenn Folgendes validiert ist:

-- An object field must-have selection set specified.
-- An edge field (scalar, enum) must not have a selection set specified.
+- Für ein Objektfeld muss ein Auswahlsatz angegeben werden.
+- Für ein Kantenfeld (Skalar, Enumeration) darf kein Auswahlsatz angegeben sein.

-Here are a few examples of violations of these rules with the following Schema:
+Hier sind einige Beispiele für Verstöße gegen diese Regeln mit dem folgenden Schema:

```graphql
type Image {
@@ -382,12 +382,12 @@ type Query {
}
```

-**Invalid Selection-Set**
+**Ungültiger Auswahlsatz**

```graphql
query {
  user {
-    id { # Invalid, because "id" is of type ID and does not have sub-fields
+    id { # Ungültig, da „id“ vom Typ ID ist und keine Unterfelder hat

    }
  }

@@ -404,13 +404,13 @@ query {
}
```

-**Missing Selection-Set**
+**Fehlender Auswahlsatz**

```graphql
query {
  user {
    id
-    image # `image` requires a Selection-Set for sub-fields!
+    image # `image` erfordert einen Auswahlsatz für Unterfelder!
  }
}
```
@@ -428,49 +428,49 @@ query {
}
```

-### Incorrect Arguments values (#VariablesInAllowedPositionRule)
+### Falsche Argumentwerte (#VariablesInAllowedPositionRule)

-GraphQL operations that pass hard-coded values to arguments must be valid, based on the value defined in the schema.
+GraphQL-Operationen, die fest codierte Werte an Argumente übergeben, müssen basierend auf dem im Schema definierten Wert gültig sein.
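Was der Server dabei prüft, lässt sich vereinfacht skizzieren: Der im Schema deklarierte Skalartyp muss zum übergebenen Wert passen. Typnamen und Funktion sind hier hypothetisch und bilden kein echtes GraphQL-Typsystem ab:

```typescript
// Vereinfachte, hypothetische Skizze einer Argumentprüfung gegen einen
// deklarierten Skalartyp (kein echtes GraphQL-Typsystem).
type ScalarType = 'String' | 'Int' | 'Boolean'

function isValidArgumentValue(declared: ScalarType, value: unknown): boolean {
  switch (declared) {
    case 'String':
      return typeof value === 'string'
    case 'Int':
      return typeof value === 'number' && Number.isInteger(value)
    case 'Boolean':
      return typeof value === 'boolean'
    default:
      return false
  }
}

// Ist ein Argument im Schema als String deklariert, ist der Wert 1 ungültig:
console.log(isValidArgumentValue('String', 1)) // → false
console.log(isValidArgumentValue('String', 'governance')) // → true
```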
-Here are a few examples of invalid operations that violate these rules:
+Hier sind einige Beispiele für ungültige Vorgänge, die gegen diese Regeln verstoßen:

```graphql
query purposes {
-  # If "name" is defined as "String" in the schema,
-  # this query will fail during validation.
+  # Wenn „name“ im Schema als „String“ definiert ist,
+  # schlägt diese Abfrage während der Validierung fehl.
  purpose(name: 1) {
    id
  }
}

-# This might also happen when an incorrect variable is defined:
+# Dies kann auch passieren, wenn eine falsche Variable definiert wurde:

query purposes($name: Int!) {
-  # If "name" is defined as `String` in the schema,
-  # this query will fail during validation, because the
-  # variable used is of type `Int`
+  # Wenn „name“ im Schema als `String` definiert ist,
+  # schlägt diese Abfrage während der Validierung fehl, da die
+  # verwendete Variable vom Typ `Int` ist
  purpose(name: $name) {
    id
  }
}
```

-### Unknown Type, Variable, Fragment, or Directive (#UnknownX)
+### Unbekannter Typ, unbekannte Variable, unbekanntes Fragment oder unbekannte Direktive (#UnknownX)

-The GraphQL API will raise an error if any unknown type, variable, fragment, or directive is used.
+Die GraphQL-API löst einen Fehler aus, wenn ein unbekannter Typ, eine unbekannte Variable, ein unbekanntes Fragment oder eine unbekannte Direktive verwendet wird.

-Those unknown references must be fixed:
+Diese unbekannten Referenzen müssen korrigiert werden:

-- rename if it was a typo
-- otherwise, remove
+- umbenennen, wenn es ein Tippfehler war
+- andernfalls entfernen

-### Fragment: invalid spread or definition
+### Fragment: ungültiger Spread oder ungültige Definition

-**Invalid Fragment spread (#PossibleFragmentSpreadsRule)**
+**Ungültiger Fragment-Spread (#PossibleFragmentSpreadsRule)**

-A Fragment cannot be spread on a non-applicable type.
+Ein Fragment kann nicht auf einen nicht anwendbaren Typ verteilt werden.

-Example, we cannot apply a `Cat` fragment to the `Dog` type:
+Beispiel: Wir können kein `Cat`-Fragment auf den Typ `Dog` anwenden:

```graphql
query {
@@ -484,33 +484,33 @@ fragment CatSimple on Cat {
}
```

-**Invalid Fragment definition (#FragmentsOnCompositeTypesRule)**
+**Ungültige Fragmentdefinition (#FragmentsOnCompositeTypesRule)**

-All Fragment must be defined upon (using `on ...`) a composite type, in short: object, interface, or union.
+Alle Fragmente müssen auf einem zusammengesetzten Typ (mit `on ...`) definiert werden, kurz gesagt: Objekt, Schnittstelle oder Union.

-The following examples are invalid, since defining fragments on scalars is invalid.
+Die folgenden Beispiele sind ungültig, da die Definition von Fragmenten auf Skalaren ungültig ist.

```graphql
fragment fragOnScalar on Int {
-  # we cannot define a fragment upon a scalar (`Int`)
+  # wir können kein Fragment auf einem Skalar (`Int`) definieren
  something
}

fragment inlineFragOnScalar on Dog {
  ... on Boolean {
-    # `Boolean` is not a subtype of `Dog`
+    # `Boolean` ist kein Subtyp von `Dog`
    somethingElse
  }
}
```

-### Directives usage
+### Verwendung von Direktiven

-**Directive cannot be used at this location (#KnownDirectivesRule)**
+**Direktive kann an dieser Stelle nicht verwendet werden (#KnownDirectivesRule)**

-Only GraphQL directives (`@...`) supported by The Graph API can be used.
+Es können nur GraphQL-Direktiven (`@...`) verwendet werden, die von der Graph-API unterstützt werden.
-Here is an example with The GraphQL supported directives:
+Hier ist ein Beispiel mit von The Graph unterstützten Direktiven:

```graphql
query {
@@ -521,11 +521,11 @@ query {
}
```

-_Note: `@stream`, `@live`, `@defer` are not supported._
+_Hinweis: `@stream`, `@live`, `@defer` werden nicht unterstützt._

-**Directive can only be used once at this location (#UniqueDirectivesPerLocationRule)**
+**Direktive kann an dieser Stelle nur einmal verwendet werden (#UniqueDirectivesPerLocationRule)**

-The directives supported by The Graph can only be used once per location.
+Die von The Graph unterstützten Direktiven können nur einmal pro Stelle verwendet werden.

Folgendes ist ungültig (und überflüssig):

diff --git a/website/src/pages/de/resources/roles/curating.mdx b/website/src/pages/de/resources/roles/curating.mdx
index 7d145d84ab5e..40f0110f505f 100644
--- a/website/src/pages/de/resources/roles/curating.mdx
+++ b/website/src/pages/de/resources/roles/curating.mdx
@@ -2,37 +2,37 @@
title: Kuratieren
---

-Kuratoren sind entscheidend für die dezentrale Wirtschaft von The Graph. Sie nutzen ihr Wissen über das web3-Ökosystem, um die Subgraphen zu bewerten und zu signalisieren, die von The Graph Network indiziert werden sollten. Über den Graph Explorer sehen die Kuratoren die Netzwerkdaten, um Signalisierungsentscheidungen zu treffen. Im Gegenzug belohnt The Graph Network Kuratoren, die auf qualitativ hochwertige Subgraphen hinweisen, mit einem Anteil an den Abfragegebühren, die diese Subgraphen generieren. Die Höhe der signalisierten GRT ist eine der wichtigsten Überlegungen für Indexer bei der Entscheidung, welche Subgraphen indiziert werden sollen.
+Kuratoren sind entscheidend für die dezentrale Wirtschaft von The Graph. Sie nutzen ihr Wissen über das web3-Ökosystem, um die Subgraphen zu bewerten und zu signalisieren, die von The Graph Network indiziert werden sollten. Über den Graph Explorer sehen die Kuratoren die Netzwerkdaten, um Signalisierungsentscheidungen zu treffen. Im Gegenzug belohnt The Graph Network Kuratoren, die auf qualitativ hochwertige Subgraphen hinweisen, mit einem Anteil an den Abfragegebühren, die diese Subgraphen generieren. Die Höhe der signalisierten GRT ist eine der wichtigsten Überlegungen für Indexierer bei der Entscheidung, welche Subgraphen indiziert werden sollen.

## Was bedeutet Signalisierung für The Graph Network?

-Bevor Verbraucher einen Subgraphen abfragen können, muss er indiziert werden. An dieser Stelle kommt die Kuratierung ins Spiel. Damit Indexer erhebliche Abfragegebühren für hochwertige Subgraphen verdienen können, müssen sie wissen, welche Subgraphen indiziert werden sollen. Wenn Kuratoren ein Signal für einen Subgraphen geben, wissen Indexer, dass ein Subgraph gefragt und von ausreichender Qualität ist, so dass er indiziert werden sollte.
+Bevor Verbraucher einen Subgraphen abfragen können, muss er indiziert werden. An dieser Stelle kommt die Kuratierung ins Spiel. Damit Indexierer erhebliche Abfragegebühren für hochwertige Subgraphen verdienen können, müssen sie wissen, welche Subgraphen indiziert werden sollen. Wenn Kuratoren ein Signal für einen Subgraphen geben, wissen Indexierer, dass ein Subgraph gefragt und von ausreichender Qualität ist, so dass er indiziert werden sollte.

-Kuratoren machen das The Graph Netzwerk effizient und [signaling](#how-to-signal) ist der Prozess, den Kuratoren verwenden, um Indexer wissen zu lassen, dass ein Subgraph gut zu indizieren ist. Indexer können dem Signal eines Kurators vertrauen, da Kuratoren nach dem Signalisieren einen Kurationsanteil für den Subgraphen prägen, der sie zu einem Teil der zukünftigen Abfragegebühren berechtigt, die der Subgraph verursacht.
+Kuratoren machen das The Graph-Netzwerk effizient, und die [Signalisierung](#how-to-signal) ist der Prozess, den Kuratoren verwenden, um Indexierer wissen zu lassen, dass ein Subgraph gut zu indizieren ist. Indexierer können dem Signal eines Kurators vertrauen, da Kuratoren nach dem Signalisieren einen Kurationsanteil für den Subgraphen prägen, der sie zu einem Teil der zukünftigen Abfragegebühren berechtigt, die der Subgraph verursacht.

-Die Signale der Kuratoren werden als ERC20-Token dargestellt, die Graph Curation Shares (GCS) genannt werden. Diejenigen, die mehr Abfragegebühren verdienen wollen, sollten ihre GRT an Subgraphen signalisieren, von denen sie vorhersagen, dass sie einen starken Gebührenfluss an das Netzwerk generieren werden. Kuratoren können nicht für schlechtes Verhalten bestraft werden, aber es gibt eine Einlagensteuer für Kuratoren, um von schlechten Entscheidungen abzuschrecken, die der Integrität des Netzwerks schaden könnten. Kuratoren werden auch weniger Abfragegebühren verdienen, wenn sie einen Subgraphen von geringer Qualität kuratieren, weil es weniger Abfragen zu bearbeiten gibt oder weniger Indexer, die sie bearbeiten.
+Die Signale der Kuratoren werden als ERC20-Token dargestellt, die Graph Curation Shares (GCS) genannt werden. Diejenigen, die mehr Abfragegebühren verdienen wollen, sollten ihre GRT an Subgraphen signalisieren, von denen sie vorhersagen, dass sie einen starken Gebührenfluss an das Netzwerk generieren werden. Kuratoren können nicht für schlechtes Verhalten bestraft werden, aber es gibt eine Einlagensteuer für Kuratoren, um von schlechten Entscheidungen abzuschrecken, die der Integrität des Netzwerks schaden könnten. Kuratoren werden auch weniger Abfragegebühren verdienen, wenn sie einen Subgraphen von geringer Qualität kuratieren, weil es weniger Abfragen zu bearbeiten gibt oder weniger Indexierer, die sie bearbeiten.

-Der [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) stellt die Indizierung aller Subgraphen sicher und signalisiert, dass GRT auf einem bestimmten Subgraphen mehr Indexer anzieht. Dieser Anreiz für zusätzliche Indexer durch Kuration zielt darauf ab, die Servicequalität für Abfragen zu verbessern, indem die Latenzzeit verringert und die Netzwerkverfügbarkeit erhöht wird.
+Der [Sunrise Upgrade Indexierer](/archived/sunrise/#what-is-the-upgrade-indexer) stellt die Indizierung aller Subgraphen sicher; das Signalisieren von GRT auf einem bestimmten Subgraphen zieht zusätzliche Indexierer an. Dieser Anreiz für zusätzliche Indexierer durch Kuration zielt darauf ab, die Servicequalität für Abfragen zu verbessern, indem die Latenzzeit verringert und die Netzwerkverfügbarkeit erhöht wird.

Bei der Signalisierung können Kuratoren entscheiden, ob sie für eine bestimmte Version des Subgraphen signalisieren wollen oder ob sie die automatische Migration verwenden wollen. Bei der automatischen Migration werden die Freigaben eines Kurators immer auf die neueste vom Entwickler veröffentlichte Version aktualisiert. Wenn sie sich stattdessen für eine bestimmte Version entscheiden, bleiben die Freigaben immer auf dieser spezifischen Version.

Wenn Sie Unterstützung bei der Kuratierung benötigen, um die Qualität des Dienstes zu verbessern, senden Sie bitte eine Anfrage an das Edge & Node-Team unter support@thegraph.zendesk.com und geben Sie die Subgraphen an, für die Sie Unterstützung benötigen.

-Indexer können Subgraphen für die Indizierung auf der Grundlage von Kurationssignalen finden, die sie im Graph Explorer sehen (siehe Screenshot unten).
+Indexierer können Subgraphen für die Indizierung auf der Grundlage von Kurationssignalen finden, die sie im Graph Explorer sehen (siehe Screenshot unten).

![Explorer-Subgrafen](/img/explorer-subgraphs.png)

## Wie man signalisiert

-Auf der Registerkarte Kurator im Graph Explorer können Kuratoren bestimmte Subgraphen auf der Grundlage von Netzwerkstatistiken an- und abmelden. Einen schrittweisen Überblick über die Vorgehensweise im Graph Explorer finden Sie [hier](/subgraphs/explorer/)
+Auf der Registerkarte „Kurator“ im Graph Explorer können Kuratoren bestimmte Subgraphen auf der Grundlage von Netzwerkstatistiken an- und abmelden. Einen schrittweisen Überblick über die Vorgehensweise im Graph Explorer finden Sie [hier](/subgraphs/explorer/).

Ein Kurator kann sich dafür entscheiden, ein Signal für eine bestimmte Subgraph-Version abzugeben, oder er kann sein Signal automatisch auf die neueste Produktionsversion dieses Subgraphen migrieren lassen. Beides sind gültige Strategien und haben ihre eigenen Vor- und Nachteile.

-Die Signalisierung einer bestimmten Version ist besonders nützlich, wenn ein Subgraph von mehreren Dapps verwendet wird. Eine Dapp muss den Subgraph vielleicht regelmäßig mit neuen Funktionen aktualisieren. Eine andere Dapp zieht es vielleicht vor, eine ältere, gut getestete Version des Subgraphs zu verwenden. Bei der ersten Kuration fällt eine Standardsteuer von 1% an.
+Die Signalisierung einer bestimmten Version ist besonders nützlich, wenn ein Subgraph von mehreren Dapps verwendet wird. Eine Dapp muss den Subgraphen vielleicht regelmäßig mit neuen Funktionen aktualisieren. Eine andere Dapp zieht es vielleicht vor, eine ältere, gut getestete Version des Subgraphen zu verwenden. Bei der ersten Kuration fällt eine Standardsteuer von 1% an.

Die automatische Migration Ihres Signals zum neuesten Produktions-Build kann sich als nützlich erweisen, um sicherzustellen, dass Sie weiterhin Abfragegebühren anfallen. Jedes Mal, wenn Sie kuratieren, fällt eine Kuratierungssteuer von 1 % an. Außerdem zahlen Sie bei jeder Migration eine Kuratierungssteuer von 0,5 %. Subgraph-Entwickler werden davon abgehalten, häufig neue Versionen zu veröffentlichen - sie müssen eine Kurationssteuer von 0,5 % auf alle automatisch migrierten Kurationsanteile zahlen.
-> **Anmerkung**: Die erste Adresse, die einen bestimmten Subgraph signalisiert, wird als erster Kurator betrachtet und muss viel mehr Arbeit leisten als die übrigen folgenden Kuratoren, da der erste Kurator die Kurationsaktien-Token initialisiert und außerdem Token in den Graph-Proxy überträgt. +> **Anmerkung**: Die erste Adresse, die einen bestimmten Subgraphen signalisiert, wird als erster Kurator betrachtet und muss viel mehr Arbeit leisten als die übrigen folgenden Kuratoren, da der erste Kurator die Kurationsaktien-Token initialisiert und außerdem Token in den Graph-Proxy überträgt. ## Abhebung Ihrer GRT @@ -40,7 +40,7 @@ Die Kuratoren haben jederzeit die Möglichkeit, ihre signalisierten GRT zurückz Anders als beim Delegieren müssen Sie, wenn Sie sich entscheiden, Ihr signalisiertes GRT abzuheben, keine Abkühlungsphase abwarten und erhalten den gesamten Betrag (abzüglich der 1 % Kurationssteuer). -Sobald ein Kurator sein Signal zurückzieht, können die Indexer den Subgraphen weiter indizieren, auch wenn derzeit kein aktives GRT signalisiert wird. +Sobald ein Kurator sein Signal zurückzieht, können die Indexierer den Subgraphen weiter indizieren, auch wenn derzeit kein aktives GRT signalisiert wird. Es wird jedoch empfohlen, dass Kuratoren ihr signalisiertes GRT bestehen lassen, nicht nur um einen Teil der Abfragegebühren zu erhalten, sondern auch um die Zuverlässigkeit und Betriebszeit des Subgraphen zu gewährleisten. @@ -48,8 +48,8 @@ Es wird jedoch empfohlen, dass Kuratoren ihr signalisiertes GRT bestehen lassen, 1. Der Abfragemarkt ist bei The Graph noch sehr jung, und es besteht das Risiko, dass Ihr %APY aufgrund der noch jungen Marktdynamik niedriger ist als Sie erwarten. 2. Kurationsgebühr - wenn ein Kurator GRT auf einem Subgraphen meldet, fällt eine Kurationsgebühr von 1% an. Diese Gebühr wird verbrannt. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. 
Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Ein Subgraph kann aufgrund eines Fehlers fehlschlagen. Für einen fehlgeschlagenen Subgraph fallen keine Abfragegebühren an. Daher müssen Sie warten, bis der Entwickler den Fehler behebt und eine neue Version bereitstellt. +3. (Nur Ethereum) Wenn Kuratoren ihre Anteile verbrennen, um GRT abzuziehen, wird die GRT-Bewertung der verbleibenden Anteile reduziert. Bitte beachten Sie, dass Kuratoren in manchen Fällen beschließen können, ihre Anteile **alle auf einmal** zu verbrennen. Dies kann der Fall sein, wenn ein Dapp-Entwickler die Versionierung/Verbesserung und Abfrage seines Subgraphen einstellt oder wenn ein Subgraph ausfällt. Infolgedessen können die verbleibenden Kuratoren möglicherweise nur einen Bruchteil ihres ursprünglichen GRT abheben. Für eine Netzwerkrolle mit einem geringeren Risikoprofil siehe [Delegatoren](/resources/roles/delegating/). +4. Ein Subgraph kann aufgrund eines Fehlers fehlschlagen. Für einen fehlgeschlagenen Subgraphen fallen keine Abfragegebühren an. Daher müssen Sie warten, bis der Entwickler den Fehler behebt und eine neue Version bereitstellt. - Wenn Sie die neueste Version eines Subgraphen abonniert haben, werden Ihre Anteile automatisch zu dieser neuen Version migriert. Dabei fällt eine Kurationsgebühr von 0,5 % an. - Wenn Sie für eine bestimmte Version eines Subgraphen ein Signal gegeben haben und dieses fehlschlägt, müssen Sie Ihre Kurationsanteile manuell verbrennen. Sie können dann ein Signal für die neue Subgraph-Version geben, wodurch eine Kurationssteuer von 1 % anfällt.
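The curation taxes described in the page above (1% burned on each new signal, plus 0.5% per auto-migrated curation share) can be sketched as a quick calculation. This is an illustrative sketch only: the amounts are assumed example values, and the on-chain bonding-curve pricing of curation shares is deliberately not modeled.

```python
# Sketch of the curation taxes described above. Example amounts are
# assumptions; bonding-curve share pricing is intentionally omitted.
CURATION_TAX = 0.01         # 1% burned when signaling on a subgraph
AUTO_MIGRATION_TAX = 0.005  # 0.5% per auto-migration to a new version

def signal(grt: float) -> float:
    """GRT actually signaled after the 1% curation tax is burned."""
    return grt * (1 - CURATION_TAX)

def after_migrations(signaled_grt: float, migrations: int) -> float:
    """Signal remaining after n auto-migrations at 0.5% tax each."""
    return signaled_grt * (1 - AUTO_MIGRATION_TAX) ** migrations

initial = signal(10_000)                       # 100 GRT burned up front
print(round(initial, 2))                       # 9900.0
print(round(after_migrations(initial, 3), 2))  # 9752.24
```

Each new version a subgraph developer releases costs auto-migrated curators another 0.5%, which is one reason the text notes that frequent releases are discouraged.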
diff --git a/website/src/pages/de/resources/roles/delegating/delegating.mdx b/website/src/pages/de/resources/roles/delegating/delegating.mdx index 5bdf77f185b0..04f196f7ba16 100644 --- a/website/src/pages/de/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/de/resources/roles/delegating/delegating.mdx @@ -2,54 +2,54 @@ title: Delegieren --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +Wenn Sie sofort mit dem Delegieren beginnen möchten, schauen Sie sich [Delegieren auf The Graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one) an. ## Überblick -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +Delegatoren verdienen GRT, indem sie GRT an Indexierer delegieren, was die Sicherheit und Funktionalität des Netzwerks erhöht. -## Benefits of Delegating +## Vorteile des Delegierens -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers. +- Stärkung der Sicherheit und Skalierbarkeit des Netzwerks durch Unterstützung von Indexierern. +- Verdienen Sie einen Teil der Rewards, die von den Indexierern generiert werden. -## How Does Delegation Work? +## Wie funktioniert die Delegation? -Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. +Delegatoren erhalten GRT Rewards von dem/den Indexierer(n), an den/die sie ihr GRT delegieren. -An Indexer's ability to process queries and earn rewards depends on three key factors: +Die Fähigkeit eines Indexierers, Abfragen zu verarbeiten und Rewards zu verdienen, hängt von drei Schlüsselfaktoren ab: -1. The Indexer's Self-Stake (GRT staked by the Indexer). -2. The total GRT delegated to them by Delegators. -3. The price the Indexer sets for queries. +1. Die Selbstbeteiligung des Indexierers (GRT, die vom Indexierer eingesetzt werden). +2.
Die gesamte GRT, die ihnen von den Delegatoren übertragen wurde. +3. Der Preis, den der Indexierer für Abfragen festlegt. -The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer. +Je mehr GRT eingesetzt und an einen Indexierer delegiert werden, desto mehr Abfragen können bedient werden, was zu höheren potenziellen Rewards sowohl für den Delegator als auch für den Indexierer führt. -### What is Delegation Capacity? +### Was ist Delegationskapazität? -Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. +Die Delegationskapazität bezieht sich auf die maximale Menge an GRT, die ein Indexierer von Delegatoren annehmen kann, basierend auf der Selbstbeteiligung des Indexierers. -The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. +The Graph Network beinhaltet ein Delegationsverhältnis von 16, d. h. ein Indexierer kann bis zum 16-fachen seiner Selbstbeteiligung an delegiertem GRT annehmen. -For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +Ein Beispiel: Wenn ein Indexierer eine Selbstbeteiligung von 1 Mio. GRT hat, beträgt seine Delegationskapazität 16 Mio. GRT. -### Why Does Delegation Capacity Matter? +### Warum ist die Delegationskapazität so wichtig? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +Wenn ein Indexierer seine Delegationskapazität überschreitet, werden die Rewards für alle Delegatoren verwässert, da das überschüssige delegierte GRT innerhalb des Protokolls nicht effektiv genutzt werden kann. -This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer.
+Daher ist es für die Delegatoren von entscheidender Bedeutung, die aktuelle Delegationskapazität eines Indexierers zu bewerten, bevor sie einen Indexierer auswählen. -Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +Indexierer können ihre Delegationskapazität erhöhen, indem sie ihre Selbstbeteiligung erhöhen und damit das Limit für delegierte Token anheben. -## Delegation on The Graph +## Delegation auf The Graph -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> Bitte beachten Sie, dass dieser Leitfaden nicht auf Schritte wie die Einrichtung von MetaMask eingeht. Die Ethereum Community bietet eine [umfassende Ressource zu Wallets](https://ethereum.org/en/wallets/). -There are two sections in this guide: +Dieser Leitfaden besteht aus zwei Abschnitten: - Die Risiken der Übertragung von Token in The Graph Network - Wie man als Delegator die erwarteten Erträge berechnet @@ -58,7 +58,7 @@ There are two sections in this guide: Nachfolgend sind die wichtigsten Risiken aufgeführt, die mit der Tätigkeit eines Delegators im Protokoll verbunden sind. -### The Delegation Tax +### Die Delegationssteuer Delegatoren können nicht für schlechtes Verhalten bestraft werden, aber es gibt eine Steuer für Delegatoren, um von schlechten Entscheidungen abzuschrecken, die die Integrität des Netzes beeinträchtigen könnten. @@ -68,21 +68,21 @@ Als Delegator ist es wichtig, die folgenden Punkte zu verstehen: - Um auf Nummer sicher zu gehen, sollten Sie Ihre potenzielle Rendite berechnen, wenn Sie einen Indexer beauftragen. Als Beispiel könnten Sie berechnen, wie viele Tage es dauern wird, bis Sie die 0,5 % Steuer auf Ihre Delegation zurückverdient haben.
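The break-even calculation suggested in the bullet above can be sketched as follows. The 10% annual rate is an assumed example figure, not a protocol parameter; real returns depend on the chosen Indexer.

```python
# Days needed to earn back the 0.5% delegation tax, as suggested above.
# The 10% APY used below is an assumed example figure, not a protocol constant.
DELEGATION_TAX = 0.005  # 0.5%, burned when delegating

def breakeven_days(delegated_grt: float, apy: float) -> float:
    """Days of rewards needed to recover the delegation tax."""
    tax_paid = delegated_grt * DELEGATION_TAX
    # Rewards accrue on the amount that remains after the tax is burned.
    daily_rewards = delegated_grt * (1 - DELEGATION_TAX) * apy / 365
    return tax_paid / daily_rewards

print(round(breakeven_days(10_000, 0.10), 1))  # 18.3 days at an assumed 10% APY
```

Because both the tax and the rewards scale with the delegated amount, the break-even time depends only on the assumed annual rate, not on how much GRT is delegated.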
-### The Undelegation Period +### Der Zeitraum der Aufhebung der Delegation -When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. +Wenn ein Delegator die Delegation aufhebt, gilt für seine Token eine 28-tägige Aufhebungsfrist. -This means they cannot transfer their tokens or earn any rewards for 28 days. +Das bedeutet, dass sie 28 Tage lang weder ihre Token übertragen noch Rewards verdienen können. -After the undelegation period, GRT will return to your crypto wallet. +Nach Ablauf der Aufhebungsfrist wird GRT in Ihre Wallet zurückgegeben. ### Warum ist das wichtig? -If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. +Wenn Sie sich für einen Indexierer entscheiden, der nicht vertrauenswürdig ist oder keine gute Arbeit leistet, werden Sie die Delegation aufheben wollen. Das bedeutet, dass Sie die Möglichkeit verlieren, Rewards zu verdienen. -As a result, it’s recommended that you choose an Indexer wisely. +Es empfiehlt sich daher, einen Indexierer mit Bedacht auszuwählen. -![Delegation unbonding. Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) +![Aufhebung der Bindung der Delegation. Beachten Sie die 0,5 %ige Gebühr in der Benutzeroberfläche der Delegation sowie die 28-tägige Frist für die Aufhebung der Bindung.](/img/Delegation-Unbonding.png) #### Parameter der Delegation @@ -92,29 +92,29 @@ Um zu verstehen, wie man einen vertrauenswürdigen Indexer auswählt, müssen Si - Wenn die Rewardkürzung eines Indexers auf 100% eingestellt ist, erhalten Sie als Delegator 0 Rewards für die Indexierung. - Wenn er auf 80 % eingestellt ist, erhalten Sie als Delegator 20 %. -![Indexing Reward Cut. The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%.
The bottom one is giving Delegators ~83%.](/img/Indexing-Reward-Cut.png) +![Indexing Reward Cut. Der oberste Indexierer gibt den Delegatoren 90% der Rewards. Der mittlere gibt den Delegatoren 20%. Der untere gibt den Delegatoren ~83%.](/img/Indexing-Reward-Cut.png) - **Kürzung der Abfragegebühren** - Dies ist genau wie die Indizierung der Rewardkürzung, aber es gilt für die Renditen der Abfragegebühren, die der Indexer einnimmt. -- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. +- Es wird dringend empfohlen, [The Graph Discord](https://discord.gg/graphprotocol) zu erkunden, um festzustellen, welche Indexierer den besten sozialen und technischen Ruf haben. -- Many Indexers are active in Discord and will be happy to answer your questions. +- Viele Indexierer sind in Discord aktiv und beantworten gerne Ihre Fragen. ## Berechnung der erwarteten Rendite der Delegatoren -> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +> Berechnen Sie den ROI für Ihre Delegation [hier](https://thegraph.com/explorer/delegate?chain=arbitrum-one). -A Delegator must consider a variety of factors to determine a return: +Ein Delegator muss eine Vielzahl von Faktoren berücksichtigen, um eine Rendite zu bestimmen: -An Indexer's ability to use the delegated GRT available to them impacts their rewards. +Die Fähigkeit eines Indexierers, die ihm zur Verfügung stehende delegierte GRT zu nutzen, wirkt sich auf seine Rewards aus. -If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. +Wenn ein Indexierer nicht alle ihm zur Verfügung stehenden GRT einsetzt, verpasst er möglicherweise die Maximierung des Ertragspotenzials für sich und seine Delegatoren.
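The reward-cut parameters described above determine how much of an allocation's rewards reaches Delegators. A minimal sketch, with assumed pool sizes and reward amounts:

```python
# How an Indexer's indexing reward cut splits rewards, per the parameters
# above. Pool sizes and the reward amount are assumed example values.
def delegator_rewards(total_rewards: float, reward_cut: float,
                      my_delegation: float, total_delegated: float) -> float:
    """One Delegator's pro-rata share of the delegator reward pool."""
    delegator_pool = total_rewards * (1 - reward_cut)  # cut=0.80 -> 20% to Delegators
    return delegator_pool * my_delegation / total_delegated

# 1,000 GRT of rewards, the Indexer keeps 80%, and this Delegator holds
# 5% of the delegation pool (50k of 1M GRT delegated).
print(round(delegator_rewards(1_000, 0.80, 50_000, 1_000_000), 2))  # 10.0
```

A reward cut of 100%, as noted above, would leave the delegator pool empty regardless of how much is delegated.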
-Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window. However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. +Indexierer können eine Zuweisung schließen und Rewards jederzeit innerhalb des Zeitfensters von 1 bis 28 Tagen abholen. Werden die Rewards jedoch nicht umgehend abgeholt, kann der Gesamtbetrag der Rewards niedriger erscheinen, selbst wenn ein Teil der Rewards noch nicht abgeholt wurde. ### Berücksichtigung der Senkung der Abfrage- und Indizierungsgebühren -You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. +Sie sollten einen Indexierer wählen, der seine Abfrage- und Indizierungsgebühren transparent festlegt. Die Formel lautet: diff --git a/website/src/pages/de/resources/roles/delegating/undelegating.mdx b/website/src/pages/de/resources/roles/delegating/undelegating.mdx index 23da5ee0f456..116bfb6110f5 100644 --- a/website/src/pages/de/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/de/resources/roles/delegating/undelegating.mdx @@ -1,73 +1,73 @@ --- -title: Undelegating +title: Aufheben der Delegierung --- -Learn how to withdraw your delegated tokens through [Graph Explorer](https://thegraph.com/explorer) or [Arbiscan](https://arbiscan.io/). +Erfahren Sie, wie Sie Ihre delegierten Token über [Graph Explorer](https://thegraph.com/explorer) oder [Arbiscan](https://arbiscan.io/) abheben können. -> To avoid this in the future, it's recommended that you select an Indexer wisely. To learn how to select and Indexer, check out the Delegate section in Graph Explorer. +> Um dies in Zukunft zu vermeiden, empfiehlt es sich, einen Indexierer mit Bedacht auszuwählen. Wie Sie einen Indexierer auswählen, erfahren Sie im Abschnitt Delegieren im Graph Explorer. -## How to Withdraw Using Graph Explorer +## Wie man mit Graph Explorer abhebt ### Schritt für Schritt -1.
Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. +1. Besuchen Sie [Graph Explorer](https://thegraph.com/explorer). Bitte vergewissern Sie sich, dass Sie den Explorer und **nicht** Subgraph Studio verwenden. -2. Click on your profile. You can find it on the top right corner of the page. +2. Klicken Sie auf Ihr Profil. Sie finden es in der oberen rechten Ecke der Seite. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. + - Vergewissern Sie sich, dass Ihre Wallet verbunden ist. Wenn sie nicht verbunden ist, sehen Sie stattdessen die Schaltfläche „Verbinden“. -3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. +3. Sobald Sie sich in Ihrem Profil befinden, klicken Sie auf die Registerkarte „Delegieren“. Auf der Registerkarte „Delegieren“ können Sie die Liste der Indexierer einsehen, an die Sie delegiert haben. -4. Click on the Indexer from which you wish to withdraw your tokens. +4. Klicken Sie auf den Indexierer, von dem Sie Ihre Token abheben möchten. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. + - Achten Sie darauf, dass Sie sich den Indexierer notieren, denn Sie müssen ihn wiederfinden, wenn Sie etwas abheben wollen. -5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: +5. Wählen Sie die Option „Delegation aufheben“, indem Sie auf die drei Punkte neben dem Indexierer auf der rechten Seite klicken (siehe Abbildung unten): - ![Undelegate button](/img/undelegate-button.png) + ![Schaltfläche „Delegieren aufheben“](/img/undelegate-button.png) -6. After approximately [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 days), return to the Delegate section and locate the specific Indexer you undelegated from.
+6. Kehren Sie nach ca. [28 Epochen](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 Tagen) zum Abschnitt „Delegieren“ zurück und suchen Sie den Indexierer, von dem Sie die Delegierung aufgehoben haben. -7. Once you find the Indexer, click on the three dots next to them and proceed to withdraw all your tokens. +7. Sobald Sie den Indexierer gefunden haben, klicken Sie auf die drei Punkte daneben und fahren Sie fort, alle Ihre Token abzuheben. -## How to Withdraw Using Arbiscan +## Wie man mit Arbiscan abhebt -> This process is primarily useful if the UI in Graph Explorer experiences issues. +> Dieser Prozess ist vor allem dann sinnvoll, wenn die Benutzeroberfläche im Graph Explorer Probleme aufweist. ### Schritt für Schritt -1. Find your delegation transaction on Arbiscan. +1. Finden Sie Ihre Delegationstransaktion auf Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) + - Hier ist eine [Beispieltransaktion auf Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) -2. Navigate to "Transaction Action" where you can find the staking extension contract: +2. Navigieren Sie zu „Transaktionsaktion“, wo Sie den Staking-Erweiterungsvertrag finden können: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) + - [Dies ist der Staking-Erweiterungsvertrag für das oben genannte Beispiel](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) -3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) +3. Klicken Sie dann auf „Vertrag“. ![Registerkarte „Vertrag“ auf Arbiscan, zwischen NFT-Transfers und Ereignissen](/img/arbiscan-contract.png) -4. Scroll to the bottom and copy the Contract ABI.
There should be a small button next to it that allows you to copy everything. +4. Scrollen Sie nach unten und kopieren Sie die Vertrags-ABI. Es sollte eine kleine Schaltfläche daneben sein, mit der Sie alles kopieren können. -5. Click on your profile button in the top right corner of the page. If you haven't created an account yet, please do so. +5. Klicken Sie auf Ihr Profil in der oberen rechten Ecke der Seite. Wenn Sie noch kein Konto erstellt haben, tun Sie dies bitte. -6. Once you're in your profile, click on "Custom ABI”. +6. Sobald Sie in Ihrem Profil sind, klicken Sie auf „Benutzerdefinierte ABI“. -7. Paste the custom ABI you copied from the staking extension contract, and add the custom ABI for the address: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**sample address**) +7. Fügen Sie die benutzerdefinierte ABI ein, die Sie aus dem Staking-Erweiterungsvertrag kopiert haben, und fügen Sie die benutzerdefinierte ABI für die Adresse hinzu: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**Beispieladresse**) -8. Go back to the [staking extension contract](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Now, call the `unstake` function in the [Write as Proxy tab](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), which has been added thanks to the custom ABI, with the number of tokens that you delegated. +8. Gehen Sie zurück zum [Staking-Erweiterungsvertrag](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Rufen Sie nun die Funktion `unstake` in der Registerkarte [Als Proxy schreiben](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), die dank der benutzerdefinierten ABI hinzugefügt wurde, mit der Anzahl der Token auf, die Sie delegiert haben. -9. If you don't know how many tokens you delegated, you can call `getDelegation` on the Read Custom tab.
You will need to paste your address (delegator address) and the address of the Indexer that you delegated to, as shown in the following screenshot: +9. Wenn Sie nicht wissen, wie viele Token Sie delegiert haben, können Sie `getDelegation` auf der Registerkarte „Benutzerdefiniertes Lesen“ aufrufen. Sie müssen Ihre Adresse (Adresse des Delegators) und die Adresse des Indexierers, an den Sie delegiert haben, einfügen, wie im folgenden Screenshot gezeigt ist: - ![Both of the addresses needed](/img/get-delegate.png) + ![Beide Adressen benötigt](/img/get-delegate.png) - - This will return three numbers. The first number is the amount you can unstake. + - Sie erhalten dann drei Zahlen. Die erste Zahl ist der Betrag, den Sie unstaken (abheben) können. -10. After you have called `unstake`, you can withdraw after approximately 28 epochs (28 days) by calling the `withdraw` function. +10. Nachdem Sie `unstake` aufgerufen haben, können Sie nach ca. 28 Epochen (28 Tagen) durch Aufruf der Funktion `withdraw` abheben. -11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: +11. Sie können sehen, wie viel Sie zum Abheben zur Verfügung haben, indem Sie die Funktion `getWithdrawableDelegatedTokens` auf „Benutzerdefiniertes Lesen“ aufrufen und Ihr Delegations-Tupel übergeben. Siehe Bildschirmfoto unten: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Rufen Sie `getWithdrawableDelegatedTokens` auf, um die Anzahl der Token zu sehen, die abgehoben werden können](/img/withdraw-available.png) ## Zusätzliche Ressourcen -To delegate successfully, review the [delegating documentation](/resources/roles/delegating/delegating/) and check out the delegate section in Graph Explorer.
+Um erfolgreich zu delegieren, lesen Sie die [Delegierungsdokumentation](/resources/roles/delegating/delegating/) und schauen Sie sich den Abschnitt „Delegieren“ im Graph Explorer an. diff --git a/website/src/pages/de/resources/subgraph-studio-faq.mdx b/website/src/pages/de/resources/subgraph-studio-faq.mdx index a6e114083fc7..423b6b5059b3 100644 --- a/website/src/pages/de/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/de/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio-FAQs ## 1. Was ist Subgraph Studio? -[Subgraph Studio] (https://thegraph.com/studio/) ist eine App zur Erstellung, Verwaltung und Veröffentlichung von Subgraphen und API-Schlüsseln. +[Subgraph Studio](https://thegraph.com/studio/) ist eine dApp zur Erstellung, Verwaltung und Veröffentlichung von Subgraphen und API-Schlüsseln. ## 2. Wie erstelle ich einen API-Schlüssel? @@ -24,7 +24,7 @@ Ja, Subgraphen, die auf Arbitrum One veröffentlicht wurden, können auf eine ne Beachten Sie, dass Sie den Subgrafen nach der Übertragung nicht mehr in Studio sehen oder bearbeiten können. -## 6. Wie finde ich Abfrage-URLs für Subgraphen, wenn ich kein Entwickler des Subgraphen bin, den ich verwenden möchte? +## 6. Wie finde ich Abfrage-URLs für Subgraphen, wenn ich kein Programmierer des Subgraphen bin, den ich verwenden möchte? Die Abfrage-URL eines jeden Subgraphen finden Sie im Abschnitt Subgraph Details des Graph Explorers. Wenn Sie auf die Schaltfläche „Abfrage“ klicken, werden Sie zu einem Fenster weitergeleitet, in dem Sie die Abfrage-URL des gewünschten Subgraphen sehen können. Sie können dann den `` Platzhalter durch den API-Schlüssel ersetzen, den Sie in Subgraph Studio verwenden möchten.
diff --git a/website/src/pages/de/resources/tokenomics.mdx b/website/src/pages/de/resources/tokenomics.mdx index 3dd13eb7d06a..ee4485cce517 100644 --- a/website/src/pages/de/resources/tokenomics.mdx +++ b/website/src/pages/de/resources/tokenomics.mdx @@ -1,103 +1,103 @@ --- title: Tokenomics des The Graph Netzwerks sidebarTitle: Tokenomics -description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. +description: The Graph Network wird durch leistungsstarke Tokenomics incentiviert. So funktioniert GRT, der The Graph-eigene Arbeits-Utility-Token. --- ## Überblick -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph ist ein dezentrales Protokoll, das einen einfachen Zugang zu Blockchain-Daten ermöglicht. Es indiziert Blockchain-Daten ähnlich wie Google das Web indiziert. Wenn Sie eine Dapp verwendet haben, die Daten aus einem Subgraphen abruft, haben Sie wahrscheinlich mit The Graph interagiert. Heute nutzen Tausende von [beliebten Dapps](https://thegraph.com/explorer) im web3-Ökosystem The Graph. ## Besonderheiten -The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. +Das Modell von The Graph ähnelt einem B2B2C-Modell, wird aber von einem dezentralen Netzwerk angetrieben, in dem die Teilnehmer zusammenarbeiten, um den Endnutzern Daten im Austausch für GRT Rewards zur Verfügung zu stellen.
GRT ist der Utility-Token für The Graph. Er koordiniert und fördert die Interaktion zwischen Datenanbietern und Verbrauchern innerhalb des Netzwerks. -The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/subgraphs/billing/). +The Graph spielt eine wichtige Rolle dabei, Blockchain-Daten besser zugänglich zu machen, und unterstützt einen Marktplatz für deren Austausch. Wenn Sie mehr über das „Pay-for-what-you-need“-Modell von The Graph erfahren möchten, sehen Sie sich die [kostenlosen und wachstumsorientierten Pläne](/subgraphs/billing/) an. -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +- GRT-Token-Adresse im Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +- GRT-Token-Adresse auf Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## The Roles of Network Participants +## Die Rollen der Netzwerkteilnehmer -There are four primary network participants: +Es gibt vier primäre Netzwerkteilnehmer: -1. Delegators - Delegate GRT to Indexers & secure the network +1. Delegatoren - Delegieren Sie GRT an Indexierer und sichern Sie das Netzwerk -2. Kuratoren - Finden Sie die besten Untergraphen für Indexer +2. Kuratoren - Finden Sie die besten Subgraphen für Indexierer -3. Developers - Build & query subgraphs +3. Entwickler - Erstellen & Abfragen von Subgraphen 4.
Indexer - Das Rückgrat der Blockchain-Daten -Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). +Fischer und Schlichter tragen auch durch andere Beiträge zum Erfolg des Netzwerks bei und unterstützen die Arbeit der anderen Hauptbeteiligten. Weitere Informationen über die Rollen des Netzwerks finden Sie in [diesem Artikel](https://thegraph.com/blog/the-graph-grt-token-economics/). -![Tokenomics diagram](/img/updated-tokenomics-image.png) +![Tokenomics-Diagramm](/img/updated-tokenomics-image.png) -## Delegators (Passively earn GRT) +## Delegatoren (verdienen passiv GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexierer werden von Delegatoren mit GRT betraut, wodurch sich der Anteil des Indexierers an Subgraphen im Netzwerk erhöht. Im Gegenzug erhalten die Delegatoren einen prozentualen Anteil an allen Abfragegebühren und Rewards des Indexierers. Jeder Indexierer legt den Anteil, den er an die Delegatoren vergütet, selbständig fest, wodurch ein Wettbewerb zwischen den Indexierern entsteht, um Delegatoren anzuziehen. Die meisten Indexierer bieten zwischen 9-12% jährlich. -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. +Ein Beispiel: Wenn ein Delegator 15.000 GRT an einen Indexierer delegiert, der 10 % anbietet, würde der Delegator jährlich ~1.500 GRT an Rewards erhalten.
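The two worked numbers in the tokenomics text above, the 16x delegation ratio and the 15,000 GRT at 10% example, can be checked with a short sketch. The 10% figure stands for whatever effective annual rate a given Indexer passes on, which varies per Indexer.

```python
# Checks the two worked numbers from the text above: the 16x delegation
# ratio and the "15,000 GRT at 10%" example. The rate varies per Indexer.
DELEGATION_RATIO = 16

def delegation_capacity(self_stake_grt: int) -> int:
    """Maximum delegated GRT an Indexer can accept, per the 16x ratio."""
    return self_stake_grt * DELEGATION_RATIO

def annual_rewards(delegated_grt: float, effective_rate: float) -> float:
    """Approximate yearly rewards at the rate the Indexer passes on."""
    return delegated_grt * effective_rate

print(delegation_capacity(1_000_000))       # 16000000 GRT capacity
print(round(annual_rewards(15_000, 0.10)))  # 1500 GRT per year
```

Delegating beyond an Indexer's capacity dilutes rewards for all of its Delegators, as the delegation-capacity section above explains.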
-There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. +Es gibt eine Delegationssteuer von 0,5 %, die jedes Mal verbrannt wird, wenn ein Delegator GRT an das Netzwerk delegiert. Wenn ein Delegator beschließt, sein delegiertes GRT zurückzuziehen, muss er die 28-Epochen-Frist abwarten, in der die Bindung aufgehoben wird. Jede Epoche besteht aus 6.646 Blöcken, was bedeutet, dass 28 Epochen ungefähr 26 Tagen entsprechen. -If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. +Wenn Sie dies lesen, können Sie sofort Delegator werden, indem Sie auf die [Netzwerkteilnehmerseite](https://thegraph.com/explorer/participants/indexers) gehen und GRT an einen Indexierer Ihrer Wahl delegieren. -## Curators (Earn GRT) +## Kuratoren (GRT verdienen) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Kuratoren identifizieren qualitativ hochwertige Subgraphen und „kuratieren“ sie (d.h. signalisieren GRT auf ihnen), um Kurationsanteile zu verdienen, die einen Prozentsatz aller zukünftigen Abfragegebühren garantieren, die durch den Subgraphen generiert werden.
Obwohl jeder unabhängige Netzwerkteilnehmer ein Kurator sein kann, gehören Entwickler von Subgraphen in der Regel zu den ersten Kuratoren für ihre eigenen Subgraphen, da sie sicherstellen wollen, dass ihr Subgraph indiziert wird. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Die Entwickler von Subgraphen werden ermutigt, ihren Subgraphen mit mindestens 3.000 GRT zu kuratieren. Diese Zahl kann jedoch von der Netzwerkaktivität und der Beteiligung der Community beeinflusst werden. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Kuratoren zahlen eine Kurationssteuer von 1%, wenn sie einen neuen Subgraphen kuratieren. Diese Kurationssteuer wird verbrannt, wodurch das Angebot an GRT sinkt. -## Developers +## Entwickler -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Entwickler erstellen Subgraphen und fragen sie ab, um Blockchain-Daten abzurufen. Da Subgraphen quelloffen sind, können Entwickler bestehende Subgraphen abfragen, um Blockchain-Daten in ihre Dapps zu laden. Entwickler zahlen für ihre Abfragen in GRT, das an die Netzwerkteilnehmer verteilt wird. -### Erstellung eines Untergraphen +### Erstellen eines Subgraphen -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Entwickler können [einen Subgraphen erstellen](/developing/creating-a-subgraph/), um Daten auf der Blockchain zu indizieren. Subgraphen sind Anweisungen für Indexierer darüber, welche Daten an Verbraucher geliefert werden sollen. 
-Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Sobald Entwickler ihren Subgraphen gebaut und getestet haben, können sie ihn im dezentralen Netzwerk von The Graph [veröffentlichen](/subgraphs/developing/publishing/publishing-a-subgraph/). -### Abfrage eines vorhandenen Untergraphen +### Abfrage vorhandener Subgraphen -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Sobald ein Subgraph im dezentralen Netzwerk von The Graph [veröffentlicht](/subgraphs/developing/publishing/publishing-a-subgraph/) wurde, kann jeder einen API-Schlüssel erstellen, GRT zu seinem Guthaben hinzufügen und den Subgraphen abfragen. -Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. +Subgraphen werden [mit GraphQL abgefragt](/subgraphs/querying/introduction/), und die Abfragegebühren werden mit GRT in [Subgraph Studio](https://thegraph.com/studio/) bezahlt. Die Abfragegebühren werden an die Netzwerkteilnehmer auf der Grundlage ihrer Beiträge zum Protokoll verteilt. -1% of the query fees paid to the network are burned. +1 % der an das Netzwerk gezahlten Abfragegebühren werden verbrannt. -## Indexers (Earn GRT) +## Indexierer (GRT verdienen) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexierer sind das Rückgrat von The Graph. Sie betreiben unabhängige Hardware und Software, die das dezentrale Netzwerk von The Graph antreiben.
Indexierer liefern den Verbrauchern Daten auf der Grundlage von Anweisungen aus Subgraphen. -Indexers can earn GRT rewards in two ways: +Indexierer können GRT-Rewards auf zwei Arten verdienen: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Abfragegebühren**: GRT, die von Entwicklern oder Nutzern für die Abfrage von Subgraph-Daten gezahlt werden. Abfragegebühren werden gemäß der exponentiellen Rabattfunktion direkt an Indexierer verteilt (siehe GIP [hier](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing Rewards**: Die jährliche Ausgabe von 3% wird an Indexierer verteilt, basierend auf der Anzahl der Subgraphen, die sie indexieren. Diese Rewards sind ein Anreiz für Indexierer, Subgraphen zu indizieren, gelegentlich schon bevor Abfragegebühren anfallen, um Proofs of Indexing (POIs) zu sammeln und einzureichen, die bestätigen, dass sie Daten korrekt indiziert haben. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Jedem Subgraphen wird ein Teil der gesamten Netzwerk-Token-Ausgabe zugeteilt, basierend auf der Höhe des Kurationssignals des Subgraphen. Dieser Betrag wird dann als Reward an die Indexierer verteilt, basierend auf ihrem zugewiesenen Anteil an dem Subgraphen.
-In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. +Um einen Indexierungsknoten betreiben zu können, müssen Indexierer mindestens 100.000 GRT selbst in das Netzwerk einbringen. Für Indexierer besteht ein Anreiz, GRT im Verhältnis zur Anzahl der von ihnen bearbeiteten Abfragen selbst einzusetzen. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexierer können ihre GRT-Zuteilungen auf Subgraphen erhöhen, indem sie GRT-Delegationen von Delegatoren akzeptieren, und sie können bis zum 16-fachen ihres ursprünglichen Eigenanteils akzeptieren. Wenn ein Indexierer „überdelegiert“ wird (d.h. mehr als das 16-fache seines ursprünglichen Eigenanteils), kann er die zusätzlichen GRT von Delegatoren nicht nutzen, bis er seinen Eigenanteil im Netzwerk erhöht. -The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. +Die Höhe der Rewards, die ein Indexierer erhält, kann je nach Eigenanteil des Indexierers, akzeptierter Delegation, Qualität der Dienstleistung und vielen weiteren Faktoren variieren. -## Token Supply: Burning & Issuance +## Token-Versorgung: Burning & Emission -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+Das anfängliche Token-Angebot beträgt 10 Milliarden GRT, mit einem Ziel von 3 % Neuemissionen pro Jahr, um Indexierer für die Zuweisung von Anteilen an Subgraphen zu belohnen. Das bedeutet, dass das Gesamtangebot an GRT-Token jedes Jahr um 3 % steigen wird, da neue Token an Indexierer für ihren Beitrag zum Netzwerk ausgegeben werden. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph ist mit mehreren Brennmechanismen ausgestattet, um die Ausgabe neuer Token auszugleichen. Ungefähr 1 % des GRT-Angebots wird jährlich durch verschiedene Aktivitäten im Netzwerk verbrannt, und diese Zahl steigt, da die Netzwerkaktivität weiter zunimmt. Zu diesen Burning-Aktivitäten gehören eine Delegationssteuer von 0,5 %, wenn ein Delegator GRT an einen Indexierer delegiert, eine Kurationssteuer von 1 %, wenn Kuratoren ein Signal auf einem Subgraphen geben, und 1 % der Abfragegebühren für Blockchain-Daten. -![Total burned GRT](/img/total-burned-grt.jpeg) +![Insgesamt verbrannte GRT](/img/total-burned-grt.jpeg) -In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability.
+Zusätzlich zu diesen regelmäßig stattfindenden Verbrennungsaktivitäten verfügt der GRT-Token auch über einen Slashing-Mechanismus, um böswilliges oder unverantwortliches Verhalten von Indexierern zu bestrafen. Wenn ein Indexierer geslashed wird, werden 50% seiner Indexing Rewards für die Epoche verbrannt (während die andere Hälfte an den Fischer geht), und sein Eigenanteil wird um 2,5% gekürzt, wobei die Hälfte dieses Betrags verbrannt wird. Dies trägt dazu bei, dass Indexierer einen starken Anreiz haben, im besten Interesse des Netzwerks zu handeln und zu dessen Sicherheit und Stabilität beizutragen. -## Improving the Protocol +## Verbesserung des Protokolls -The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). +The Graph Network entwickelt sich ständig weiter, und es werden laufend Verbesserungen an der wirtschaftlichen Gestaltung des Protokolls vorgenommen, um allen Netzwerkteilnehmern die bestmögliche Erfahrung zu bieten. Das The Graph Council überwacht die Protokolländerungen, und die Mitglieder der Community sind aufgerufen, sich daran zu beteiligen. Beteiligen Sie sich an der Verbesserung des Protokolls im [The Graph Forum](https://forum.thegraph.com/). diff --git a/website/src/pages/de/sps/introduction.mdx b/website/src/pages/de/sps/introduction.mdx index 6f1270848072..396c53077fd1 100644 --- a/website/src/pages/de/sps/introduction.mdx +++ b/website/src/pages/de/sps/introduction.mdx @@ -3,28 +3,29 @@ title: Einführung in Substreams-Powered Subgraphen sidebarTitle: Einführung --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Steigern Sie die Effizienz und Skalierbarkeit Ihres Subgraphen, indem Sie [Substreams](/substreams/introduction/) verwenden, um vorindizierte Blockchain-Daten zu streamen. ## Überblick -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Verwenden Sie ein Substreams-Paket (`.spkg`) als Datenquelle, um Ihrem Subgraph Zugang zu einem Strom von vorindizierten Blockchain-Daten zu geben. Dies ermöglicht eine effizientere und skalierbarere Datenverarbeitung, insbesondere bei großen oder komplexen Blockchain-Netzwerken. ### Besonderheiten Es gibt zwei Methoden zur Aktivierung dieser Technologie: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Verwendung von Substreams [triggers](/sps/triggers/)**: Nutzen Sie ein beliebiges Substreams-Modul, indem Sie das Protobuf-Modell über einen Subgraph-Handler importieren und Ihre gesamte Logik in einen Subgraph verschieben. Diese Methode erstellt die Subgraph-Entitäten direkt im Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Unter Verwendung von [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: Wenn Sie einen größeren Teil der Logik in Substreams schreiben, können Sie die Ausgabe des Moduls direkt in [graph-node](/indexing/tooling/graph-node/) verwenden. 
In graph-node können Sie die Substreams-Daten verwenden, um Ihre Subgraph-Entitäten zu erstellen. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +Sie können wählen, wo Sie Ihre Logik platzieren möchten, entweder im Subgraph oder in Substreams. Überlegen Sie jedoch, was mit Ihren Datenanforderungen übereinstimmt, da Substreams ein parallelisiertes Modell hat und Trigger im Graph Node linear konsumiert werden. ### Zusätzliche Ressourcen -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +Unter den folgenden Links finden Sie Anleitungen zur Verwendung von Tools zur Codegenerierung, mit denen Sie schnell Ihr erstes durchgängiges Substreams-Projekt erstellen können: - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/de/sps/sps-faq.mdx b/website/src/pages/de/sps/sps-faq.mdx index 72005f6cfc09..705188578529 100644 --- a/website/src/pages/de/sps/sps-faq.mdx +++ b/website/src/pages/de/sps/sps-faq.mdx @@ -5,17 +5,17 @@ sidebarTitle: FAQ ## Was sind Substreams? -Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+Substreams ist eine außergewöhnlich leistungsstarke Verarbeitungsmaschine, die umfangreiche Blockchain-Datenströme verarbeiten kann. Sie ermöglicht es Ihnen, Blockchain-Daten für eine schnelle und nahtlose Verarbeitung durch Endbenutzeranwendungen zu verfeinern und zu gestalten. -Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere. +Genauer gesagt handelt es sich um eine Blockchain-agnostische, parallelisierte und Streaming-first-Engine, die als Blockchain-Datenumwandlungsschicht dient. Sie wird von [Firehose](https://firehose.streamingfast.io/) angetrieben und ermöglicht es Entwicklern, Rust-Module zu schreiben, auf Community-Modulen aufzubauen, eine extrem leistungsstarke Indizierung bereitzustellen und ihre Daten überall [zu versenken](/substreams/developing/sinks/). -Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. +Substreams wird von [StreamingFast](https://www.streamingfast.io/) entwickelt. Besuchen Sie die [Substreams-Dokumentation](/substreams/introduction/), um mehr über Substreams zu erfahren. ## Was sind Substreams-basierte Subgraphen? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. 
+[Substreams-basierte Subgraphen](/sps/introduction/) kombinieren die Leistungsfähigkeit von Substreams mit der Abfragefähigkeit von Subgraphen. Bei der Veröffentlichung eines Substreams-basierten Subgraphen können die von den Substreams-Transformationen erzeugten Daten [Entitätsänderungen ausgeben](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs), die mit Subgraph-Entitäten kompatibel sind. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +Wenn Sie bereits mit der Entwicklung von Subgraphen vertraut sind, beachten Sie, dass Substreams-basierte Subgraphen so abgefragt werden können, als ob sie von der AssemblyScript-Transformationsschicht erzeugt worden wären, mit allen Vorteilen von Subgraphen, wie der Bereitstellung einer dynamischen und flexiblen GraphQL-API. ## Wie unterscheiden sich Substreams-basierte Subgraphen von Subgraphen? @@ -25,7 +25,7 @@ Im Gegensatz dazu haben Substreams-basierte Subgraphen eine einzige Datenquelle, ## Was sind die Vorteile der Verwendung von Substreams-basierten Subgraphen? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-basierte Subgraphen kombinieren alle Vorteile von Substreams mit der Abfragefähigkeit von Subgraphen. Sie bieten The Graph eine bessere Zusammensetzbarkeit und eine leistungsstarke Indizierung. Sie ermöglichen auch neue Datenanwendungsfälle; sobald Sie beispielsweise Ihren Substreams-basierten Subgraphen erstellt haben, können Sie Ihre [Substreams-Module](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) für die Ausgabe an verschiedene [Senken](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) wie PostgreSQL, MongoDB und Kafka wiederverwenden. ## Was sind die Vorteile von Substreams? @@ -63,11 +63,11 @@ Die Verwendung von Firehose bietet viele Vorteile, darunter: - Nutzung von Flat Files: Blockchain-Daten werden in Flat Files extrahiert, der billigsten und optimalsten verfügbaren Rechenressource. -## Wo erhalten Entwickler weitere Informationen über Substreams-basieren Subgraphen und Substreams? +## Wo erhalten Entwickler weitere Informationen über Substreams-basierte Subgraphen und Substreams? In der [Substreams-Dokumentation](/substreams/introduction/) erfahren Sie, wie Sie Substreams-Module erstellen können. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +Die [Dokumentation zu Substreams-basierten Subgraphen](/sps/introduction/) zeigt Ihnen, wie Sie diese für die Bereitstellung in The Graph verpacken können. Das [neueste Substreams Codegen-Tool] (https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) ermöglicht es Ihnen, ein Substreams-Projekt ohne jeglichen Code zu booten. @@ -75,7 +75,7 @@ Das [neueste Substreams Codegen-Tool] (https://streamingfastio.medium.com/substr Rust-Module sind das Äquivalent zu den AssemblyScript-Mappern in Subgraphen.
Sie werden auf ähnliche Weise in WASM kompiliert, aber das Programmiermodell ermöglicht eine parallele Ausführung. Sie definieren die Art der Transformationen und Aggregationen, die Sie auf die Blockchain-Rohdaten anwenden möchten. -See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. +Weitere Informationen finden Sie in der [Moduldokumentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules). ## Was macht Substreams kompositionsfähig? @@ -85,7 +85,7 @@ Als Datenbeispiel kann Alice ein DEX-Preismodul erstellen, Bob kann damit einen ## Wie können Sie einen Substreams-basierten Subgraphen erstellen und einsetzen? -After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +Nach der [Definition](/sps/introduction/) eines Substreams-basierten Subgraphen können Sie die Graph CLI verwenden, um ihn in [Subgraph Studio](https://thegraph.com/studio/) einzusetzen. ## Wo finde ich Datenbeispiele für Substreams und Substreams-basierte Subgraphen? diff --git a/website/src/pages/de/sps/triggers.mdx b/website/src/pages/de/sps/triggers.mdx index 5bf7350c6b5f..792dee351596 100644 --- a/website/src/pages/de/sps/triggers.mdx +++ b/website/src/pages/de/sps/triggers.mdx @@ -2,15 +2,15 @@ title: Trigger für Substreams --- -Use Custom Triggers and enable the full use GraphQL. +Verwenden Sie Custom Triggers und aktivieren Sie die volle Nutzung von GraphQL. ## Überblick -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Mit benutzerdefinierten Triggern können Sie Daten direkt in Ihre Subgraph-Mappings-Datei und Entitäten senden, die Tabellen und Feldern ähneln. So können Sie die GraphQL-Schicht vollständig nutzen.
-By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +Durch den Import der Protobuf-Definitionen, die von Ihrem Substreams-Modul ausgegeben werden, können Sie diese Daten in Ihrem Subgraph-Handler empfangen und verarbeiten. Dies gewährleistet eine effiziente und schlanke Datenverwaltung innerhalb des Subgraph-Frameworks. -### Defining `handleTransactions` +### Definieren von `handleTransactions` Der folgende Code veranschaulicht, wie eine Funktion `handleTransactions` in einem Subgraph-Handler definiert wird. Diese Funktion empfängt rohe Substream-Bytes als Parameter und dekodiert sie in ein `Transactions`-Objekt. Für jede Transaktion wird eine neue Subgraph-Entität erstellt. @@ -34,14 +34,14 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +Das sehen Sie in der Datei `mappings.ts`: 1. Die Bytes, die die Substreams enthalten, werden in das generierte `Transactions`-Objekt dekodiert. Dieses Objekt wird wie jedes andere AssemblyScript-Objekt verwendet 2. Looping über die Transaktionen 3. Erstellen einer neuen Subgraph-Entität für jede Transaktion -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +Ein ausführliches Datenbeispiel für einen Trigger-basierten Subgraphen finden Sie [hier](/sps/tutorial/). ### Zusätzliche Ressourcen -To scaffold your first project in the Development Container, check out one of the [How-To Guide](/substreams/developing/dev-container/). +Um Ihr erstes Projekt im Entwicklungscontainer zu erstellen, lesen Sie einen der [Schritt-für-Schritt-Guides](/substreams/developing/dev-container/).
diff --git a/website/src/pages/de/sps/tutorial.mdx b/website/src/pages/de/sps/tutorial.mdx index 395bb0433bd7..db9bb0793890 100644 --- a/website/src/pages/de/sps/tutorial.mdx +++ b/website/src/pages/de/sps/tutorial.mdx @@ -3,13 +3,13 @@ title: 'Tutorial: Einrichten eines Substreams-basierten Subgraphen auf Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Erfolgreiche Einrichtung eines Trigger-basierten Substreams-powered Subgraphen für ein Solana SPL-Token. ## Los geht’s For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) -### Prerequisites +### Voraussetzungen Bevor Sie beginnen, stellen Sie Folgendes sicher: @@ -65,25 +65,25 @@ Sie erzeugen ein `subgraph.yaml`-Manifest, das das Substreams-Paket als Datenque ```yaml --- dataSources: - - art: substreams - Name: mein_Projekt_sol - Netzwerk: solana-mainnet-beta - Quelle: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: package: - moduleName: map_spl_transfers # Modul definiert in der substreams.yaml - Datei: ./mein-projekt-sol-v0.1.0.spkg + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 - Art: substreams/graph-entities - Datei: ./src/mappings.ts + apiVersion: 0.0.9 + kind: substreams/graph-entities + file: ./src/mappings.ts handler: handleTriggers ``` ### Schritt 3: Definieren Sie Entitäten in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Definieren Sie die Felder, die Sie in Ihren Subgraph-Entitäten speichern wollen, indem Sie die Datei `schema.graphql` aktualisieren.
-Here is an example: +Hier ist ein Beispiel: ```graphql type MyTransfer @entity { @@ -99,9 +99,9 @@ Dieses Schema definiert eine `MyTransfer`-Entität mit Feldern wie `id`, `amount ### Schritt 4: Umgang mit Substreams Daten in `mappings.ts` -With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. +Mit den erzeugten Protobuf-Objekten können Sie nun die dekodierten Substreams-Daten in Ihrer Datei `mappings.ts` im Verzeichnis `./src` verarbeiten. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +Das folgende Beispiel zeigt, wie die nicht abgeleiteten Überweisungen, die mit der Orca-Konto-ID verbunden sind, in die Subgraph-Entitäten extrahiert werden: ```ts import { Protobuf } from 'as-proto/assembly' @@ -142,11 +142,11 @@ npm run protogen Dieser Befehl konvertiert die Protobuf-Definitionen in AssemblyScript, so dass Sie sie im Handler des Subgraphen verwenden können. -### Conclusion +### Schlussfolgerung -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Herzlichen Glückwunsch! Sie haben erfolgreich einen Trigger-basierten Substreams-powered Subgraph für ein Solana SPL-Token eingerichtet. Im nächsten Schritt können Sie Ihr Schema, Ihre Mappings und Module an Ihren spezifischen Anwendungsfall anpassen.
-### Video Tutorial +### Video-Anleitung diff --git a/website/src/pages/de/subgraphs/_meta-titles.json b/website/src/pages/de/subgraphs/_meta-titles.json index 3fd405eed29a..1338cbaa797d 100644 --- a/website/src/pages/de/subgraphs/_meta-titles.json +++ b/website/src/pages/de/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", - "guides": "How-to Guides", - "best-practices": "Best Practices" + "querying": "Abfragen", + "developing": "Entwicklung", + "guides": "Anleitungen", + "best-practices": "Bewährte Praktiken" } diff --git a/website/src/pages/de/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/de/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..109a388ddd19 100644 --- a/website/src/pages/de/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/de/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,25 +1,25 @@ --- -title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +title: Best Practice 4 für Subgraphen - Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von eth_calls +sidebarTitle: Vermeidung von eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` sind Aufrufe, die von einem Subgraphen zu einem Ethereum-Knoten gemacht werden können. Diese Aufrufe benötigen eine beträchtliche Menge an Zeit, um Daten zurückzugeben, was die Indexierung verlangsamt. Entwerfen Sie nach Möglichkeit Smart Contracts so, dass sie alle benötigten Daten emittieren, damit Sie keine `eth_calls` verwenden müssen.
-## Why Avoiding `eth_calls` Is a Best Practice +## Warum die Vermeidung von `eth_calls` eine gute Praxis ist -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphen sind für die Indizierung von Ereignisdaten optimiert, die von Smart Contracts emittiert werden. Ein Subgraph kann auch die Daten indizieren, die von einem `eth_call` stammen. Dies kann jedoch die Indizierung von Subgraphen erheblich verlangsamen, da `eth_calls` externe Aufrufe an Smart Contracts erfordern. Die Reaktionsfähigkeit dieser Aufrufe hängt nicht vom Subgraphen ab, sondern von der Konnektivität und Reaktionsfähigkeit des Ethereum-Knotens, der abgefragt wird. Indem wir eth_calls in unseren Subgraphen minimieren oder eliminieren, können wir unsere Indizierungsgeschwindigkeit erheblich verbessern. -### What Does an eth_call Look Like? +### Wie sieht ein eth_call aus? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` sind häufig erforderlich, wenn die für einen Subgraphen benötigten Daten nicht über emittierte Ereignisse verfügbar sind.
Betrachten wir zum Beispiel ein Szenario, in dem ein Subgraph feststellen muss, ob ERC20-Token Teil eines bestimmten Pools sind, der Vertrag aber nur ein einfaches `Transfer`-Ereignis aussendet und kein Ereignis, das die benötigten Daten enthält: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); ``` -Suppose the tokens’ pool membership is determined by a state variable named `getPoolInfo`. In this case, we would need to use an `eth_call` to query this data: +Angenommen, die Zugehörigkeit der Token zum Pool wird durch eine Zustandsvariable namens `getPoolInfo` bestimmt. In diesem Fall müssten wir einen `eth_call` verwenden, um diese Daten abzufragen: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -27,34 +27,36 @@ import { ERC20, Transfer } from '../generated/ERC20/ERC20' import { TokenTransaction } from '../generated/schema' export function handleTransfer(event: Transfer): void { - let transaction = new TokenTransaction(event.transaction.hash.toHex()) + let transaction = new TokenTransaction(event.transaction.hash.toHex()) - // Bind the ERC20 contract instance to the given address: - let instance = ERC20.bind(event.address) + // Binde die ERC20-Vertragsinstanz an die angegebene Adresse: + let instance = ERC20.bind(event.address) - // Retrieve pool information via eth_call - let poolInfo = instance.getPoolInfo(event.params.to) + // Abrufen von Pool-Informationen über eth_call + let poolInfo = instance.getPoolInfo(event.params.to) - transaction.pool = poolInfo.toHexString() - transaction.from = event.params.from.toHexString() - transaction.to = event.params.to.toHexString() - transaction.value = event.params.value + transaction.pool = poolInfo.toHexString() + transaction.from = event.params.from.toHexString() + transaction.to = event.params.to.toHexString() + transaction.value = event.params.value - transaction.save() + transaction.save() } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +Dies ist funktional, aber nicht ideal, da es die Indizierung unseres Subgraphen verlangsamt. -## How to Eliminate `eth_calls` +## Wie man `eth_calls` beseitigt -Ideally, the smart contract should be updated to emit all necessary data within events. For instance, modifying the smart contract to include pool information in the event could eliminate the need for `eth_calls`: +Idealerweise sollte der Smart Contract so aktualisiert werden, dass er alle erforderlichen Daten in Ereignissen ausgibt. Wenn der Smart Contract beispielsweise so geändert wird, dass er Pool-Informationen in das Ereignis aufnimmt, könnte die Notwendigkeit von `eth_calls` entfallen: ``` event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +Mit dieser Aktualisierung kann der Subgraph die benötigten Daten ohne externe Aufrufe direkt indizieren: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -73,17 +75,17 @@ export function handleTransferWithPool(event: TransferWithPool): void { } ``` -This is much more performant as it has eliminated the need for `eth_calls`.
+Dies ist sehr viel leistungsfähiger, da es die Notwendigkeit von `eth_calls` beseitigt hat. -## How to Optimize `eth_calls` +## Wie man `eth_calls` optimiert -If modifying the smart contract is not possible and `eth_calls` are required, read “[Improve Subgraph Indexing Performance Easily: Reduce eth_calls](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)” by Simon Emanuel Schmid to learn various strategies on how to optimize `eth_calls`. +Wenn eine Änderung des Smart Contracts nicht möglich ist und `eth_calls` benötigt werden, lesen Sie „[Verbessern Sie die Leistung der Subgraph-Indizierung ganz einfach: Reduzieren Sie eth_calls](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)“ von Simon Emanuel Schmid, um verschiedene Strategien zur Optimierung von `eth_calls` zu lernen. -## Reducing the Runtime Overhead of `eth_calls` +## Verringerung des Laufzeit-Overheads von `eth_calls` -For the `eth_calls` that can not be eliminated, the runtime overhead they introduce can be minimized by declaring them in the manifest. When `graph-node` processes a block it performs all declared `eth_calls` in parallel before handlers are run. Calls that are not declared are executed sequentially when handlers run. The runtime improvement comes from performing calls in parallel rather than sequentially - that helps reduce the total time spent in calls but does not eliminate it completely. +Für die `eth_calls`, die nicht eliminiert werden können, kann der Laufzeit-Overhead, den sie verursachen, minimiert werden, indem sie im Manifest deklariert werden. Wenn `graph-node` einen Block verarbeitet, führt er alle deklarierten `eth_calls` parallel aus, bevor die Handler ausgeführt werden. Aufrufe, die nicht deklariert sind, werden sequentiell ausgeführt, wenn die Handler laufen. 
Die Laufzeitverbesserung kommt dadurch zustande, dass die Aufrufe parallel und nicht sequentiell ausgeführt werden - das trägt dazu bei, die Gesamtzeit für die Aufrufe zu reduzieren, beseitigt sie aber nicht vollständig. -Currently, `eth_calls` can only be declared for event handlers. In the manifest, write +Derzeit können `eth_calls` nur für Event-Handler deklariert werden. Schreiben Sie im Manifest ```yaml event: TransferWithPool(address indexed, address indexed, uint256, bytes32 indexed) @@ -92,26 +94,26 @@ calls: ERC20.poolInfo: ERC20[event.address].getPoolInfo(event.params.to) ``` -The portion highlighted in yellow is the call declaration. The part before the colon is simply a text label that is only used for error messages. The part after the colon has the form `Contract[address].function(params)`. Permissible values for address and params are `event.address` and `event.params.`. +Der gelb hervorgehobene Teil ist die Aufrufdeklaration. Der Teil vor dem Doppelpunkt ist einfach eine Textbeschriftung, die nur für Fehlermeldungen verwendet wird. Der Teil nach dem Doppelpunkt hat die Form `Contract[address].function(params)`. Zulässige Werte für Adresse und Parameter sind `event.address` und `event.params.`. -The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. +Der Handler selbst greift auf das Ergebnis dieses `eth_call` genau wie im vorherigen Abschnitt zu, indem er sich an den Vertrag bindet und den Aufruf tätigt. graph-node speichert die Ergebnisse der deklarierten `eth_calls` in einem In-Memory-Cache, und der Aufruf aus dem Handler ruft das Ergebnis aus diesem Cache ab, anstatt einen tatsächlichen RPC-Aufruf zu tätigen.
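Der Laufzeitgewinn durch parallele statt sequentielle Aufrufe lässt sich konzeptionell skizzieren. Das folgende, von graph-node unabhängige Beispiel mit hypothetischen Namen (`fakeEthCall`, `compareCallStrategies`) simuliert lediglich Aufruflatenzen und ist kein Mapping-Code:

```typescript
// Konzeptionelle Skizze (kein graph-node-Code, hypothetische Namen):
// deklarierte eth_calls laufen parallel, nicht deklarierte sequentiell.
function fakeEthCall(ms: number): Promise<number> {
  // simuliert einen RPC-Aufruf mit fester Latenz
  return new Promise((resolve) => setTimeout(() => resolve(ms), ms))
}

// Liefert [sequentielle Dauer, parallele Dauer] in Millisekunden.
async function compareCallStrategies(delays: number[]): Promise<[number, number]> {
  let t0 = Date.now()
  for (const d of delays) {
    await fakeEthCall(d) // sequentiell: Gesamtzeit ≈ Summe der Latenzen
  }
  const sequential = Date.now() - t0

  t0 = Date.now()
  await Promise.all(delays.map(fakeEthCall)) // parallel: Gesamtzeit ≈ Maximum
  const parallel = Date.now() - t0

  return [sequential, parallel]
}

compareCallStrategies([40, 40, 40]).then(([seq, par]) => {
  console.log(par < seq) // parallel ist schneller, aber nicht kostenlos
})
```

Die parallele Variante reduziert die Gesamtzeit auf ungefähr das Maximum der Einzellatenzen, genau wie im Text beschrieben: die Aufrufe verschwinden nicht, sie überlappen nur.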
-Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Hinweis: Deklarierte eth_calls können nur in Subgraphen mit specVersion >= 1.2.0 verwendet werden. -## Conclusion +## Schlussfolgerung -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +Sie können die Indizierungsleistung erheblich verbessern, indem Sie die `eth_calls` in Ihren Subgraphen minimieren oder eliminieren. -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierung und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6.
[Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/de/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..49fb2c1f8ff8 100644 --- a/website/src/pages/de/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/de/subgraphs/best-practices/derivedfrom.mdx @@ -1,29 +1,29 @@ --- -title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +title: Best Practice 2 für Subgraphen - Verbessern Sie die Indizierung und die Reaktionsfähigkeit bei Abfragen durch die Verwendung von @derivedFrom +sidebarTitle: Arrays mit @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in Ihrem Schema können die Leistung eines Subgraphen stark verlangsamen, wenn sie über Tausende von Einträgen hinauswachsen. Wenn möglich, sollte bei der Verwendung von Arrays die Direktive `@derivedFrom` verwendet werden, da sie die Bildung großer Arrays verhindert, Handler vereinfacht und die Größe einzelner Entitäten reduziert, was die Indizierungsgeschwindigkeit und die Abfrageleistung erheblich verbessert. -## How to Use the `@derivedFrom` Directive +## Verwendung der `@derivedFrom`-Direktive -You just need to add a `@derivedFrom` directive after your array in your schema. Like this: +Sie müssen nur eine `@derivedFrom`-Direktive nach Ihrem Array in Ihrem Schema hinzufügen. Zum Beispiel so: ```graphql comments: [Comment!]! 
@derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` schafft effiziente Eins-zu-Viel-Beziehungen, die es einer Entität ermöglichen, sich dynamisch mit mehreren verwandten Entitäten auf der Grundlage eines Feldes in der verwandten Entität zu verbinden. Durch diesen Ansatz entfällt die Notwendigkeit, auf beiden Seiten der Beziehung doppelte Daten zu speichern, wodurch der Subgraph effizienter wird. -### Example Use Case for `@derivedFrom` +### Beispiel für die Verwendung von `@derivedFrom` -An example of a dynamically growing array is a blogging platform where a “Post” can have many “Comments”. +Ein Beispiel für ein dynamisch wachsendes Array ist eine Blogging-Plattform, auf der ein „Post“ viele „Kommentare“ haben kann. -Let’s start with our two entities, `Post` and `Comment` +Beginnen wir mit unseren beiden Entitäten, `Post` und `Comment`. -Without optimization, you could implement it like this with an array: +Ohne Optimierung könnte man es so mit einem Array implementieren: ```graphql type Post @entity { @@ -39,9 +39,9 @@ type Comment @entity { } ``` -Arrays like these will effectively store extra Comments data on the Post side of the relationship. +Arrays wie diese speichern effektiv zusätzliche Comments-Daten auf der Post-Seite der Beziehung. -Here’s what an optimized version looks like using `@derivedFrom`: +So sieht eine optimierte Version aus, die `@derivedFrom` verwendet: ```graphql type Post @entity { @@ -58,32 +58,32 @@ type Comment @entity { } ``` -Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship.
Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. +Durch Hinzufügen der Direktive `@derivedFrom` speichert dieses Schema die „Comments“ nur auf der „Comments“-Seite der Beziehung und nicht auf der „Post“-Seite der Beziehung. Arrays werden in einzelnen Zeilen gespeichert, wodurch sie sich erheblich ausdehnen können. Bei unbegrenztem Wachstum können sie dadurch besonders groß werden. -This will not only make our subgraph more efficient, but it will also unlock three features: +Dadurch wird unser Subgraph nicht nur effizienter, sondern es werden auch drei Funktionen freigeschaltet: -1. We can query the `Post` and see all of its comments. +1. Wir können den `Post` abfragen und alle seine Kommentare sehen. -2. We can do a reverse lookup and query any `Comment` and see which post it comes from. +2. Wir können eine Rückwärtssuche durchführen und jeden `Comment` abfragen und sehen, von welchem Beitrag er stammt. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. Mit [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) können wir direkt auf Daten aus virtuellen Beziehungen in unseren Subgraphen-Mappings zugreifen und diese bearbeiten. -## Conclusion +## Schlussfolgerung -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Verwenden Sie die Direktive `@derivedFrom` in Subgraphen, um dynamisch wachsende Arrays effektiv zu verwalten und die Effizienz der Indizierung und des Datenabrufs zu verbessern.
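Die ersten beiden der oben genannten Abfragen könnten zum Beispiel so aussehen. Die IDs („1“, „1-0“) und die Felder `title` und `content` sind hypothetisch, da das vollständige Schema hier gekürzt ist; `comments` und `post` entsprechen den Beziehungsfeldern aus dem Beispiel:

```graphql
# Abfrage 1: Ein Post mit allen (abgeleiteten) Kommentaren
{
  post(id: "1") {
    title
    comments {
      content
    }
  }
}

# Abfrage 2 (Rückwärtssuche): Von einem Kommentar zum zugehörigen Post
{
  comment(id: "1-0") {
    content
    post {
      title
    }
  }
}
```

Das abgeleitete Feld `comments` verhält sich in Abfragen genauso wie ein gespeichertes Array-Feld, wird aber erst zur Abfragezeit aus `Comment.post` aufgelöst.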
-For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +Eine ausführlichere Erklärung von Strategien zur Vermeidung großer Arrays finden Sie im Blog von Kevin Jones: [Best Practices bei der Subgraph-Entwicklung: Vermeiden großer Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierung und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6.
[Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/de/subgraphs/best-practices/grafting-hotfix.mdx index f0297328b52d..bfff7009381b 100644 --- a/website/src/pages/de/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/de/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,68 +1,68 @@ --- -title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +title: Best Practice 6 für Subgraphen - Verwendung von Grafting für die schnelle Hotfix-Bereitstellung +sidebarTitle: Grafting und Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting ist eine leistungsstarke Funktion bei der Entwicklung von Subgraphen, die es Ihnen ermöglicht, neue Subgraphen zu erstellen und bereitzustellen, während Sie die indizierten Daten aus bestehenden Subgraphen wiederverwenden. ### Überblick -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +Diese Funktion ermöglicht die schnelle Bereitstellung von Hotfixes für kritische Probleme, so dass nicht der gesamte Subgraph von Grund auf neu indiziert werden muss. Durch die Bewahrung historischer Daten minimiert Grafting Ausfallzeiten und gewährleistet die Kontinuität der Datendienste. -## Benefits of Grafting for Hotfixes +## Vorteile des Graftings für Hotfixes -1. **Rapid Deployment** +1. **Schnelle Bereitstellung** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
- - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Ausfallzeiten minimieren**: Wenn in einem Subgraphen ein kritischer Fehler auftritt und die Indizierung unterbrochen wird, können Sie mithilfe von Grafting sofort eine Lösung bereitstellen, ohne auf die erneute Indizierung zu warten. + - **Sofortige Wiederherstellung**: Der neue Subgraph setzt beim letzten indizierten Block fort und gewährleistet, dass die Datendienste nicht unterbrochen werden. -2. **Data Preservation** +2. **Datenaufbewahrung** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. - - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + - **Wiederverwendung historischer Daten**: Beim Grafting werden die vorhandenen Daten aus dem Basis-Subgraphen kopiert, so dass Sie keine wertvollen historischen Datensätze verlieren. + - **Konsistenz**: Bewahrt die Datenkontinuität, was für Anwendungen, die auf konsistente historische Daten angewiesen sind, von entscheidender Bedeutung ist. -3. **Efficiency** - - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. - - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. +3. **Effizienz** + - **Zeit und Ressourcen sparen**: Vermeidet den Rechenaufwand für die Neuindizierung großer Datensätze. + - **Fokus auf Fehlerbehebungen**: Ermöglicht es den Entwicklern, sich auf die Lösung von Problemen zu konzentrieren, anstatt die Datenwiederherstellung zu verwalten. -## Best Practices When Using Grafting for Hotfixes +## Best Practices bei der Verwendung von Grafting für Hotfixes -1. **Initial Deployment Without Grafting** +1.
**Erster Einsatz ohne Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Starten Sie sauber**: Setzen Sie Ihren ersten Subgraphen immer ohne Grafting ein, um sicherzustellen, dass er stabil ist und wie erwartet funktioniert. + - **Testen Sie gründlich**: Überprüfen Sie die Leistung des Subgraphen, um den Bedarf an zukünftigen Hotfixes zu minimieren. -2. **Implementing the Hotfix with Grafting** +2. **Implementierung des Hotfixes mit Grafting** - - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Identifizieren Sie das Problem**: Wenn ein kritischer Fehler auftritt, ermitteln Sie die Blocknummer des letzten erfolgreich indizierten Ereignisses. + - **Erstellen Sie einen neuen Subgraphen**: Entwickeln Sie einen neuen Subgraphen, der den Hotfix enthält. + - **Konfigurieren Sie Grafting**: Verwenden Sie Grafting, um Daten bis zur identifizierten Blocknummer aus dem ausgefallenen Subgraphen zu kopieren. + - **Stellen Sie schnell bereit**: Veröffentlichen Sie den per Grafting erstellten Subgraphen, um den Dienst so schnell wie möglich wiederherzustellen. -3. **Post-Hotfix Actions** +3. **Post-Hotfix-Aktionen** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance.
- > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Überwachen Sie die Leistung**: Stellen Sie sicher, dass der per Grafting erstellte Subgraph korrekt indiziert wird und der Hotfix das Problem behebt. + - **Veröffentlichen Sie ohne Grafting erneut**: Sobald der Subgraph stabil ist, können Sie eine neue Version des Subgraphen ohne Grafting für die langfristige Wartung bereitstellen. + > Hinweis: Es wird nicht empfohlen, sich auf unbegrenzte Zeit auf das Grafting zu verlassen, da dies künftige Aktualisierungen und Wartungsarbeiten erschweren kann. + - **Aktualisieren Sie Referenzen**: Leiten Sie alle Dienste oder Anwendungen um, damit sie den neuen Subgraphen ohne Grafting verwenden. -4. **Important Considerations** - - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. +4. **Wichtige Hinweise** + - **Sorgfältige Blockauswahl**: Wählen Sie die Graft-Blocknummer sorgfältig aus, um Datenverluste zu vermeiden. + - **Tipp**: Verwenden Sie die Blocknummer des letzten korrekt verarbeiteten Ereignisses. + - **Verwenden Sie die Bereitstellungs-ID**: Stellen Sie sicher, dass Sie auf die Bereitstellungs-ID des Basis-Subgraphen verweisen, nicht auf die ID des Subgraphen. + - **Anmerkung**: Die Bereitstellungs-ID ist der eindeutige Bezeichner für eine bestimmte Subgraph-Bereitstellung. + - **Funktionsdeklaration**: Vergessen Sie nicht, Grafting im Subgraphen-Manifest unter `features` zu deklarieren.
-## Example: Deploying a Hotfix with Grafting +## Beispiel: Bereitstellen eines Hotfixes mit Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Angenommen, Sie haben einen Subgraphen, der einen Smart Contract verfolgt, der aufgrund eines kritischen Fehlers nicht mehr indiziert wird. Hier erfahren Sie, wie Sie mithilfe von Grafting einen Hotfix bereitstellen können. -1. **Failed Subgraph Manifest (subgraph.yaml)** +1. **Fehlgeschlagenes Subgraph-Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -88,9 +88,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing file: ./src/old-lock.ts ``` -2. **New Grafted Subgraph Manifest (subgraph.yaml)** +2.
**Neues Subgraph-Manifest mit Grafting (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,71 +117,71 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph - block: 6000000 # Last successfully indexed block + base: QmBaseDeploymentID # Bereitstellungs-ID des fehlgeschlagenen Subgraphen + block: 6000000 # Letzter erfolgreich indizierter Block ``` -**Explanation:** +**Erläuterung:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. -- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. -- **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. - - **block**: Block number where grafting should begin. +- **Aktualisierung der Datenquelle**: Der neue Subgraph zeigt auf 0xNewContractAddress, bei dem es sich um eine korrigierte Version des Smart Contracts handeln könnte. +- **Startblock**: Wird auf einen Block nach dem letzten erfolgreich indizierten Block gesetzt, um eine erneute Bearbeitung des Fehlers zu vermeiden. +- **Grafting-Konfiguration**: + - **base**: Bereitstellungs-ID des fehlgeschlagenen Subgraphen. + - **block**: Nummer des Blocks, in dem das Grafting beginnen soll. -3. **Deployment Steps** +3. **Bereitstellungsschritte** - - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). - - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations.
- - **Deploy the Subgraph**: - - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - **Aktualisieren Sie den Code**: Implementieren Sie den Hotfix in Ihre Mapping-Skripte (z. B. handleWithdrawal). + - **Passen Sie das Manifest an**: Wie oben gezeigt, aktualisieren Sie die Datei `subgraph.yaml` mit den Grafting-Konfigurationen. + - **Stellen Sie den Subgraphen bereit**: + - Authentifizieren Sie sich mit der Graph CLI. + - Stellen Sie den neuen Subgraphen mit `graph deploy` bereit. -4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. - - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. +4. **Nach der Bereitstellung** + - **Überprüfen Sie die Indizierung**: Prüfen Sie, ob der Subgraph vom Graft-Punkt aus korrekt indiziert ist. + - **Überwachen Sie Daten**: Stellen Sie sicher, dass neue Daten erfasst werden und der Hotfix wirksam ist. + - **Planen Sie die Wiederveröffentlichung**: Planen Sie die Bereitstellung einer Version ohne Grafting für langfristige Stabilität. -## Warnings and Cautions +## Warnungen und Vorsichtshinweise -While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. +Obwohl Grafting ein leistungsfähiges Tool für die schnelle Bereitstellung von Hotfixes ist, gibt es bestimmte Szenarien, in denen es vermieden werden sollte, um die Datenintegrität zu wahren und eine optimale Leistung zu gewährleisten. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema.
Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. -Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Inkompatible Schemaänderungen**: Wenn Ihr Hotfix eine Änderung des Typs vorhandener Felder oder das Entfernen von Feldern aus Ihrem Schema erfordert, ist das Grafting nicht geeignet. Das Grafting erwartet, dass das Schema des neuen Subgraphen mit dem Schema des Basis-Subgraphen kompatibel ist. Inkompatible Änderungen können zu Dateninkonsistenzen und Fehlern führen, da die vorhandenen Daten nicht mit dem neuen Schema übereinstimmen. +- **Wesentliche Überarbeitungen der Mapping-Logik**: Wenn der Hotfix wesentliche Änderungen an der Mapping-Logik vornimmt, z. B. eine Änderung der Ereignisverarbeitung oder der Handler-Funktionen, funktioniert das Grafting möglicherweise nicht korrekt. Die neue Logik ist möglicherweise nicht mit den Daten kompatibel, die unter der alten Logik verarbeitet wurden, was zu falschen Daten oder einer fehlgeschlagenen Indizierung führt. +- **Bereitstellungen im Graph-Netzwerk**: Grafting wird nicht für Subgraphen empfohlen, die für das dezentrale Netzwerk (Mainnet) von The Graph bestimmt sind.
Es kann die Indizierung verkomplizieren und wird möglicherweise nicht von allen Indexierern vollständig unterstützt, was zu unerwartetem Verhalten oder erhöhten Kosten führen kann. Für Mainnet-Bereitstellungen ist es sicherer, den Subgraphen von Grund auf neu zu indizieren, um volle Kompatibilität und Zuverlässigkeit zu gewährleisten. -### Risk Management +### Risikomanagement -- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. -- **Testing**: Always test grafting in a development environment before deploying to production. +- **Datenintegrität**: Falsche Blocknummern können zu Datenverlust oder -duplizierung führen. +- **Testen**: Testen Sie das Grafting immer in einer Entwicklungsumgebung, bevor Sie es in der Produktion einsetzen. -## Conclusion +## Schlussfolgerung -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting ist eine effektive Strategie für die Bereitstellung von Hotfixes bei der Entwicklung von Subgraphen, die Ihnen Folgendes ermöglicht: -- **Quickly Recover** from critical errors without re-indexing. -- **Preserve Historical Data**, maintaining continuity for applications and users. -- **Ensure Service Availability** by minimizing downtime during critical fixes. +- **Schnelle Wiederherstellung** bei kritischen Fehlern ohne Neuindizierung. +- **Historische Daten aufbewahren**, um die Kontinuität für Anwendungen und Benutzer zu erhalten. +- **Sicherung der Serviceverfügbarkeit** durch Minimierung der Ausfallzeiten bei kritischen Reparaturen. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks.
Planen Sie nach der Stabilisierung Ihres Subgraphen mit dem Hotfix die Bereitstellung einer Version ohne Grafting, um die langfristige Wartbarkeit zu gewährleisten. ## Zusätzliche Ressourcen -- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting -- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. +- **[Grafting-Dokumentation](/subgraphs/cookbook/grafting/)**: Ersetzen eines Vertrags und Beibehaltung seiner Historie mit Grafting +- **[Verstehen der Bereitstellungs-IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Lernen Sie den Unterschied zwischen Bereitstellungs-ID und Subgraph-ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +Durch die Integration von Grafting in Ihren Subgraphen-Entwicklungs-Workflow können Sie Ihre Fähigkeit verbessern, schnell auf Probleme zu reagieren, und sicherstellen, dass Ihre Datendienste robust und zuverlässig bleiben. -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierungs- und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3.
[Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/de/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..04ca2fd1e0db 100644 --- a/website/src/pages/de/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/de/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,15 +1,15 @@ --- -title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +title: Best Practice 3 für Subgraphen - Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs +sidebarTitle: Unveränderliche Entitäten und Bytes als IDs --- ## TLDR -Using Immutable Entities and Bytes for IDs in our `schema.graphql` file [significantly improves ](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/) indexing speed and query performance. 
+Die Verwendung von unveränderlichen Entitäten und Bytes für IDs in unserer Datei `schema.graphql` verbessert die Indizierungsgeschwindigkeit und die Abfrageleistung [erheblich](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). -## Immutable Entities +## Unveränderliche Entitäten -To make an entity immutable, we simply add `(immutable: true)` to an entity. +Um eine Entität unveränderlich zu machen, fügen wir einfach `(immutable: true)` zu einer Entität hinzu. ```graphql type Transfer @entity(immutable: true) { @@ -20,21 +20,21 @@ type Transfer @entity(immutable: true) { } ``` -By making the `Transfer` entity immutable, graph-node is able to process the entity more efficiently, improving indexing speeds and query responsiveness.
+Veränderliche Entitäten haben einen 'block range', der ihre Gültigkeit angibt. Bei der Aktualisierung dieser Entitäten muss der Graph-Knoten den Blockbereich früherer Versionen anpassen, was die Datenbankbelastung erhöht. Außerdem müssen Abfragen gefiltert werden, um nur aktive Entitäten zu finden. Unveränderliche Entitäten sind schneller, weil sie alle live sind und sich nicht ändern, so dass beim Schreiben keine Überprüfungen oder Aktualisierungen erforderlich sind und bei Abfragen keine Filterung erforderlich ist. -### When not to use Immutable Entities +### Wann man keine unveränderlichen Entitäten verwenden sollte -If you have a field like `status` that needs to be modified over time, then you should not make the entity immutable. Otherwise, you should use immutable entities whenever possible. +Wenn Sie ein Feld wie `status` haben, das im Laufe der Zeit geändert werden muss, dann sollten Sie die Entität nicht unveränderlich machen. Ansonsten sollten Sie, wann immer möglich, unveränderliche Entitäten verwenden. -## Bytes as IDs +## Bytes als IDs -Every entity requires an ID. In the previous example, we can see that the ID is already of the Bytes type. +Jede Entität benötigt eine ID. Im vorherigen Beispiel sehen wir, dass die ID bereits vom Typ Bytes ist. ```graphql type Transfer @entity(immutable: true) { @@ -45,19 +45,19 @@ type Transfer @entity(immutable: true) { } ``` -While other types for IDs are possible, such as String and Int8, it is recommended to use the Bytes type for all IDs due to character strings taking twice as much space as Byte strings to store binary data, and comparisons of UTF-8 character strings must take the locale into account which is much more expensive than the bytewise comparison used to compare Byte strings. +Es sind zwar auch andere Typen für IDs möglich, z. B. 
String und Int8, es wird jedoch empfohlen, den Typ Bytes für alle IDs zu verwenden, da Zeichenketten doppelt so viel Platz wie Byte-Zeichenketten benötigen, um binäre Daten zu speichern, und Vergleiche von UTF-8-Zeichenketten das Gebietsschema berücksichtigen müssen, was sehr viel teurer ist als der byteweise Vergleich, der zum Vergleich von Byte-Zeichenketten verwendet wird. -### Reasons to Not Use Bytes as IDs +### Gründe, keine Bytes als IDs zu verwenden -1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. -3. Indexing and querying performance improvements are not desired. +1. Wenn Entitäts-IDs für den Menschen lesbar sein müssen, wie z. B. automatisch inkrementierte numerische IDs oder lesbare Zeichenketten, sollten Bytes für IDs nicht verwendet werden. +2. Wenn die Daten eines Subgraphen in ein anderes Datenmodell integriert werden, das keine Bytes als IDs verwendet, sollten Bytes als IDs nicht verwendet werden. +3. Verbesserungen der Indizierungs- und Abfrageleistung sind nicht erwünscht. -### Concatenating With Bytes as IDs +### Verkettung mit Bytes als IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +In vielen Subgraphen ist es gängige Praxis, zwei Eigenschaften eines Ereignisses durch String-Verkettung zu einer einzigen ID zu kombinieren, z. B. durch `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Da dies jedoch eine Zeichenkette zurückgibt, beeinträchtigt dies die Indizierung von Subgraphen und die Abfrageleistung erheblich.
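Zur Veranschaulichung, warum eine Bytes-ID kompakter ist als die String-Verkettung, hier eine konzeptuelle Skizze in reinem TypeScript (bewusst ohne graph-ts; die Byte-Reihenfolge ist hier nur eine Annahme): Aus Transaktions-Hash und Log-Index entsteht eine kurze Byte-Folge statt einer langen Hex-Zeichenkette.

```typescript
// Konzeptuelle Skizze (kein graph-ts): eine Bytes-ID wird gebildet, indem an
// den Transaktions-Hash eine 32-Bit-Zahl (der logIndex) angehängt wird.
// Annahme hier: Big-Endian-Byte-Reihenfolge.
function appendI32(bytes: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  out[bytes.length] = (value >>> 24) & 0xff;
  out[bytes.length + 1] = (value >>> 16) & 0xff;
  out[bytes.length + 2] = (value >>> 8) & 0xff;
  out[bytes.length + 3] = value & 0xff;
  return out;
}

// Datenbeispiel: gekürzter Transaktions-Hash plus logIndex = 5
const txHash = new Uint8Array([0xab, 0xcd]);
const id = appendI32(txHash, 5);
console.log(id.length); // kompakte Byte-Folge statt einer langen Hex-String-ID
```

Eine String-ID aus `toHex()` und `toString()` belegt für denselben Inhalt deutlich mehr Speicher und erfordert die oben beschriebenen teureren Zeichenketten-Vergleiche.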
-Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. +Stattdessen sollten wir die Methode `concatI32()` zur Verkettung von Ereigniseigenschaften verwenden. Diese Strategie führt zu einer `Bytes`-ID, die viel leistungsfähiger ist. ```typescript export function handleTransfer(event: TransferEvent): void { @@ -74,11 +74,11 @@ export function handleTransfer(event: TransferEvent): void { } ``` -### Sorting With Bytes as IDs +### Sortieren mit Bytes als IDs -Sorting using Bytes as IDs is not optimal as seen in this example query and response. +Die Sortierung nach Bytes als IDs ist nicht optimal, wie in dieser Beispielabfrage und -antwort zu sehen ist. -Query: +Abfrage: ```graphql { @@ -91,7 +91,7 @@ Query: } ``` -Query response: +Antwort auf die Abfrage: ```json { @@ -120,9 +120,9 @@ Query response: } ``` -The IDs are returned as hex. +Die IDs werden als Hexadezimalzahlen zurückgegeben. -To improve sorting, we should create another field on the entity that is a BigInt. +Um die Sortierung zu verbessern, sollten wir ein weiteres Feld auf der Entität erstellen, das ein BigInt ist. ```graphql type Transfer @entity { @@ -134,9 +134,9 @@ type Transfer @entity { } ``` -This will allow for sorting to be optimized sequentially. +Dadurch kann die Sortierung nacheinander optimiert werden. -Query: +Abfrage: ```graphql { @@ -147,7 +147,7 @@ Query: } ``` -Query Response: +Antwort auf die Abfrage:
Insbesondere haben Tests eine Steigerung der Abfrageleistung um bis zu 28 % und eine Beschleunigung der Indizierungsgeschwindigkeit um bis zu 48 % ergeben. -Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). +Lesen Sie mehr über die Verwendung von unveränderlichen Entitäten und Bytes als IDs in diesem Blogbeitrag von David Lutterkort, Software Engineer bei Edge & Node: [Zwei einfache Leistungsverbesserungen für Subgraphen](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierungs- und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6.
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/pruning.mdx b/website/src/pages/de/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..6688d9bdfabd 100644 --- a/website/src/pages/de/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/de/subgraphs/best-practices/pruning.mdx @@ -1,26 +1,26 @@ --- -title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +title: Best Practice 1 für Subgraphen - Verbessern Sie die Abfragegeschwindigkeit mit Subgraph Pruning +sidebarTitle: Pruning mit indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) entfernt archivierte Entitäten aus der Datenbank des Subgraphen bis zu einem bestimmten Block, und das Entfernen unbenutzter Entitäten aus der Datenbank eines Subgraphen verbessert die Abfrageleistung eines Subgraphen, oft dramatisch. Die Verwendung von `indexerHints` ist ein einfacher Weg, einen Subgraphen zu beschneiden. -## How to Prune a Subgraph With `indexerHints` +## Wie man einen Subgraphen mit `indexerHints` beschneidet -Add a section called `indexerHints` in the manifest. +Fügen Sie dem Manifest einen Abschnitt mit dem Namen `indexerHints` hinzu. -`indexerHints` has three `prune` options: +`indexerHints` hat drei Optionen für `prune`: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance.
This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. -- `prune: `: Sets a custom limit on the number of historical blocks to retain. -- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. +- `prune: auto`: Behält die minimal notwendige Historie, wie vom Indexierer festgelegt, bei und optimiert so die Abfrageleistung. Dies ist die allgemein empfohlene Einstellung und die Standardeinstellung für alle mit `graph-cli` >= 0.66.0 erstellten Subgraphen. +- `prune: `: Legt eine benutzerdefinierte Grenze für die Anzahl der zu speichernden historischen Blöcke fest. +- `prune: never`: Kein Pruning der historischen Daten; behält die gesamte Historie bei und ist der Standard, wenn es keinen `indexerHints`-Abschnitt gibt. Die Option `prune: never` sollte gewählt werden, wenn [Zeitreiseabfragen](/subgraphs/querying/graphql-api/#time-travel-queries) gewünscht sind. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +Wir können `indexerHints` zu unseren Subgraphen hinzufügen, indem wir unsere `subgraph.yaml` aktualisieren: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -31,26 +31,26 @@ dataSources: network: mainnet ``` -## Important Considerations +## Wichtige Überlegungen -- If [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries.
Instead, prune using `indexerHints: prune: ` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data. +- Wenn neben dem Pruning auch [Zeitreiseabfragen](/subgraphs/querying/graphql-api/#time-travel-queries) gewünscht werden, muss das Pruning genau durchgeführt werden, um die Funktionalität der Zeitreiseabfrage zu erhalten. Aus diesem Grund ist es im Allgemeinen nicht empfehlenswert, `indexerHints: prune: auto` mit Zeitreiseabfragen zu verwenden. Verwenden Sie stattdessen `indexerHints: prune: <>`, um genau auf eine Blockhöhe zu beschneiden, die die für Zeitreiseabfragen erforderlichen historischen Daten beibehält, oder verwenden Sie `prune: never`, um alle Daten zu erhalten. -- It is not possible to [graft](/subgraphs/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: ` that will accurately retain a set number of blocks (e.g., enough for six months). +- Es ist nicht möglich, [Grafting](/subgraphs/cookbook/grafting/) in einer Blockhöhe vorzunehmen, die beschnitten wurde. Wenn das Grafting routinemäßig durchgeführt wird und Pruning gewünscht ist, wird empfohlen, `indexerHints: prune: <>` zu verwenden, das eine bestimmte Anzahl von Blöcken (z. B. genug für sechs Monate) genau beibehält. -## Conclusion +## Schlussfolgerung -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Das Pruning unter Verwendung von `indexerHints` ist eine bewährte Methode für die Entwicklung von Subgraphen, die eine erhebliche Verbesserung der Abfrageleistung ermöglicht. -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. 
[Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierungs- und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/timeseries.mdx b/website/src/pages/de/subgraphs/best-practices/timeseries.mdx index 060540f991bf..9a49023d6f5c 100644 --- a/website/src/pages/de/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/de/subgraphs/best-practices/timeseries.mdx @@ -1,84 +1,88 @@ --- -title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +title: Best Practice 5 für Subgraphen - Vereinfachen und Optimieren mit Zeitreihen und Aggregationen +sidebarTitle: Zeitreihen und Aggregationen --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Die Nutzung der neuen Zeitreihen- und Aggregationsfunktion in Subgraphen kann sowohl die Indizierungsgeschwindigkeit als auch die Abfrageleistung erheblich verbessern. ## Überblick -Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. +Zeitreihen und Aggregationen reduzieren den Datenverarbeitungsaufwand und beschleunigen Abfragen, indem sie Aggregationsberechnungen in die Datenbank verlagern und den Mapping-Code vereinfachen. Dieser Ansatz ist besonders effektiv bei der Verarbeitung großer Mengen zeitbasierter Daten. -## Benefits of Timeseries and Aggregations +## Vorteile von Zeitreihen und Aggregationen -1. Improved Indexing Time +1. Verbesserte Indizierungszeit -- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. 
-- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. +- Weniger zu ladende Daten: Mappings verarbeiten weniger Daten, da die Rohdatenpunkte als unveränderliche Zeitreihenentitäten gespeichert werden. +- Datenbank-verwaltete Aggregationen: Aggregationen werden automatisch von der Datenbank berechnet, wodurch sich die Arbeitsbelastung der Mappings verringert. -2. Simplified Mapping Code +2. Vereinfachter Mapping-Code -- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. -- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. +- Keine manuellen Berechnungen: Entwickler müssen keine komplexe Aggregationslogik mehr in Mappings schreiben. +- Geringere Komplexität: Vereinfacht die Codewartung und minimiert das Fehlerpotenzial. -3. Dramatically Faster Queries +3. Deutlich schnellere Abfragen -- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. -- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. +- Unveränderliche Daten: Alle Zeitreihendaten sind unveränderbar, was eine effiziente Speicherung und Abfrage ermöglicht. +- Effiziente Datentrennung: Die Aggregate werden getrennt von den Rohdaten der Zeitreihen gespeichert, so dass bei Abfragen deutlich weniger Daten verarbeitet werden müssen - oft um mehrere Größenordnungen weniger. -### Important Considerations +### Wichtige Überlegungen -- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. -- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. -- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster.
+- Unveränderliche Daten: Einmal geschriebene Zeitreihendaten können nicht mehr verändert werden, was die Datenintegrität gewährleistet und die Indizierung vereinfacht. +- Automatische ID- und Zeitstempel-Verwaltung: ID- und Zeitstempel-Felder werden automatisch von Graph-Node verwaltet, wodurch mögliche Fehler vermieden werden. +- Effiziente Datenspeicherung: Durch die Trennung von Rohdaten und Aggregaten wird die Speicherung optimiert, und Abfragen werden schneller ausgeführt. -## How to Implement Timeseries and Aggregations +## Implementierung von Zeitreihen und Aggregationen -### Defining Timeseries Entities +### Voraussetzungen -A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: +Sie benötigen `specVersion` 1.1.0 für diese Funktion. -- Immutable: Timeseries entities are always immutable. -- Mandatory Fields: - - `id`: Must be of type `Int8!` and is auto-incremented. - - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. +### Definition von Zeitreihenentitäten -Example: +Eine Zeitreihenentität stellt Rohdatenpunkte dar, die im Laufe der Zeit gesammelt werden. Sie wird mit der Annotation `@entity(timeseries: true)` definiert. Zentrale Anforderungen: + +- Unveränderlich: Zeitreihenentitäten sind immer unveränderlich. +- Pflichtfelder: + - `id`: Muss vom Typ `Int8!` sein und wird automatisch inkrementiert. + - `timestamp`: Muss vom Typ `Timestamp!` sein und wird automatisch auf den Blockzeitstempel gesetzt. + +Beispiel: ```graphql type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` -### Defining Aggregation Entities +### Definition von Aggregationsentitäten -An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation.
Key components: -- Annotation Arguments: - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). +Eine Aggregationsentität berechnet aggregierte Werte aus einer Zeitreihenquelle. Sie wird mit der Annotation `@aggregation` definiert. Schlüsselkomponenten: +- Annotationsargumente: + - `intervals`: Gibt Zeitintervalle an (z. B. `["hour", "day"]`). -Example: +Beispiel: ```graphql type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In diesem Beispiel aggregiert `Stats` das Feld `amount` aus `Data` über stündliche und tägliche Intervalle und berechnet die Summe. -### Querying Aggregated Data +### Abfrage von aggregierten Daten -Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. +Aggregationen werden über Abfragefelder bereitgestellt, die das Filtern und Abrufen auf der Grundlage von Dimensionen und Zeitintervallen ermöglichen. -Example: +Beispiel: ```graphql { @@ -98,13 +102,13 @@ Example: } ``` -### Using Dimensions in Aggregations +### Verwendung von Dimensionen in Aggregationen -Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. +Dimensionen sind nicht aggregierte Felder, die zur Gruppierung von Datenpunkten verwendet werden. Sie ermöglichen Aggregationen auf der Grundlage bestimmter Kriterien, wie z. B. eines Tokens in einer Finanzanwendung.
-Example: +Beispiel: -### Timeseries Entity +### Zeitreihenentität ```graphql type TokenData @entity(timeseries: true) { @@ -116,7 +120,7 @@ } ``` -### Aggregation Entity with Dimension +### Aggregationsentität mit Dimension ```graphql type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { @@ -129,15 +133,15 @@ type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { } ``` -- Dimension Field: token groups the data, so aggregates are computed per token. -- Aggregates: - - totalVolume: Sum of amount. - - priceUSD: Last recorded priceUSD. - - count: Cumulative count of records. +- Dimensionsfeld: `token` gruppiert die Daten, so dass die Aggregate pro Token berechnet werden. +- Aggregate: + - totalVolume: Summe von `amount`. + - priceUSD: Letzter aufgezeichneter `priceUSD`-Wert. + - count: Kumulative Anzahl der Datensätze. -### Aggregation Functions and Expressions +### Aggregationsfunktionen und Ausdrücke -Supported aggregation functions: +Unterstützte Aggregationsfunktionen: - sum - count @@ -146,50 +150,50 @@ Supported aggregation functions: - first - last -### The arg in @aggregate can be +### Das `arg` in @aggregate kann sein -- A field name from the timeseries entity. -- An expression using fields and constants. +- Ein Feldname aus der Zeitreihenentität. +- Ein Ausdruck mit Feldern und Konstanten.
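Eingebettet in eine Aggregationsentität könnte ein solcher Ausdruck etwa so aussehen (hypothetischer Entitätsname; die Felder `priceUSD` und `amount` sind an das obige `TokenData`-Beispiel angelehnt):

```graphql
type PoolStats @aggregation(intervals: ["hour"], source: "TokenData") {
  id: Int8!
  timestamp: Timestamp!
  # Aggregat über einen Ausdruck aus zwei Feldern der Zeitreihenquelle
  totalValueUSD: BigDecimal! @aggregate(fn: "sum", arg: "priceUSD * amount")
}
```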
-### Examples of Aggregation Expressions +### Beispiele für Aggregationsausdrücke -- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \_ amount") -- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") -- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") +- Summe des Tokenwerts: @aggregate(fn: "sum", arg: "priceUSD * amount") +- Größter positiver Betrag: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Bedingte Summe: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") -Supported operators and functions include basic arithmetic (+, -, \_, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. +Zu den unterstützten Operatoren und Funktionen gehören grundlegende arithmetische Operatoren (+, -, *, /), Vergleichsoperatoren, logische Operatoren (and, or, not) und SQL-Funktionen wie greatest, least, coalesce usw. -### Query Parameters +### Abfrage-Parameter -- interval: Specifies the time interval (e.g., "hour"). -- where: Filters based on dimensions and timestamp ranges. -- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). +- interval: Gibt das Zeitintervall an (z. B. "hour"). +- where: Filter auf der Grundlage von Dimensionen und Zeitstempelbereichen. +- timestamp_gte / timestamp_lt: Filter für Start- und Endzeiten (Mikrosekunden seit Epoche). -### Notes +### Anmerkungen -- Sorting: Results are automatically sorted by timestamp and id in descending order. -- Current Data: An optional current argument can include the current, partially filled interval. +- Sortieren: Die Ergebnisse werden automatisch nach Zeitstempel und ID in absteigender Reihenfolge sortiert. +- Aktuelle Daten: Ein optionales `current`-Argument kann das aktuelle, teilweise gefüllte Intervall einschließen.
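Eine Beispielabfrage, die diese Parameter kombiniert, könnte so aussehen (hypothetische Werte; Abfragefeld- und Feldnamen sind an das obige `TokenStats`-Beispiel angelehnt, die Nulladresse dient nur als Platzhalter):

```graphql
{
  tokenStats(
    interval: "day"
    where: {
      token: "0x0000000000000000000000000000000000000000"
      timestamp_gte: 1704067200000000
    }
  ) {
    id
    timestamp
    token
    totalVolume
    priceUSD
  }
}
```

Der Zeitstempel ist dabei, wie oben beschrieben, in Mikrosekunden seit der Epoche angegeben (hier der 1. Januar 2024, 00:00 UTC).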
-### Conclusion +### Schlussfolgerung -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Die Implementierung von Zeitreihen und Aggregationen in Subgraphen ist ein bewährtes Verfahren für Projekte, die mit zeitbasierten Daten arbeiten. Dieser Ansatz: -- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. -- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. -- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. +- Verbessert die Leistung: Beschleunigt die Indizierung und Abfrage durch Reduzierung des Datenverarbeitungs-Overheads. +- Vereinfacht die Entwicklung: Manuelle Aggregationslogik in Mappings ist nicht mehr erforderlich. +- Skaliert effizient: Verarbeitet große Datenmengen, ohne Kompromisse bei Geschwindigkeit und Reaktionsfähigkeit einzugehen. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +Durch die Übernahme dieses Musters können Entwickler effizientere und besser skalierbare Subgraphen erstellen und den Endbenutzern einen schnelleren und zuverlässigeren Datenzugriff bieten. Um mehr über die Implementierung von Zeitreihen und Aggregationen zu erfahren, lesen Sie die [Readme-Datei zu Zeitreihen und Aggregationen](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) und ziehen Sie in Erwägung, mit dieser Funktion in Ihren Subgraphen zu experimentieren. -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. 
[Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierung und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/billing.mdx b/website/src/pages/de/subgraphs/billing.mdx index 7014ebf64d61..2fed1d944f78 100644 --- a/website/src/pages/de/subgraphs/billing.mdx +++ b/website/src/pages/de/subgraphs/billing.mdx @@ -1,22 +1,24 @@ --- -title: Billing +title: Abrechnung --- -## Querying Plans +## Pläne für Abfragen Es gibt zwei Pläne für die Abfrage von Subgraphen in The Graph Network. -- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. 
This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. +- **Kostenloser Plan (Free Plan)**: Der Free Plan beinhaltet 100.000 kostenlose monatliche Abfragen mit vollem Zugriff auf die Subgraph Studio Testumgebung. Dieser Plan ist für Hobbyisten, Hackathon-Teilnehmer und diejenigen mit Nebenprojekten gedacht, die The Graph ausprobieren möchten, bevor sie ihre Dapp skalieren. -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **Wachstumsplan (Growth Plan)**: Der Growth Plan beinhaltet alles, was im Free Plan enthalten ist, wobei alle Abfragen nach 100.000 monatlichen Abfragen eine Zahlung mit GRT oder Kreditkarte erfordern. Der Growth Plan ist flexibel genug, um Teams abzudecken, die Dapps für eine Vielzahl von Anwendungsfällen etabliert haben. + +Erfahren Sie mehr über die Preisgestaltung [hier](https://thegraph.com/studio-pricing/). ## Abfrage Zahlungen mit Kreditkarte - Um die Abrechnung mit Kredit-/Debitkarten einzurichten, müssen die Benutzer Subgraph Studio (https://thegraph.com/studio/) aufrufen - 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). - 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". + 1. Rufen Sie die Seite [Subgraph Studio Billing](https://thegraph.com/studio/subgraphs/billing/) auf. + 2. Klicken Sie oben rechts auf der Seite auf die Schaltfläche „Wallet verbinden“. Sie werden zur Wallet-Auswahlseite weitergeleitet. Wählen Sie Ihr Wallet aus und klicken Sie auf „Verbinden“. 3. 
Wählen Sie „ Upgrade Plan“, wenn Sie vom Free Plan upgraden oder wählen Sie „Manage Plan“, wenn Sie GRT bereits in der Vergangenheit zu Ihrem Abrechnungssaldo hinzugefügt haben. Als Nächstes können Sie die Anzahl der Abfragen schätzen, um einen Kostenvoranschlag zu erhalten, dieser Schritt ist jedoch nicht erforderlich. 4. Um eine Zahlung per Kreditkarte zu wählen, wählen Sie „Kreditkarte“ als Zahlungsmethode und geben Sie Ihre Kreditkartendaten ein. Diejenigen, die Stripe bereits verwendet haben, können die Funktion „Link“ verwenden, um ihre Daten automatisch auszufüllen. - Die Rechnungen werden am Ende eines jeden Monats erstellt. Für alle Abfragen, die über das kostenlose Kontingent hinausgehen, muss eine aktive Kreditkarte hinterlegt sein. @@ -25,9 +27,9 @@ Es gibt zwei Pläne für die Abfrage von Subgraphen in The Graph Network. Subgraph-Nutzer können The Graph Token (oder GRT) verwenden, um für Abfragen im The Graph Network zu bezahlen. Mit GRT werden Rechnungen am Ende eines jeden Monats bearbeitet und erfordern ein ausreichendes Guthaben an GRT, um Abfragen über die Free-Plan-Quote von 100.000 monatlichen Abfragen hinaus durchzuführen. Sie müssen die von Ihren API-Schlüsseln generierten Gebühren bezahlen. Mit dem Abrechnungsvertrag können Sie: -- Add and withdraw GRT from your account balance. -- Keep track of your balances based on how much GRT you have added to your account balance, how much you have removed, and your invoices. -- Automatically pay invoices based on query fees generated, as long as there is enough GRT in your account balance. +- GRT zu Ihrem Rechnungsguthaben hinzufügen oder abziehen. +- Ihre Salden und Ihre Rechnungen im Auge behalten, basierend darauf, wie viel GRT Sie Ihrem Abrechnungsguthaben hinzugefügt und wie viel Sie entfernt haben. +- Rechnungen automatisch auf der Grundlage der generierten Abfragegebühren bezahlen, solange Ihr Rechnungssaldo über genügend GRT verfügt. 
### GRT auf Arbitrum oder Ethereum @@ -45,17 +47,17 @@ Um für Abfragen zu bezahlen, brauchen Sie GRT auf Arbitrum. Hier sind ein paar - Alternativ können Sie GRT auch direkt auf Arbitrum über einen dezentralen Handelsplatz erwerben. -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt). +> In diesem Abschnitt wird davon ausgegangen, dass Sie bereits GRT in Ihrem Geldbeutel haben und auf Arbitrum sind. Wenn Sie keine GRT haben, können Sie lernen, wie man GRT [hier](#getting-grt) bekommt. Sobald Sie GRT überbrücken, können Sie es zu Ihrem Rechnungssaldo hinzufügen. ### Hinzufügen von GRT mit einer Wallet -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". +1. Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/subgraphs/billing/) auf. +2. Klicken Sie oben rechts auf der Seite auf die Schaltfläche „Wallet verbinden“. Sie werden zur Wallet-Auswahlseite weitergeleitet. Wählen Sie Ihr Wallet aus und klicken Sie auf „Verbinden“. 3. Wählen Sie die Schaltfläche „ Manage “ in der oberen rechten Ecke. Erstmalige Nutzer sehen die Option „Upgrade auf den Wachstumsplan“, während wiederkehrende Nutzer auf „Von der Wallet einzahlen“ klicken. 4. Verwenden Sie den Slider, um die Anzahl der Abfragen zu schätzen, die Sie monatlich erwarten. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Vorschläge für die Anzahl der Abfragen, die Sie verwenden können, finden Sie auf unserer Seite **Häufig gestellte Fragen**. 5. Wählen Sie „Kryptowährung“. GRT ist derzeit die einzige Kryptowährung, die im The Graph Network akzeptiert wird. 6. 
Wählen Sie die Anzahl der Monate, die Sie im Voraus bezahlen möchten. - Die Zahlung im Voraus verpflichtet Sie nicht zu einer zukünftigen Nutzung. Ihnen wird nur das berechnet, was Sie verbrauchen, und Sie können Ihr Guthaben jederzeit abheben. @@ -68,20 +70,20 @@ Sobald Sie GRT überbrücken, können Sie es zu Ihrem Rechnungssaldo hinzufügen ### GRT über eine Wallet abheben -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/subgraphs/billing/) auf. 2. Klicken Sie auf die Schaltfläche „Connect Wallet“ in der oberen rechten Ecke der Seite. Wählen Sie Ihre Wallet aus und klicken Sie auf „Verbinden“. 3. Klicken Sie auf die Schaltfläche „Verwalten“ in der oberen rechten Ecke der Seite. Wählen Sie „GRT abheben“. Ein Seitenfenster wird angezeigt. 4. Geben Sie den Betrag der GRT ein, den Sie abheben möchten. 5. Klicken Sie auf „GRT abheben“, um die GRT von Ihrem Kontostand abzuheben. Unterschreiben Sie die zugehörige Transaktion in Ihrer Wallet. Dies kostet Gas. Die GRT werden an Ihre Arbitrum Wallet gesendet. 6. Sobald die Transaktion bestätigt ist, werden die GRT von Ihrem Kontostand in Ihrem Arbitrum Wallet abgezogen. -### Adding GRT using a multisig wallet +### Hinzufügen von GRT mit einer Multisig-Wallet -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. +1. Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/subgraphs/billing/) auf. +2. Klicken Sie auf die Schaltfläche „Connect Wallet“ in der oberen rechten Ecke der Seite. 
Wählen Sie Ihre Wallet aus und klicken Sie auf „Verbinden“. Wenn Sie [Gnosis-Safe](https://gnosis-safe.io/) verwenden, können Sie sowohl Ihre Multisig- als auch Ihre signierende Wallet verbinden. Signieren Sie dann die zugehörige Nachricht. Dies kostet kein Gas. 3. Wählen Sie die Schaltfläche „ Manage “ in der oberen rechten Ecke. Erstmalige Nutzer sehen die Option „Upgrade auf den Wachstumsplan“, während wiederkehrende Nutzer auf „Von der Wallet einzahlen“ klicken. 4. Verwenden Sie den Slider, um die Anzahl der Abfragen zu schätzen, die Sie monatlich erwarten. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Vorschläge für die Anzahl der Abfragen, die Sie voraussichtlich verwenden werden, finden Sie auf unserer Seite **Häufig gestellte Fragen**. 5. Wählen Sie „Kryptowährung“. GRT ist derzeit die einzige Kryptowährung, die im The Graph Network akzeptiert wird. 6. Wählen Sie die Anzahl der Monate, die Sie im Voraus bezahlen möchten. - Die Zahlung im Voraus verpflichtet Sie nicht zu einer zukünftigen Nutzung. Ihnen wird nur das berechnet, was Sie verbrauchen, und Sie können Ihr Guthaben jederzeit abheben. @@ -99,7 +101,7 @@ In diesem Abschnitt erfahren Sie, wie Sie GRT dazu bringen können, die Abfrageg Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Gehen Sie zu [Coinbase](https://www.coinbase.com/) und erstellen Sie ein Konto. 2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. 3. Sobald Sie Ihre Identität überprüft haben, können Sie GRT kaufen. Dazu klicken Sie auf die Schaltfläche „Kaufen/Verkaufen“ oben rechts auf der Seite. 4. Wählen Sie die Währung, die Sie kaufen möchten. Wählen Sie GRT. 
@@ -107,19 +109,19 @@ Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Coinbase. 6. Wählen Sie die Menge an GRT, die Sie kaufen möchten. 7. Überprüfen Sie Ihren Einkauf. Überprüfen Sie Ihren Einkauf und klicken Sie auf „GRT kaufen“. 8. Bestätigen Sie Ihren Kauf. Bestätigen Sie Ihren Kauf und Sie haben GRT erfolgreich gekauft. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Sie können das GRT von Ihrem Konto auf Ihre Wallet wie z.B. [MetaMask](https://metamask.io/) übertragen. - Um GRT auf Ihre Wallet zu übertragen, klicken Sie auf die Schaltfläche „Konten“ oben rechts auf der Seite. - Klicken Sie auf die Schaltfläche „Senden“ neben dem GRT Konto. - Geben Sie den Betrag an GRT ein, den Sie senden möchten, und die Wallet-Adresse, an die Sie ihn senden möchten. - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -Bitte beachten Sie, dass Coinbase Sie bei größeren Kaufbeträgen möglicherweise 7-10 Tage warten lässt, bevor Sie den vollen Betrag in eine Krypto-Wallet überweisen. -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Sie können mehr über den Erwerb von GRT auf Coinbase [hier](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency) erfahren. ### Binance Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Gehen Sie zu [Binance](https://www.binance.com/en) und erstellen Sie ein Konto. 2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. 3. 
Sobald Sie Ihre Identität überprüft haben, können Sie GRT kaufen. Dazu klicken Sie auf die Schaltfläche „Jetzt kaufen“ auf dem Banner der Homepage. 4. Sie werden zu einer Seite weitergeleitet, auf der Sie die Währung auswählen können, die Sie kaufen möchten. Wählen Sie GRT. @@ -127,27 +129,27 @@ Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Binance. 6. Wählen Sie die Menge an GRT, die Sie kaufen möchten. 7. Überprüfen Sie Ihren Kauf und klicken Sie auf „GRT kaufen“. 8. Bestätigen Sie Ihren Kauf und Sie werden Ihr GRT in Ihrer Binance Spot Wallet sehen können. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. +9. Sie können das GRT von Ihrem Konto auf Ihre Wallet wie [MetaMask](https://metamask.io/) abheben. + - [Um das GRT auf Ihre Wallet abzuheben](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570), fügen Sie die Adresse Ihres Wallets der Whitelist für Abhebungen hinzu. - Klicken Sie auf die Schaltfläche „Wallet“, klicken Sie auf Abheben und wählen Sie GRT. - Geben Sie den GRT-Betrag ein, den Sie senden möchten, und die Wallet-Adresse, die auf der Whitelist steht. - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Sie können mehr über den Erwerb von GRT auf Binance [hier](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582) erfahren. ### Uniswap So können Sie GRT auf Uniswap kaufen. -1. 
Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. +1. Gehen Sie zu [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) und verbinden Sie Ihre Wallet. 2. Wählen Sie den Token, von dem Sie tauschen möchten. Wählen Sie ETH. 3. Wählen Sie den Token, in den Sie tauschen möchten. Wählen Sie GRT. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) + - Vergewissern Sie sich, dass Sie gegen den richtigen Token tauschen. Die GRT Smart Contract Adresse auf Arbitrum One ist: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) 4. Geben Sie den Betrag an ETH ein, den Sie tauschen möchten. 5. Klicken Sie auf „Swap“. 6. Bestätigen Sie die Transaktion in Ihrer Wallet und warten Sie auf die Abwicklung der Transaktion. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Sie können mehr über den Erwerb von GRT auf Uniswap [hier](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-) erfahren. ## Ether erhalten @@ -157,7 +159,7 @@ In diesem Abschnitt erfahren Sie, wie Sie Ether (ETH) erhalten können, um Trans Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Gehen Sie zu [Coinbase](https://www.coinbase.com/) und erstellen Sie ein Konto. 2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. 3. 
Sobald Sie Ihre Identität bestätigt haben, können Sie ETH kaufen, indem Sie auf die Schaltfläche „Kaufen/Verkaufen“ oben rechts auf der Seite klicken. 4. Wählen Sie die Währung, die Sie kaufen möchten. Wählen Sie ETH. @@ -165,20 +167,20 @@ Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Coinbase. 6. Geben Sie die Menge an ETH ein, die Sie kaufen möchten. 7. Überprüfen Sie Ihren Kauf und klicken Sie auf „ETH kaufen“. 8. Bestätigen Sie Ihren Kauf und Sie haben erfolgreich ETH gekauft. -9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/). +9. Sie können die ETH von Ihrem Coinbase-Konto auf Ihr Wallet wie [MetaMask](https://metamask.io/) übertragen. - Um die ETH auf Ihre Wallet zu übertragen, klicken Sie auf die Schaltfläche „Konten“ oben rechts auf der Seite. - Klicken Sie auf die Schaltfläche „Senden“ neben dem ETH-Konto. - Geben Sie den ETH-Betrag ein, den Sie senden möchten, und die Wallet-Adresse, an die Sie ihn senden möchten. - Stellen Sie sicher, dass Sie an Ihre Ethereum Wallet Adresse auf Arbitrum One senden. - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Sie können mehr über den Erwerb von ETH auf Coinbase [hier](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency) erfahren. ### Binance Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von ETH auf Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Gehen Sie zu [Binance](https://www.binance.com/en) und erstellen Sie ein Konto. 2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. 
Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. 3. Sobald Sie Ihre Identität verifiziert haben, kaufen Sie ETH, indem Sie auf die Schaltfläche „Jetzt kaufen“ auf dem Banner der Homepage klicken. 4. Wählen Sie die Währung, die Sie kaufen möchten. Wählen Sie ETH. @@ -186,14 +188,14 @@ Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von ETH auf Binance. 6. Geben Sie die Menge an ETH ein, die Sie kaufen möchten. 7. Überprüfen Sie Ihren Kauf und klicken Sie auf „ETH kaufen“. 8. Bestätigen Sie Ihren Kauf und Sie werden Ihre ETH in Ihrer Binance Spot Wallet sehen. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Sie können die ETH von Ihrem Konto auf Ihr Wallet wie [MetaMask](https://metamask.io/) abheben. - Um die ETH auf Ihre Wallet abzuheben, fügen Sie die Adresse Ihrer Wallet zur Abhebungs-Whitelist hinzu. - Klicken Sie auf die Schaltfläche „Wallet“, klicken Sie auf „withdraw“ und wählen Sie ETH. - Geben Sie den ETH-Betrag ein, den Sie senden möchten, und die Adresse der Wallet, die auf der Whitelist steht, an die Sie den Betrag senden möchten. - Stellen Sie sicher, dass Sie an Ihre Ethereum Wallet Adresse auf Arbitrum One senden. - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Sie können mehr über den Erwerb von ETH auf Binance [hier](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582) erfahren. ## FAQs zur Rechnungsstellung @@ -203,11 +205,11 @@ Sie müssen nicht im Voraus wissen, wie viele Abfragen Sie benötigen werden. Ih Wir empfehlen Ihnen, die Anzahl der Abfragen, die Sie benötigen, zu überschlagen, damit Sie Ihr Guthaben nicht häufig aufstocken müssen. 
Eine gute Schätzung für kleine bis mittelgroße Anwendungen ist, mit 1 Mio. bis 2 Mio. Abfragen pro Monat zu beginnen und die Nutzung in den ersten Wochen genau zu überwachen. Bei größeren Anwendungen ist es sinnvoll, die Anzahl der täglichen Besuche auf Ihrer Website mit der Anzahl der Abfragen zu multiplizieren, die Ihre aktivste Seite beim Öffnen auslöst. -Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage. +Natürlich können sich sowohl neue als auch bestehende Nutzer an das BD-Team von Edge & Node wenden, um mehr über die voraussichtliche Nutzung zu erfahren. ### Kann ich GRT von meinem Rechnungssaldo abheben? -Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). +Ja, Sie können jederzeit GRT, die nicht bereits für Abfragen verwendet wurden, von Ihrem Abrechnungskonto abheben. Der Abrechnungsvertrag ist nur dafür gedacht, GRT aus dem Ethereum-Mainnet in das Arbitrum-Netzwerk zu übertragen. Wenn Sie Ihre GRT von Arbitrum zurück ins Ethereum Mainnet transferieren möchten, müssen Sie die [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) verwenden. ### Was passiert, wenn mein Guthaben aufgebraucht ist? Werde ich eine Warnung erhalten? 
diff --git a/website/src/pages/de/subgraphs/developing/_meta-titles.json b/website/src/pages/de/subgraphs/developing/_meta-titles.json index 01a91b09ed77..7035d7a7491b 100644 --- a/website/src/pages/de/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/de/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { - "creating": "Creating", - "deploying": "Deploying", - "publishing": "Publishing", - "managing": "Managing" + "creating": "Erstellen", + "deploying": "Bereitstellen", + "publishing": "Veröffentlichen", + "managing": "Verwalten" } diff --git a/website/src/pages/de/subgraphs/developing/creating/advanced.mdx b/website/src/pages/de/subgraphs/developing/creating/advanced.mdx index 1a8debdf98c5..e1245dcae9a8 100644 --- a/website/src/pages/de/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/advanced.mdx @@ -1,43 +1,43 @@ --- -title: Advanced Subgraph Features +title: Erweiterte Subgraph-Funktionen --- ## Überblick -Add and implement advanced subgraph features to enhanced your subgraph's built. +Fügen Sie fortgeschrittene Subgraph-Funktionen hinzu und implementieren Sie sie, um Ihre Subgraphen zu verbessern. 
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Ab `specVersion` `0.0.4` müssen Subgraph-Funktionen explizit im Abschnitt `features` auf der obersten Ebene der Manifestdatei unter Verwendung ihres `camelCase`-Namens deklariert werden, wie in der folgenden Tabelle aufgeführt: -| Feature | Name | -| ---------------------------------------------------- | ---------------- | -| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | -| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | -| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | +| Funktion | Name | +| ------------------------------------------------- | ---------------- | +| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | +| [Volltextsuche](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +Wenn ein Subgraph beispielsweise die Funktionen **Volltextsuche** und **Nicht fatale Fehler** verwendet, sollte das Feld `features` im Manifest lauten: ```yaml -specVersion: 0.0.4 -description: Gravatar for Ethereum -features: +specVersion: 1.3.0 +description: Gravatar für Ethereum +features: - fullTextSearch - nonFatalErrors -dataSources: ... +dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Beachten Sie, dass die Verwendung einer Funktion ohne deren Deklaration zu einem **Validierungsfehler** bei der Bereitstellung des Subgraphen führt, aber keine Fehler auftreten, wenn eine Funktion deklariert, aber nicht verwendet wird. 
-## Timeseries and Aggregations +## Subgraph Best Practice 5: Zeitreihen und Aggregationen -Prerequisites: +Voraussetzungen: -- Subgraph specVersion must be ≥1.1.0. +- Subgraph specVersion muss ≥1.1.0 sein. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Zeitreihen und Aggregationen ermöglichen es Ihrem Subgraph, Statistiken wie den täglichen Durchschnittspreis, stündliche Gesamttransfers und mehr zu verfolgen. -This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +Mit dieser Funktion werden zwei neue Typen von Subgraph-Entitäten eingeführt. Zeitreihen-Entitäten zeichnen Datenpunkte mit Zeitstempeln auf. Aggregations-Entitäten führen vordeklarierte Berechnungen an den Zeitreihen-Datenpunkten auf stündlicher oder täglicher Basis durch und speichern dann die Ergebnisse für den einfachen Zugriff über GraphQL. -### Example Schema +### Beispiel-Schema ```graphql type Data @entity(timeseries: true) { @@ -46,130 +46,130 @@ type Data @entity(timeseries: true) { price: BigDecimal! } -type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") } ``` -### How to Define Timeseries and Aggregations +### Definition von Zeitreihen und Aggregationen -Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must: +Zeitreihenentitäten werden mit `@entity(timeseries: true)` im GraphQL-Schema definiert. 
Jede Zeitreihen-Entität muss: -- have a unique ID of the int8 type -- have a timestamp of the Timestamp type -- include data that will be used for calculation by aggregation entities. +- eine eindeutige ID vom Typ `int8` haben +- einen Zeitstempel vom Typ `Timestamp` haben +- Daten enthalten, die von den Aggregations-Entitäten für die Berechnung verwendet werden. -These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities. +Diese Timeseries-Entitäten können in regulären Trigger-Handlern gespeichert werden und dienen als „Rohdaten“ für die Aggregationsentitäten. Aggregation entities are defined with `@aggregation` in the GraphQL schema. Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). -Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. +Die Aggregations-Entitäten werden automatisch auf der Grundlage der angegebenen Quelle am Ende des gewünschten Intervalls berechnet. -#### Available Aggregation Intervals +#### Verfügbare Aggregationsintervalle -- `hour`: sets the timeseries period every hour, on the hour. -- `day`: sets the timeseries period every day, starting and ending at 00:00. +- `hour`: setzt den Zeitraum der Zeitreihe stündlich, zur vollen Stunde. +- `day`: legt den Zeitraum der Zeitreihe für jeden Tag fest, beginnend und endend um 00:00 Uhr. -#### Available Aggregation Functions +#### Verfügbare Aggregationsfunktionen -- `sum`: Total of all values. -- `count`: Number of values. -- `min`: Minimum value. -- `max`: Maximum value. -- `first`: First value in the period. -- `last`: Last value in the period. +- `sum`: Summe aller Werte. +- `count`: Anzahl der Werte. +- `min`: Minimaler Wert. +- `max`: Maximaler Wert. 
+- `first`: Erster Wert in der Periode. +- `last`: Letzter Wert in der Periode. -#### Example Aggregations Query +#### Beispiel-Aggregationsabfrage ```graphql { - stats(interval: "hour", where: { timestamp_gt: 1704085200 }) { + stats(interval: "hour", where: { timestamp_gt: 1704085200 }) { id - timestamp - sum + timestamp + sum } } ``` -[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. +[Lesen Sie mehr](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) über Zeitreihen und Aggregationen. ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexierungsfehler bei bereits synchronisierten Subgraphen führen standardmäßig dazu, dass der Subgraph fehlschlägt und die Synchronisierung beendet wird. Subgraphen können alternativ so konfiguriert werden, dass die Synchronisierung bei Fehlern fortgesetzt wird, indem die vom Handler, der den Fehler verursacht hat, vorgenommenen Änderungen ignoriert werden. Dies gibt Subgraph-Autoren Zeit, ihre Subgraphen zu korrigieren, während die Abfragen weiterhin gegen den letzten Block ausgeführt werden, obwohl die Ergebnisse aufgrund des Bugs, der den Fehler verursacht hat, inkonsistent sein könnten. Beachten Sie, dass einige Fehler immer noch fatal sind. Um nicht fatal zu sein, muss der Fehler als deterministisch bekannt sein. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Hinweis:** Das The Graph Netzwerk unterstützt noch keine nicht-fatalen Fehler, und Entwickler sollten keine Subgraphen, die diese Funktionalität nutzen, über das Studio im Netzwerk bereitstellen. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Zur Aktivierung von nicht schwerwiegenden Fehlern muss das folgende Funktionskennzeichen im Manifest des Subgraphen gesetzt werden: ```yaml -specVersion: 0.0.4 -description: Gravatar for Ethereum -features: +specVersion: 1.3.0 +description: Gravatar für Ethereum +features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +Die Abfrage muss sich über das Argument `subgraphError` auch ausdrücklich dafür entscheiden, Daten mit potenziellen Inkonsistenzen abzufragen.
Es wird auch empfohlen, `_meta` abzufragen, um zu prüfen, ob der Subgraph Fehler übersprungen hat, wie in diesem Beispiel: ```graphql foos(first: 100, subgraphError: allow) { - id + id } _meta { - hasIndexingErrors + hasIndexingErrors } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +Wenn der Subgraph auf einen Fehler stößt, gibt diese Abfrage sowohl die Daten als auch einen GraphQL-Fehler mit der Meldung `"indexing_error"` zurück, wie in dieser Beispielantwort: ```graphql -"data": { - "foos": [ - { - "id": "0xdead" - } - ], - "_meta": { - "hasIndexingErrors": true - } +"data": { + "foos": [ + { + "id": "0xdead" + } + ], + "_meta": { + "hasIndexingErrors": true + } }, "errors": [ - { - "message": "indexing_error" - } + { + "message": "indexing_error" + } ] ``` -## IPFS/Arweave File Data Sources +## IPFS/Arweave-Dateidatenquellen -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +Dateidatenquellen sind eine neue Subgraph-Funktionalität für den Zugriff auf Off-Chain-Daten während der Indizierung in einer robusten, erweiterbaren Weise. Dateidatenquellen unterstützen das Abrufen von Dateien aus dem IPFS und aus Arweave. -> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. +> Damit wird auch die Grundlage für die deterministische Indizierung von Off-Chain-Daten sowie für die potenzielle Einführung beliebiger HTTP-Daten geschaffen. ### Überblick -Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier.
These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found. +Anstatt die Dateien während der Ausführung des Handlers „in line“ zu holen, werden Vorlagen eingeführt, die als neue Datenquellen für eine bestimmte Dateikennung erzeugt werden können. Diese neuen Datenquellen rufen die Dateien ab, versuchen es erneut, wenn sie nicht erfolgreich sind, und führen einen speziellen Handler aus, wenn die Datei gefunden wird. -This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources. +Dies ist vergleichbar mit den [bestehenden Datenquellen-Vorlagen](/developing/creating-a-subgraph/#data-source-templates), die zur dynamischen Erstellung neuer kettenbasierter Datenquellen verwendet werden. -> This replaces the existing `ipfs.cat` API +> Dies ersetzt die bestehende `ipfs.cat`-API. -### Upgrade guide +### Upgrade-Leitfaden -#### Update `graph-ts` and `graph-cli` +#### Aktualisierung von `graph-ts` und `graph-cli` -File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1 +Dateidatenquellen erfordern graph-ts >=0.29.0 und graph-cli >=0.33.1 -#### Add a new entity type which will be updated when files are found +#### Hinzufügen eines neuen Entitätstyps, der aktualisiert wird, wenn Dateien gefunden werden -File data sources cannot access or update chain-based entities, but must update file specific entities. +Dateidatenquellen können nicht auf kettenbasierte Entitäten zugreifen oder diese aktualisieren, sondern müssen dateispezifische Entitäten aktualisieren. -This may mean splitting out fields from existing entities into separate entities, linked together. +Dies kann bedeuten, dass Felder aus bestehenden Entitäten in separate, miteinander verbundene Entitäten aufgeteilt werden.
-Original combined entity: +Ursprüngliche kombinierte Entität: ```graphql type Token @entity { @@ -187,7 +187,7 @@ type Token @entity { } ``` -New, split entity: +Neue, aufgeteilte Entität: ```graphql type Token @entity { @@ -208,64 +208,64 @@ type TokenMetadata @entity { } ``` -If the relationship is 1:1 between the parent entity and the resulting file data source entity, the simplest pattern is to link the parent entity to a resulting file entity by using the IPFS CID as the lookup. Get in touch on Discord if you are having difficulty modelling your new file-based entities! +Wenn die Beziehung zwischen der übergeordneten Entität und der resultierenden Dateidatenquellen-Entität 1:1 ist, besteht das einfachste Muster darin, die übergeordnete Entität mit einer resultierenden Datei-Entität zu verknüpfen, indem die IPFS CID als Lookup verwendet wird. Kontaktieren Sie uns auf Discord, wenn Sie Schwierigkeiten bei der Modellierung Ihrer neuen dateibasierten Entitäten haben! -> You can use [nested filters](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities. +> Sie können [verschachtelte Filter](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) verwenden, um übergeordnete Entitäten auf der Grundlage dieser verschachtelten Entitäten zu filtern. -#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave` +#### Hinzufügen einer neuen Schablonen-Datenquelle mit `kind: file/ipfs` oder `kind: file/arweave` -This is the data source which will be spawned when a file of interest is identified. +Dies ist die Datenquelle, die erzeugt wird, wenn eine Datei von Interesse identifiziert wird.
```yaml -templates: - - name: TokenMetadata - kind: file/ipfs +templates: + - name: TokenMetadata + kind: file/ipfs mapping: - apiVersion: 0.0.7 - language: wasm/assemblyscript - file: ./src/mapping.ts + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/mapping.ts handler: handleMetadata - entities: - - TokenMetadata + entities: + - TokenMetadata abis: - name: Token - file: ./abis/Token.json + file: ./abis/Token.json ``` -> Currently `abis` are required, though it is not possible to call contracts from within file data sources +> Derzeit sind `abis` erforderlich, obwohl es nicht möglich ist, Verträge aus Dateidatenquellen heraus aufzurufen. -The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details. +Die Dateidatenquelle muss alle Entitätstypen, mit denen sie interagieren wird, unter `entities` ausdrücklich erwähnen. Siehe [Einschränkungen](#limitations) für weitere Details. -#### Create a new handler to process files +#### Erstellen Sie einen neuen Handler zur Verarbeitung von Dateien -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/subgraphs/developing/creating/graph-ts/api/#json-api)). +Dieser Handler sollte einen `Bytes`-Parameter akzeptieren, der den Inhalt der Datei darstellt, sobald diese gefunden wurde; dieser Inhalt kann dann verarbeitet werden. Oft handelt es sich dabei um eine JSON-Datei, die mit `graph-ts`-Helfern verarbeitet werden kann ([Dokumentation](/subgraphs/developing/creating/graph-ts/api/#json-api)).
-The CID of the file as a readable string can be accessed via the `dataSource` as follows: +Auf die CID der Datei als lesbare Zeichenkette kann über die `dataSource` wie folgt zugegriffen werden: ```typescript const cid = dataSource.stringParam() ``` -Example handler: +Beispiel-Handler: ```typescript -import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' -import { TokenMetadata } from '../generated/schema' +import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' +import { TokenMetadata } from '../generated/schema' export function handleMetadata(content: Bytes): void { - let tokenMetadata = new TokenMetadata(dataSource.stringParam()) + let tokenMetadata = new TokenMetadata(dataSource.stringParam()) const value = json.fromBytes(content).toObject() if (value) { - const image = value.get('image') + const image = value.get('image') const name = value.get('name') - const description = value.get('description') - const externalURL = value.get('external_url') + const description = value.get('description') + const externalURL = value.get('external_url') if (name && image && description && externalURL) { - tokenMetadata.name = name.toString() - tokenMetadata.image = image.toString() + tokenMetadata.name = name.toString() + tokenMetadata.image = image.toString() tokenMetadata.externalURL = externalURL.toString() - tokenMetadata.description = description.toString() + tokenMetadata.description = description.toString() } tokenMetadata.save() @@ -273,24 +273,24 @@ export function handleMetadata(content: Bytes): void { } ``` -#### Spawn file data sources when required +#### Dateidatenquellen bei Bedarf erzeugen -You can now create file data sources during execution of chain-based handlers: +Sie können jetzt Dateidatenquellen während der Ausführung von kettenbasierten Handlern erstellen: -- Import the template from the auto-generated `templates` -- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave +- Importieren Sie die Vorlage aus den automatisch generierten `templates` +- Rufen Sie `TemplateName.create(cid: string)` innerhalb eines Mappings auf, wobei cid ein gültiger Inhaltsbezeichner für IPFS oder Arweave ist -For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifiers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). +Für IPFS unterstützt Graph Node [v0- und v1-Inhaltsbezeichner](https://docs.ipfs.tech/concepts/content-addressing/) und Inhaltsbezeichner mit Verzeichnissen (z. B. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). -For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing).
+Für Arweave kann Graph Node ab Version 0.33.0 Dateien, die auf Arweave gespeichert sind, basierend auf ihrer [Transaktions-ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) von einem Arweave-Gateway ([Beispieldatei](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)) abrufen. Arweave unterstützt Transaktionen, die über Irys (früher Bundlr) hochgeladen werden, und Graph Node kann auch Dateien auf der Grundlage von [Irys-Manifesten](https://docs.irys.xyz/overview/gateways#indexing) abrufen. -Example: +Beispiel: ```typescript import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +// Dieser Beispielcode ist für einen Crypto Coven Subgraph. Der obige ipfs-Hash ist ein Verzeichnis mit Token-Metadaten für alle Crypto Coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" + // Dies erstellt einen Pfad zu den Metadaten für ein einzelnes Crypto Coven NFT. Es verknüpft das Verzeichnis mit "/" + Dateiname + ".json". token.ipfsURI = tokenIpfsHash @@ -313,251 +313,251 @@ export function handleTransfer(event: TransferEvent): void { } ``` -This will create a new file data source, which will poll Graph Node's configured IPFS or Arweave endpoint, retrying if it is not found. When the file is found, the file data source handler will be executed.
+Dadurch wird eine neue Dateidatenquelle erstellt, die den konfigurierten IPFS- oder Arweave-Endpunkt des Graph Node abfragt und es erneut versucht, wenn sie nicht gefunden wird. Wenn die Datei gefunden wird, wird der Dateidatenquellen-Handler ausgeführt. -This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. +In diesem Beispiel wird die CID als Lookup zwischen der übergeordneten `Token`-Entität und der daraus resultierenden `TokenMetadata`-Entität verwendet. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Früher hätte ein Subgraph-Entwickler an dieser Stelle `ipfs.cat(CID)` aufgerufen, um die Datei zu holen -Congratulations, you are using file data sources! +Herzlichen Glückwunsch, Sie verwenden Dateidatenquellen! -#### Deploying your subgraphs +#### Bereitstellung Ihrer Subgraphen -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +Sie können Ihren Subgraphen jetzt mit `build` erstellen und auf jedem Graph Node >=v0.30.0-rc.0 mit `deploy` bereitstellen. -#### Limitations +#### Einschränkungen -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: +Dateidatenquellen-Handler und -Entitäten sind von anderen Subgraph-Entitäten isoliert, wodurch sichergestellt wird, dass sie bei ihrer Ausführung deterministisch sind und keine Kontamination von kettenbasierten Datenquellen erfolgt. Um genau zu sein: -- Entities created by File Data Sources are immutable, and cannot be updated -- File Data Source handlers cannot access entities from other file data sources -- Entities associated with File Data Sources cannot be accessed by chain-based handlers +- Von Dateidatenquellen erstellte Entitäten sind unveränderlich und können nicht aktualisiert werden.
+- Dateidatenquellen-Handler können nicht auf Entitäten aus anderen Dateidatenquellen zugreifen +- Auf Entitäten, die mit Dateidatenquellen verknüpft sind, kann von kettenbasierten Handlern nicht zugegriffen werden -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> Während diese Einschränkung für die meisten Anwendungsfälle nicht problematisch sein sollte, kann sie für einige Fälle zu mehr Komplexität führen. Bitte kontaktieren Sie uns über Discord, wenn Sie Probleme bei der Modellierung Ihrer dateibasierten Daten in einem Subgraphen haben! -Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. +Außerdem ist es nicht möglich, Datenquellen aus einer Dateidatenquelle zu erstellen, sei es eine Onchain-Datenquelle oder eine andere Dateidatenquelle. Diese Einschränkung kann in Zukunft aufgehoben werden. -#### Best practices +#### Bewährte Praktiken -If you are linking NFT metadata to corresponding tokens, use the metadata's IPFS hash to reference a Metadata entity from the Token entity. Save the Metadata entity using the IPFS hash as an ID. +Wenn Sie NFT-Metadaten mit entsprechenden Token verknüpfen, verwenden Sie den IPFS-Hash der Metadaten, um eine Metadaten-Entität von der Token-Entität zu referenzieren. Speichern Sie die Metadaten-Entität unter Verwendung des IPFS-Hashs als ID. -You can use [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. 
+Sie können [DataSource-Kontext](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) beim Erstellen von Dateidatenquellen verwenden, um zusätzliche Informationen zu übergeben, die dem Dateidatenquellen-Handler zur Verfügung stehen. -If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity. +Wenn Sie Entitäten haben, die mehrfach aktualisiert werden, erstellen Sie eindeutige dateibasierte Entitäten unter Verwendung des IPFS-Hashes und der Entitäts-ID, und verweisen Sie auf sie mit einem abgeleiteten Feld in der kettenbasierten Entität. -> We are working to improve the above recommendation, so queries only return the "most recent" version +> Wir arbeiten daran, die obige Empfehlung zu verbessern, so dass Abfragen nur die „aktuellste“ Version zurückgeben -#### Known issues +#### Bekannte Probleme -File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI. +Dateidatenquellen erfordern derzeit ABIs, auch wenn ABIs nicht verwendet werden ([Issue](https://github.com/graphprotocol/graph-cli/issues/961)). Eine Umgehung besteht darin, ein beliebiges ABI hinzuzufügen. -Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file. +Handler für Dateidatenquellen dürfen nicht in Dateien liegen, die `eth_call`-Vertragsbindungen importieren; andernfalls schlagen sie mit "unknown import: `ethereum::ethereum.call` has not been defined" fehl ([Issue](https://github.com/graphprotocol/graph-node/issues/4309)). Eine Umgehung besteht darin, Dateidatenquellen-Handler in einer eigenen Datei zu erstellen.
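Die oben empfohlene Bildung eindeutiger IDs für dateibasierte Entitäten aus IPFS-Hash und Entitäts-ID lässt sich etwa so skizzieren; die Hilfsfunktion `fileEntityId` ist hypothetisch und dient nur der Veranschaulichung des Musters:

```typescript
// Hypothetische Hilfsfunktion (kein Teil von graph-ts): setzt eine eindeutige
// ID für eine dateibasierte Entität aus IPFS-CID und Entitäts-ID zusammen.
function fileEntityId(ipfsCid: string, entityId: string): string {
  return `${ipfsCid}-${entityId}`
}

// Jede Aktualisierung erzeugt so eine neue, unveränderliche Datei-Entität.
const id = fileEntityId('QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm', '1234')
```

In der kettenbasierten Entität kann dann ein abgeleitetes Feld auf diese IDs verweisen.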
#### Beispiele -[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) +[Crypto Coven Subgraph Migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) -#### References +#### Referenzen -[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) +[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) -## Indexed Argument Filters / Topic Filters +## Indizierte Argumentfilter / Themenfilter -> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` +> **Benötigt**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Themenfilter, auch bekannt als Filter für indizierte Argumente, sind eine leistungsstarke Funktion in Subgraphen, die es Benutzern ermöglicht, Blockchain-Ereignisse auf der Grundlage der Werte ihrer indizierten Argumente genau zu filtern. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- Diese Filter helfen dabei, bestimmte Ereignisse von Interesse aus dem riesigen Strom von Ereignissen auf der Blockchain zu isolieren, so dass Subgraphen effizienter arbeiten können, indem sie sich nur auf relevante Daten konzentrieren. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- Dies ist nützlich, um persönliche Subgraphen zu erstellen, die bestimmte Adressen und ihre Interaktionen mit verschiedenen Smart Contracts auf der Blockchain verfolgen.
-### How Topic Filters Work +### Wie Themenfilter funktionieren -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +Wenn ein Smart Contract ein Ereignis auslöst, können alle Argumente, die als indiziert markiert sind, als Filter im Manifest eines Subgraphen verwendet werden. Dies ermöglicht es dem Subgraph, selektiv auf Ereignisse zu warten, die diesen indizierten Argumenten entsprechen. -- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. +- Das erste indizierte Argument des Ereignisses entspricht `topic1`, das zweite `topic2` und so weiter bis `topic3`, da die Ethereum Virtual Machine (EVM) bis zu drei indizierte Argumente pro Ereignis erlaubt. ```solidity -// SPDX-License-Identifier: MIT +// SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract Token { - // Event declaration with indexed parameters for addresses - event Transfer(address indexed from, address indexed to, uint256 value); - - // Function to simulate transferring tokens - function transfer(address to, uint256 value) public { - // Emitting the Transfer event with from, to, and value - emit Transfer(msg.sender, to, value); - } + // Ereignisdeklaration mit indizierten Parametern für Adressen + event Transfer(address indexed from, address indexed to, uint256 value); + + // Funktion zur Simulation der Übertragung von Token + function transfer(address to, uint256 value) public { + // Senden des Transfer-Ereignisses mit from, to, und value + emit Transfer(msg.sender, to, value); + } } ``` -In this example: +In diesem Beispiel: -- The `Transfer` event is used to log transactions of tokens between addresses.
-- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. -- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. +- Das `Transfer`-Ereignis wird verwendet, um Token-Transaktionen zwischen Adressen zu protokollieren. +- Die Parameter `from` und `to` sind indiziert, so dass Event-Listener Übertragungen mit bestimmten Adressen filtern und überwachen können. +- Die Funktion `transfer` ist eine einfache Darstellung einer Token-Transfer-Aktion, die bei jedem Aufruf das `Transfer`-Ereignis auslöst. -#### Configuration in Subgraphs +#### Konfiguration in Subgraphen -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Themenfilter werden direkt in der Event-Handler-Konfiguration im Subgraph-Manifest definiert. Hier sehen Sie, wie sie konfiguriert werden: ```yaml eventHandlers: - - event: SomeEvent(indexed uint256, indexed address, indexed uint256) + - event: SomeEvent(indexed uint256, indexed address, indexed uint256) handler: handleSomeEvent topic1: ['0xValue1', '0xValue2'] topic2: ['0xAddress1', '0xAddress2'] topic3: ['0xValue3'] ``` -In this setup: +In diesem Setup: -- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. -- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic. +- Dabei entspricht `topic1` dem ersten indizierten Argument des Ereignisses, `topic2` dem zweiten und `topic3` dem dritten. +- Jedes Thema kann einen oder mehrere Werte haben, und ein Ereignis wird nur verarbeitet, wenn es einem der Werte in jedem angegebenen Thema entspricht. -#### Filter Logic +#### Filter-Logik -- Within a Single Topic: The logic functions as an OR condition.
The event will be processed if it matches any one of the listed values in a given topic. -- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler. +- Innerhalb eines einzelnen Themas: Die Logik funktioniert wie eine ODER-Bedingung. Das Ereignis wird verarbeitet, wenn es mit einem der aufgeführten Werte in einem bestimmten Thema übereinstimmt. +- Zwischen verschiedenen Themen: Die Logik funktioniert wie eine UND-Bedingung. Ein Ereignis muss alle angegebenen Bedingungen über verschiedene Themen hinweg erfüllen, um den zugehörigen Handler auszulösen. -#### Example 1: Tracking Direct Transfers from Address A to Address B +#### Beispiel 1: Verfolgung direkter Transfers von Adresse A nach Adresse B ```yaml eventHandlers: - - event: Transfer(indexed address,indexed address,uint256) + - event: Transfer(indexed address,indexed address,uint256) handler: handleDirectedTransfer - topic1: ['0xAddressA'] # Sender Address - topic2: ['0xAddressB'] # Receiver Address + topic1: ['0xAddressA'] # Absenderadresse + topic2: ['0xAddressB'] # Empfängeradresse ``` -In this configuration: +In dieser Konfiguration: -- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. -- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- `topic1` ist so konfiguriert, dass `Transfer`-Ereignisse gefiltert werden, bei denen `0xAddressA` der Absender ist. +- `topic2` ist so konfiguriert, dass `Transfer`-Ereignisse gefiltert werden, bei denen `0xAddressB` der Empfänger ist. +- Der Subgraph indiziert nur Transaktionen, die direkt von `0xAddressA` nach `0xAddressB` erfolgen.
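Die Filterlogik (ODER innerhalb eines Themas, UND zwischen verschiedenen Themen) lässt sich als kleine Skizze nachvollziehen; die Funktion `matchesTopicFilters` ist hypothetisch und kein Teil von Graph Node, sie veranschaulicht nur das beschriebene Verhalten:

```typescript
// Hypothetische Skizze der Themenfilter-Logik:
// ODER innerhalb eines Topics, UND zwischen den Topics.
type TopicFilter = string[] | undefined

function matchesTopicFilters(eventTopics: string[], filters: TopicFilter[]): boolean {
  // UND: jedes angegebene Topic muss passen
  return filters.every((allowed, i) => {
    if (allowed === undefined) return true // kein Filter für dieses Topic
    return allowed.includes(eventTopics[i]) // ODER: ein Treffer in der Liste genügt
  })
}

// Beispiel 1 von oben: topic1 = ['0xAddressA'], topic2 = ['0xAddressB']
const filters: TopicFilter[] = [['0xAddressA'], ['0xAddressB']]
console.log(matchesTopicFilters(['0xAddressA', '0xAddressB'], filters)) // true
console.log(matchesTopicFilters(['0xAddressA', '0xAddressC'], filters)) // false
```

Graph Node wertet die Filter gegen die Topics des Event-Logs aus; die Skizze bildet nur die Mengenlogik nach.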
-#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses +#### Beispiel 2: Verfolgung von Transaktionen in beiden Richtungen zwischen zwei oder mehr Adressen ```yaml eventHandlers: - - event: Transfer(indexed address,indexed address,uint256) - handler: handleTransferToOrFrom - topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address - topic2: ['0xAddressB', '0xAddressC'] # Receiver Address + - event: Transfer(indexed address,indexed address,uint256) + handler: handleTransferToOrFrom + topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Absenderadresse + topic2: ['0xAddressB', '0xAddressC'] # Empfängeradresse ``` -In this configuration: +In dieser Konfiguration: -- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. -- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- `topic1` ist so konfiguriert, dass `Transfer`-Ereignisse gefiltert werden, bei denen `0xAddressA`, `0xAddressB` oder `0xAddressC` der Absender ist. +- `topic2` ist so konfiguriert, dass `Transfer`-Ereignisse gefiltert werden, bei denen `0xAddressB` oder `0xAddressC` der Empfänger ist. +- Der Subgraph indiziert Transaktionen, die in beiden Richtungen zwischen mehreren Adressen stattfinden, und ermöglicht so eine umfassende Überwachung von Interaktionen, die alle Adressen betreffen. -## Declared eth_call +## Deklarierte eth_call -> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. +> Hinweis: Dies ist eine experimentelle Funktion, die derzeit noch nicht in einer stabilen Graph Node-Version verfügbar ist.
Sie können sie nur in Subgraph Studio oder Ihrem selbst gehosteten Knoten verwenden. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Deklarative `eth_calls` sind eine wertvolle Subgraph-Funktion, die es erlaubt, `eth_calls` im Voraus auszuführen, so dass `graph-node` sie parallel ausführen kann. -This feature does the following: +Diese Funktion bewirkt Folgendes: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. -- Allows faster data fetching, resulting in quicker query responses and a better user experience. -- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. +- Erhebliche Verbesserung der Leistung beim Abrufen von Daten aus der Ethereum-Blockchain durch Reduzierung der Gesamtzeit für mehrere Aufrufe und Optimierung der Gesamteffizienz des Subgraphen. +- Ermöglicht einen schnelleren Datenabruf, was zu schnelleren Abfrageantworten und einer besseren Benutzerfreundlichkeit führt. +- Reduziert die Wartezeiten für Anwendungen, die Daten aus mehreren Ethereum-Aufrufen aggregieren müssen, und macht den Datenabrufprozess effizienter. -### Key Concepts +### Schlüsselkonzepte -- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. -- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously. -- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel). +- Deklarative `eth_calls`: Ethereum-Aufrufe, die so definiert sind, dass sie parallel und nicht sequentiell ausgeführt werden.
+- Parallele Ausführung: Anstatt auf das Ende eines Aufrufs zu warten, bevor der nächste gestartet wird, können mehrere Aufrufe gleichzeitig gestartet werden. +- Zeiteffizienz: Die Gesamtzeit, die für alle Aufrufe benötigt wird, ändert sich von der Summe der einzelnen Aufrufzeiten (sequentiell) zur Zeit des längsten Aufrufs (parallel). -#### Scenario without Declarative `eth_calls` +#### Szenario ohne deklarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Stellen Sie sich vor, Sie haben einen Subgraph, der drei Ethereum-Aufrufe tätigen muss, um Daten über die Transaktionen, den Kontostand und den Token-Besitz eines Nutzers abzurufen. -Traditionally, these calls might be made sequentially: +Traditionell würden diese Aufrufe nacheinander erfolgen: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Aufruf 1 (Transaktionen): dauert 3 Sekunden +2. Aufruf 2 (Kontostand): dauert 2 Sekunden +3. Aufruf 3 (Token-Bestände): dauert 4 Sekunden -Total time taken = 3 + 2 + 4 = 9 seconds +Gesamte benötigte Zeit = 3 + 2 + 4 = 9 Sekunden -#### Scenario with Declarative `eth_calls` +#### Szenario mit deklarativen `eth_calls` -With this feature, you can declare these calls to be executed in parallel: +Mit dieser Funktion können Sie deklarieren, dass diese Aufrufe parallel ausgeführt werden sollen: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Aufruf 1 (Transaktionen): dauert 3 Sekunden +2. Aufruf 2 (Kontostand): dauert 2 Sekunden +3. Aufruf 3 (Token-Bestände): dauert 4 Sekunden -Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. +Da diese Aufrufe parallel ausgeführt werden, entspricht die Gesamtzeit der Zeit, die der längste Aufruf benötigt.
-Total time taken = max (3, 2, 4) = 4 seconds +Insgesamt benötigte Zeit = max (3, 2, 4) = 4 Sekunden -#### How it Works +#### So funktioniert's -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Deklarative Definition: Im Subgraph-Manifest deklarieren Sie die Ethereum-Aufrufe in einer Weise, die angibt, dass sie parallel ausgeführt werden können. +2. Parallele Ausführungsmaschine: Die Ausführungsmaschine des Graph Node erkennt diese Deklarationen und führt die Aufrufe gleichzeitig aus. +3. Ergebnis-Aggregation: Sobald alle Aufrufe abgeschlossen sind, werden die Ergebnisse aggregiert und vom Subgraphen für die weitere Verarbeitung verwendet. -#### Example Configuration in Subgraph Manifest +#### Beispielkonfiguration im Subgraph-Manifest -Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. +Deklarierte `eth_calls` können auf die `event.address` des zugrunde liegenden Ereignisses sowie auf alle `event.params` zugreifen. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` unter Verwendung von `event.address`: ```yaml eventHandlers: event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24) handler: handleSwap calls: global0X128: Pool[event.address].feeGrowthGlobal0X128() global1X128: Pool[event.address].feeGrowthGlobal1X128() ``` -Details for the example above: +Details für das obige Beispiel: -- `global0X128` is the declared `eth_call`. -- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors.
-- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` -- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. +- `global0X128` ist der deklarierte `eth_call`. +- Der Text (`global0X128`) ist die Bezeichnung für diesen `eth_call`, die bei der Fehlerprotokollierung verwendet wird. +- Der Text (`Pool[event.address].feeGrowthGlobal0X128()`) ist der eigentliche `eth_call`, der in Form von `Contract[address].function(arguments)` ausgeführt wird. +- Die `address` und die `arguments` können durch Variablen ersetzt werden, die bei der Ausführung des Handlers verfügbar sein werden. -`Subgraph.yaml` using `event.params` +`subgraph.yaml` unter Verwendung von `event.params` ```yaml calls: - ERC20DecimalsToken0: ERC20[event.params.token0].decimals() ``` -### Grafting onto Existing Subgraphs +### Grafting auf bestehende Subgraphen -> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). +> **Hinweis:** Es wird nicht empfohlen, beim ersten Upgrade auf The Graph Network das Grafting zu verwenden. Erfahren Sie mehr [hier](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+Wenn ein Subgraph zum ersten Mal eingesetzt wird, beginnt er mit der Indizierung von Ereignissen am Entstehungsblock der entsprechenden Kette (oder am `startBlock`, der mit jeder Datenquelle definiert ist). Unter bestimmten Umständen ist es von Vorteil, die Daten eines bestehenden Subgraphen wiederzuverwenden und die Indizierung an einem viel späteren Block zu beginnen. Diese Art der Indizierung wird _Grafting_ genannt. Grafting ist z.B. während der Entwicklung nützlich, um einfache Fehler in den Mappings schnell zu beheben oder um einen bestehenden Subgraph nach einem Fehler vorübergehend wieder zum Laufen zu bringen. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +Ein Subgraph wird auf einen Basis-Subgraph gepfropft, wenn das Subgraph-Manifest in `subgraph.yaml` einen `graft`-Block auf der obersten Ebene enthält: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph - block: 7345624 # Block number + base: Qm... # Subgraph-ID des Basis-Subgraphen + block: 7345624 # Blocknummer ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +Wenn ein Subgraph, dessen Manifest einen `graft`-Block enthält, bereitgestellt wird, kopiert Graph Node die Daten des `base`-Subgraphen bis einschließlich des angegebenen `block` und fährt dann mit der Indizierung des neuen Subgraphen ab diesem Block fort.
Der Basis-Subgraph muss auf der Ziel-Graph-Node-Instanz existieren und mindestens bis zum angegebenen Block indexiert sein. Aufgrund dieser Einschränkung sollte Grafting nur während der Entwicklung oder in Notfällen verwendet werden, um die Erstellung eines äquivalenten, nicht gepfropften Subgraphen zu beschleunigen. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Da beim Grafting die Basisdaten kopiert und nicht indiziert werden, ist es viel schneller, den Subgraphen auf den gewünschten Block zu bringen, als wenn er von Grund auf neu indiziert wird, obwohl die anfängliche Datenkopie bei sehr großen Subgraphen immer noch mehrere Stunden dauern kann. Während der Initialisierung des gepfropften Subgraphen protokolliert der Graph Node Informationen über die bereits kopierten Entitätstypen. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. +Der aufgepfropfte Subgraph kann ein GraphQL-Schema verwenden, das nicht identisch mit dem des Basis-Subgraphen ist, sondern lediglich mit diesem kompatibel ist.
Es muss ein eigenständig gültiges Subgraph-Schema sein, darf aber auf folgende Weise vom Schema des Basis-Subgraphen abweichen: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Es fügt Entitätstypen hinzu oder entfernt sie +- Es entfernt Attribute von Entitätstypen +- Es fügt Entitätstypen nullfähige Attribute hinzu +- Es wandelt Nicht-Nullable-Attribute in Nullable-Attribute um +- Es fügt Aufzählungen Werte hinzu +- Es fügt Interfaces hinzu oder entfernt sie +- Es ändert, für welche Entitätstypen ein Interface implementiert wird -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` muss unter `features` im Subgraph-Manifest deklariert werden. diff --git a/website/src/pages/de/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/de/subgraphs/developing/creating/assemblyscript-mappings.mdx index 4354181a33df..e0b1bfea4e2d 100644 --- a/website/src/pages/de/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -39,17 +39,17 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. -The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`. +Der zweite Handler versucht, den vorhandenen `Gravatar` aus dem Graph Node Speicher zu laden. Wenn er noch nicht vorhanden ist, wird er bei Bedarf erstellt. Die Entität wird dann aktualisiert, um den neuen Ereignisparametern zu entsprechen, bevor sie mit „gravatar.save()“ in den Speicher zurückgespeichert wird. ### Recommended IDs for Creating New Entities -It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities. +Es wird dringend empfohlen, `Bytes` als Typ für `id`-Felder zu verwenden und `String` nur für Attribute zu verwenden, die wirklich menschenlesbaren Text enthalten, wie den Namen eines Tokens. Im Folgenden sind einige empfohlene `id`-Werte aufgeführt, die bei der Erstellung neuer Entitäten zu berücksichtigen sind. 
- `transfer.id = event.transaction.hash` - `let id = event.transaction.hash.concatI32(event.logIndex.toI32())` -- For entities that store aggregated data, for e.g, daily trade volumes, the `id` usually contains the day number. Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like +- Bei Entitäten, die aggregierte Daten speichern, z. B. tägliche Handelsvolumina, enthält die „ID“ in der Regel die Tagesnummer. Hier ist die Verwendung von „Bytes“ als „ID“ von Vorteil. Die Bestimmung der `id` würde wie folgt aussehen ```typescript let dayID = event.block.timestamp.toI32() / 86400 @@ -66,13 +66,13 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too When creating and saving a new entity, if an entity with the same ID already exists, the properties of the new entity are always preferred during the merge process. This means that the existing entity will be updated with the values from the new entity. -If a null value is intentionally set for a field in the new entity with the same ID, the existing entity will be updated with the null value. +Wird für ein Feld in der neuen Entität mit der gleichen ID absichtlich ein Nullwert gesetzt, wird die bestehende Entität mit dem Nullwert aktualisiert. If no value is set for a field in the new entity with the same ID, the field will result in null as well. ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. 
This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. 
All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. 
diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..9dace9f39aaf 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,101 +1,107 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Geringfügige Änderungen + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Danke [@isum](https://github.com/isum)! - feat: yaml parsing Unterstützung für Mappings hinzufügen + ## 0.37.0 -### Minor Changes +### Geringfügige Änderungen - [#1843](https://github.com/graphprotocol/graph-tooling/pull/1843) [`c09b56b`](https://github.com/graphprotocol/graph-tooling/commit/c09b56b093f23c80aa5d217b2fd56fccac061145) - Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - Update all dependencies + Danke [@YaroShkvorets](https://github.com/YaroShkvorets)! - Alle Abhängigkeiten aktualisieren ## 0.36.0 -### Minor Changes +### Geringfügige Änderungen - [#1754](https://github.com/graphprotocol/graph-tooling/pull/1754) [`2050bf6`](https://github.com/graphprotocol/graph-tooling/commit/2050bf6259c19bd86a7446410c7e124dfaddf4cd) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for subgraph datasource and - associated types. + Danke [@incrypto32](https://github.com/incrypto32)! - Hinzufügen von Unterstützung für Subgraph-Datenquellen und + zugehörige Typen. ## 0.35.1 -### Patch Changes +### Patch-Änderungen - [#1637](https://github.com/graphprotocol/graph-tooling/pull/1637) [`f0c583f`](https://github.com/graphprotocol/graph-tooling/commit/f0c583f00c90e917d87b707b5b7a892ad0da916f) - Thanks [@incrypto32](https://github.com/incrypto32)! 
- Update return type for ethereum.hasCode + Danke [@incrypto32](https://github.com/incrypto32)! - Rückgabetyp für ethereum.hasCode aktualisieren ## 0.35.0 -### Minor Changes +### Geringfügige Änderungen - [#1609](https://github.com/graphprotocol/graph-tooling/pull/1609) [`e299f6c`](https://github.com/graphprotocol/graph-tooling/commit/e299f6ce5cf1ad74cab993f6df3feb7ca9993254) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for eth.hasCode method + Danke [@incrypto32](https://github.com/incrypto32)! - Unterstützung für die Methode eth.hasCode hinzufügen ## 0.34.0 -### Minor Changes +### Geringfügige Änderungen - [#1522](https://github.com/graphprotocol/graph-tooling/pull/1522) [`d132f9c`](https://github.com/graphprotocol/graph-tooling/commit/d132f9c9f6ea5283e40a8d913f3abefe5a8ad5f8) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL - `Timestamp` scalar as `i64` (AssemblyScript) + Danke [@dotansimha](https://github.com/dotansimha)! - Unterstützung für den Umgang mit dem GraphQL-Skalar + `Timestamp` als `i64` (AssemblyScript) hinzugefügt ## 0.33.0 -### Minor Changes +### Geringfügige Änderungen - [#1584](https://github.com/graphprotocol/graph-tooling/pull/1584) [`0075f06`](https://github.com/graphprotocol/graph-tooling/commit/0075f06ddaa6d37606e42e1c12d11d19674d00ad) - Thanks [@incrypto32](https://github.com/incrypto32)! - Added getBalance call to ethereum API + Danke [@incrypto32](https://github.com/incrypto32)! - getBalance-Aufruf zur Ethereum-API hinzugefügt ## 0.32.0 -### Minor Changes +### Geringfügige Änderungen - [#1523](https://github.com/graphprotocol/graph-tooling/pull/1523) [`167696e`](https://github.com/graphprotocol/graph-tooling/commit/167696eb611db0da27a6cf92a7390e72c74672ca) - Thanks [@xJonathanLEI](https://github.com/xJonathanLEI)! - add starknet data types + Danke [@xJonathanLEI](https://github.com/xJonathanLEI)!
- Starknet-Datentypen hinzufügen ## 0.31.0 -### Minor Changes +### Geringfügige Änderungen - [#1340](https://github.com/graphprotocol/graph-tooling/pull/1340) [`2375877`](https://github.com/graphprotocol/graph-tooling/commit/23758774b33b5b7c6934f57a3e137870205ca6f0) - Thanks [@incrypto32](https://github.com/incrypto32)! - export `loadRelated` host function + Danke [@incrypto32](https://github.com/incrypto32)! - exportieren Sie `loadRelated` Host-Funktion - [#1296](https://github.com/graphprotocol/graph-tooling/pull/1296) [`dab4ca1`](https://github.com/graphprotocol/graph-tooling/commit/dab4ca1f5df7dcd0928bbaa20304f41d23b20ced) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL `Int8` - scalar as `i64` (AssemblyScript) + Danke [@dotansimha](https://github.com/dotansimha)! - Unterstützung für die Behandlung von GraphQL `Int8` hinzugefügt + Skalar als `i64` (AssemblyScript) ## 0.30.0 -### Minor Changes +### Geringfügige Änderungen - [#1299](https://github.com/graphprotocol/graph-tooling/pull/1299) [`3f8b514`](https://github.com/graphprotocol/graph-tooling/commit/3f8b51440db281e69879be7d91d79cd43e45fe86) - Thanks [@saihaj](https://github.com/saihaj)! - introduce new Etherum utility to get a CREATE2 - Address + Danke [@saihaj](https://github.com/saihaj)! - Einführung eines neuen Etherum-Dienstprogramms, um ein CREATE2 zu erhalten + Adresse - [#1306](https://github.com/graphprotocol/graph-tooling/pull/1306) [`f5e4b58`](https://github.com/graphprotocol/graph-tooling/commit/f5e4b58989edc5f3bb8211f1b912449e77832de8) - Thanks [@saihaj](https://github.com/saihaj)! - expose Host's `get_in_block` function + Danke [@saihaj](https://github.com/saihaj)! 
- Die `get_in_block`-Funktion des Hosts verfügbar machen ## 0.29.3 -### Patch Changes +### Patch-Änderungen - [#1057](https://github.com/graphprotocol/graph-tooling/pull/1057) [`b7a2ec3`](https://github.com/graphprotocol/graph-tooling/commit/b7a2ec3e9e2206142236f892e2314118d410ac93) - Thanks [@saihaj](https://github.com/saihaj)! - fix publihsed contents + Danke [@saihaj](https://github.com/saihaj)! - Publizierte Inhalte korrigieren ## 0.29.2 -### Patch Changes +### Patch-Änderungen - [#1044](https://github.com/graphprotocol/graph-tooling/pull/1044) [`8367f90`](https://github.com/graphprotocol/graph-tooling/commit/8367f90167172181870c1a7fe5b3e84d2c5aeb2c) - Thanks [@saihaj](https://github.com/saihaj)! - publish readme with packages + Danke [@saihaj](https://github.com/saihaj)! - Readme mit Paketen veröffentlichen diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/de/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..bf79b8c8eb78 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/README.md @@ -1,30 +1,30 @@ -# The Graph TypeScript Library (graph-ts) +# The Graph-TypeScript-Bibliothek (graph-ts) [![npm (scoped)](https://img.shields.io/npm/v/@graphprotocol/graph-ts.svg)](https://www.npmjs.com/package/@graphprotocol/graph-ts) [![Build Status](https://travis-ci.org/graphprotocol/graph-ts.svg?branch=master)](https://travis-ci.org/graphprotocol/graph-ts) -TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to +TypeScript/AssemblyScript-Bibliothek zum Schreiben von Subgraph-Mappings für den Einsatz auf [The Graph](https://github.com/graphprotocol/graph-node). -## Usage +## Verwendung -For a detailed guide on how to create a subgraph, please see the +Eine detaillierte Anleitung zur Erstellung eines Subgraphen finden Sie in den +[Graph CLI docs](https://github.com/graphprotocol/graph-cli).
-One step of creating the subgraph is writing mappings that will process blockchain events and will -write entities into the store. These mappings are written in TypeScript/AssemblyScript. +Ein Schritt bei der Erstellung des Subgraphen ist das Schreiben von Mappings, die Blockchain-Ereignisse verarbeiten und +Entitäten in den Speicher schreiben. Diese Mappings werden in TypeScript/AssemblyScript geschrieben. -The `graph-ts` library provides APIs to access the Graph Node store, blockchain data, smart -contracts, data on IPFS, cryptographic functions and more. To use it, all you have to do is add a -dependency on it: +Die Bibliothek `graph-ts` bietet APIs für den Zugriff auf den Graph Node-Speicher, Blockchain-Daten, Smart +Contracts, Daten auf IPFS, kryptographische Funktionen und mehr. Um sie zu verwenden, müssen Sie lediglich eine +Abhängigkeit von ihr hinzufügen: ```sh npm install --dev @graphprotocol/graph-ts # NPM yarn add --dev @graphprotocol/graph-ts # Yarn ``` -After that, you can import the `store` API and other features from this library in your mappings. A -few examples: +Danach können Sie die `store`-API und andere Funktionen aus dieser Bibliothek in Ihre Mappings importieren. Ein +paar Beispiele: ```typescript import { crypto, store } from '@graphprotocol/graph-ts' @@ -50,19 +50,19 @@ function handleNameRegistered(event: NameRegistered) { } ``` -## Helper Functions for AssemblyScript +## Hilfsfunktionen für AssemblyScript -Refer to the `helper-functions.ts` file in +Siehe die Datei `helper-functions.ts` in [this](https://github.com/graphprotocol/graph-tooling/blob/main/packages/ts/helper-functions.ts) -repository for a few common functions that help build on top of the AssemblyScript library, such as -byte array concatenation, among others. +Repository für einige allgemeine Funktionen, die helfen, auf der AssemblyScript-Bibliothek aufzubauen, wie +unter anderem die Byte-Array-Verkettung.
## API -Documentation on the API can be found -[here](https://thegraph.com/docs/en/developer/assemblyscript-api/). +Die Dokumentation zur API finden Sie +[hier](https://thegraph.com/docs/en/developer/assemblyscript-api/). -For examples of `graph-ts` in use take a look at one of the following subgraphs: +Beispiele für die Verwendung von `graph-ts` finden Sie in einem der folgenden Subgraphen: - https://github.com/graphprotocol/ens-subgraph - https://github.com/graphprotocol/decentraland-subgraph @@ -71,15 +71,15 @@ For examples of `graph-ts` in use take a look at one of the following subgraphs: - https://github.com/graphprotocol/aragon-subgraph - https://github.com/graphprotocol/dharma-subgraph -## License +## Lizenz -Copyright © 2018 Graph Protocol, Inc. and contributors. +Copyright © 2018 Graph Protocol, Inc. und Mitwirkende. -The Graph TypeScript library is dual-licensed under the -[MIT license](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) and the -[Apache License, Version 2.0](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-APACHE). +The Graph TypeScript-Bibliothek ist doppelt lizenziert unter der +[MIT-Lizenz](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) und der +[Apache-Lizenz, Version 2.0](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-APACHE). -Unless required by applicable law or agreed to in writing, software distributed under the License is -distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -implied. See the License for the specific language governing permissions and limitations under the -License. +Sofern nicht durch geltendes Recht vorgeschrieben oder schriftlich vereinbart, wird die unter dieser Lizenz vertriebene Software +auf einer „AS IS“-Basis verteilt, OHNE GARANTIEN ODER BEDINGUNGEN JEGLICHER ART, weder ausdrücklich noch +stillschweigend. 
In der Lizenz finden Sie die spezifischen Bestimmungen zu den Rechten und Beschränkungen unter der +Lizenz. diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/de/subgraphs/developing/creating/graph-ts/_meta-titles.json index a6ca184af501..60d143e1f518 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/_meta-titles.json +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -1,5 +1,5 @@ { "README": "Einführung", - "api": "API Reference", - "common-issues": "Common Issues" + "api": "API-Referenz", + "common-issues": "Häufige Probleme" } diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx index 6106b8cdf0dc..c56511a3a35c 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,49 +2,49 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Hinweis: Wenn Sie einen Subgraph vor der Version `graph-cli`/`graph-ts` `0.22.0` erstellt haben, dann verwenden Sie eine ältere Version von AssemblyScript. Es wird empfohlen, den [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/) zu lesen. -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Erfahren Sie, welche eingebauten APIs beim Schreiben von Subgraph-Mappings verwendet werden können.
Es gibt zwei Arten von APIs, die standardmäßig verfügbar sind: -- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Die [Graph-TypeScript-Bibliothek](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code, der von `graph codegen` aus Subgraph-Dateien erzeugt wird -You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). +Sie können auch andere Bibliotheken als Abhängigkeiten hinzufügen, solange sie mit [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) kompatibel sind. -Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). +Da die Mappings in AssemblyScript geschrieben werden, ist es nützlich, die Sprach- und Standardbibliotheksfunktionen im [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) zu überprüfen. -## API Reference +## API-Referenz -The `@graphprotocol/graph-ts` library provides the following APIs: +Die Bibliothek `@graphprotocol/graph-ts` bietet die folgenden APIs: -- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. -- A `store` API to load and save entities from and to the Graph Node store. -- A `log` API to log messages to the Graph Node output and Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. -- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. +- Eine `ethereum`-API für die Arbeit mit Ethereum-Smart Contracts, Ereignissen, Blöcken, Transaktionen und Ethereum-Werten.
+- Eine `store`-API zum Laden und Speichern von Entitäten aus und in den Graph Node-Speicher. +- Eine `log`-API zur Protokollierung von Meldungen an die Graph Node-Ausgabe und den Graph Explorer. +- Eine `ipfs`-API zum Laden von Dateien aus dem IPFS. +- Eine `json`-API zum Parsen von JSON-Daten. +- Eine `crypto`-API zur Verwendung kryptographischer Funktionen. +- Low-Level-Primitive zur Übersetzung zwischen verschiedenen Typsystemen wie Ethereum, JSON, GraphQL und AssemblyScript. ### Versionen -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +Die `apiVersion` im Subgraph-Manifest gibt die Mapping-API-Version an, die von Graph Node für einen bestimmten Subgraph ausgeführt wird. -| Version | Release notes | +| Version | Hinweise zur Version | | :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| 0.0.9 | Fügt die neuen Host-Funktionen [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) hinzu | +| 0.0.8 | Fügt eine Validierung für das Vorhandensein von Feldern im Schema beim Speichern einer Entität hinzu. | +| 0.0.7 | Klassen `TransactionReceipt` und `Log` zu den Ethereum-Typen hinzugefügt<br />Feld `receipt` zum Ethereum Event Objekt hinzugefügt | +| 0.0.6 | Feld `nonce` zum Ethereum Transaction Objekt hinzugefügt<br />`baseFeePerGas` zum Ethereum Block Objekt hinzugefügt | +| 0.0.5 | AssemblyScript wurde auf Version 0.19.10 aktualisiert (dies beinhaltet Breaking Changes, siehe die [`Migrationsanleitung`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` umbenannt in `ethereum.transaction.gasLimit` | +| 0.0.4 | Feld `functionSignature` zum Ethereum SmartContractCall Objekt hinzugefügt | +| 0.0.3 | Feld `from` zum Ethereum Call Objekt hinzugefügt<br />`ethereum.call.address` umbenannt in `ethereum.call.to` | +| 0.0.2 | Feld `input` zum Ethereum-Transaktionsobjekt hinzugefügt | -### Built-in Types +### Integrierte Typen -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://www.assemblyscript.org/types.html). +Dokumentation zu den in AssemblyScript eingebauten Basistypen finden Sie im [AssemblyScript wiki](https://www.assemblyscript.org/types.html). -The following additional types are provided by `@graphprotocol/graph-ts`. +Die folgenden zusätzlichen Typen werden von `@graphprotocol/graph-ts` bereitgestellt. #### ByteArray @@ -52,25 +52,25 @@ The following additional types are provided by `@graphprotocol/graph-ts`. import { ByteArray } from '@graphprotocol/graph-ts' ``` -`ByteArray` represents an array of `u8`. +`ByteArray` stellt ein Array von `u8` dar. -_Construction_ +_Konstruktion_ -- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. +- `fromI32(x: i32): ByteArray` - Zerlegt `x` in Bytes. +- `fromHexString(hex: string): ByteArray` - Die Eingabelänge muss gerade sein. Das Voranstellen von `0x` ist optional. -_Type conversions_ +_Typumwandlungen_ -- `toHexString(): string` - Converts to a hex string prefixed with `0x`. -- `toString(): string` - Interprets the bytes as a UTF-8 string. -- `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. -- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow.
+- `toHexString(): string` - Konvertiert in eine hexadezimale Zeichenkette mit dem Präfix `0x`. +- `toString(): string` - Interpretiert die Bytes als UTF-8-String. +- `toBase58(): string` - Kodiert die Bytes in einen Base58-String. +- `toU32(): u32` - Interpretiert die Bytes als Little-Endian-`u32`. Wirft im Falle eines Überlaufs. +- `toI32(): i32` - Interpretiert das Byte-Array als Little-Endian-`i32`. Wirft im Falle eines Überlaufs. -_Operators_ +_Operatoren_ -- `equals(y: ByteArray): bool` – can be written as `x == y`. -- `concat(other: ByteArray) : ByteArray` - return a new `ByteArray` consisting of `this` directly followed by `other` +- `equals(y: ByteArray): bool` – kann als `x == y` geschrieben werden. +- `concat(other: ByteArray): ByteArray` – gibt ein neues `ByteArray` zurück, das aus `this` besteht, direkt gefolgt von `other` - `concatI32(other: i32) : ByteArray` - return a new `ByteArray` consisting of `this` directly followed by the byte representation of `other` #### BigDecimal @@ -83,24 +83,24 @@ import { BigDecimal } from '@graphprotocol/graph-ts' > Note: [Internally](https://github.com/graphprotocol/graph-node/blob/master/graph/src/data/store/scalar/bigdecimal.rs) `BigDecimal` is stored in [IEEE-754 decimal128 floating-point format](https://en.wikipedia.org/wiki/Decimal128_floating-point_format), which supports 34 decimal digits of significand. This makes `BigDecimal` unsuitable for representing fixed-point types that can span wider than 34 digits, such as a Solidity [`ufixed256x18`](https://docs.soliditylang.org/en/latest/types.html#fixed-point-numbers) or equivalent. -_Construction_ +_Konstruktion_ - `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. - `static fromString(s: string): BigDecimal` – parses from a decimal string. -_Type conversions_ +_Typumwandlungen_ - `toString(): string` – prints to a decimal string. _Math_ - `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`.
-- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. -- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. -- `equals(y: BigDecimal): bool` – can be written as `x == y`. -- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. -- `lt(y: BigDecimal): bool` – can be written as `x < y`. +- `minus(y: BigDecimal): BigDecimal` – kann als `x - y` geschrieben werden. +- `times(y: BigDecimal): BigDecimal` – kann als `x * y` geschrieben werden. +- `div(y: BigDecimal): BigDecimal` – kann als `x / y` geschrieben werden. +- `equals(y: BigDecimal): bool` – kann als `x == y` geschrieben werden. +- `notEqual(y: BigDecimal): bool` – kann als `x != y` geschrieben werden. +- `lt(y: BigDecimal): bool` – kann als `x < y` geschrieben werden. - `le(y: BigDecimal): bool` – can be written as `x <= y`. - `gt(y: BigDecimal): bool` – can be written as `x > y`. - `ge(y: BigDecimal): bool` – can be written as `x >= y`. @@ -112,11 +112,11 @@ _Math_ import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +`BigInt` wird zur Darstellung großer Ganzzahlen verwendet. Dazu gehören Ethereum-Werte vom Typ `uint32` bis `uint256` und `int64` bis `int256`. Alles unter `uint32`, wie `int32`, `uint24` oder `int8`, wird als `i32` dargestellt. The `BigInt` class has the following API: -_Construction_ +_Konstruktion_ - `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. @@ -126,7 +126,7 @@ _Construction_ - `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. - _Type conversions_ + _Typumwandlungen_ - `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters.
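Ergänzend eine kurze, hypothetische Skizze, wie sich die oben beschriebenen `BigInt`- und `BigDecimal`-Operationen in einem Mapping kombinieren lassen. Die Annahme von 18 Dezimalstellen ist nur ein Beispiel und hängt vom jeweiligen Token ab:

```typescript
import { BigInt, BigDecimal } from '@graphprotocol/graph-ts'

// Annahme: ein hypothetischer Token mit 18 Dezimalstellen
let raw = BigInt.fromString('1500000000000000000') // Roh-Wert, entspricht 1,5 Token
// 10^18 als BigDecimal-Skalierungsfaktor
let scale = BigInt.fromI32(10).pow(18).toBigDecimal()
// div() auf BigDecimal entspricht dem Operator x / y
let amount: BigDecimal = raw.toBigDecimal().div(scale)
```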
@@ -186,18 +186,18 @@ import { Bytes } from '@graphprotocol/graph-ts' The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: -_Construction_ +_Konstruktion_ - `fromHexString(hex: string) : Bytes` - Convert the string `hex` which must consist of an even number of hexadecimal digits to a `ByteArray`. The string `hex` can optionally start with `0x` - `fromI32(i: i32) : Bytes` - Convert `i` to an array of bytes -_Type conversions_ +_Typumwandlungen_ - `b.toHex()` – returns a hexadecimal string representing the bytes in the array - `b.toString()` – converts the bytes in the array to a string of unicode characters - `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) -_Operators_ +_Operatoren_ - `b.concat(other: Bytes) : Bytes` - - return new `Bytes` consisting of `this` directly followed by `other` - `b.concatI32(other: i32) : ByteArray` - return new `Bytes` consisting of `this` directly follow by the byte representation of `other` @@ -223,31 +223,31 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. -#### Creating entities +#### Erstellen von Entitäten -The following is a common pattern for creating entities from Ethereum events. +Im Folgenden finden Sie ein gängiges Muster zum Erstellen von Entitäten aus Ethereum-Ereignissen. ```typescript -// Import the Transfer event class generated from the ERC20 ABI +// Importieren Sie die aus dem ERC20-ABI generierte Transfer-Ereignisklasse import { Transfer as TransferEvent } from '../generated/ERC20/ERC20' -// Import the Transfer entity type generated from the GraphQL schema +// Importieren Sie den aus dem GraphQL-Schema generierten Transfer-Entitätstyp import { Transfer } from '../generated/schema' -// Transfer event handler +// Ereignishandler für Transfer export function handleTransfer(event: TransferEvent): void { - // Create a Transfer entity, using the transaction hash as the entity ID + // Erstellen Sie eine Transfer-Entität und verwenden Sie den Transaktions-Hash als Entitäts-ID let id = event.transaction.hash let transfer = new Transfer(id) - // Set properties on the entity, using the event parameters + // Legen Sie mithilfe der Ereignisparameter Eigenschaften für die Entität fest transfer.from = event.params.from transfer.to = event.params.to transfer.amount = event.params.amount - // Save the entity to the store + // Speichern Sie die Entität im Store transfer.save() } ``` @@ -258,50 +258,50 @@ Each entity must have a unique ID to avoid collisions with other entities. It is > Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. 
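Kann dieselbe Transaktion mehrere Ereignisse desselben Typs enthalten, reicht der Transaktions-Hash allein als ID nicht aus. Eine gängige, hier nur als Skizze gezeigte Variante kombiniert den Hash mit dem Log-Index des Ereignisses:

```typescript
// Skizze: eindeutige ID aus Transaktions-Hash und Log-Index,
// damit mehrere Transfer-Ereignisse derselben Transaktion nicht kollidieren
let id = event.transaction.hash.concatI32(event.logIndex.toI32())
let transfer = new Transfer(id)
transfer.save()
```

`concatI32` stammt aus der oben beschriebenen `Bytes`-API; `event.logIndex` ist ein `BigInt`.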
-#### Loading entities from the store +#### Laden von Entitäten aus dem Store -If an entity already exists, it can be loaded from the store with the following: +Wenn eine Entität bereits vorhanden ist, kann sie wie folgt aus dem Store geladen werden: ```typescript -let id = event.transaction.hash // or however the ID is constructed +let id = event.transaction.hash // oder wie auch immer die ID konstruiert wird let transfer = Transfer.load(id) if (transfer == null) { transfer = new Transfer(id) } -// Use the Transfer entity as before +// Verwenden Sie die Transfer-Entität wie zuvor ``` As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. > Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. -#### Looking up entities created withing a block +#### Suchen nach Entitäten, die innerhalb eines Blocks erstellt wurden As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. 
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript -let id = event.transaction.hash // or however the ID is constructed +let id = event.transaction.hash // oder wie auch immer die ID konstruiert wird let transfer = Transfer.loadInBlock(id) if (transfer == null) { transfer = new Transfer(id) } -// Use the Transfer entity as before +// Verwenden Sie die Transfer-Entität wie zuvor ``` > Note: If there is no entity created in the given block, `loadInBlock` will return `null` even if there is an entity with the given ID in the store. -#### Looking up derived entities +#### Suchen nach abgeleiteten Entitäten As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.31.0 and `@graphprotocol/graph-cli` v0.51.0 the `loadRelated` method is available. -This enables loading derived entity fields from within an event handler. For example, given the following schema: +Dies ermöglicht das Laden abgeleiteter Entitätsfelder aus einem Event-Handler heraus. Zum Beispiel anhand des folgenden Schemas: ```graphql type Token @entity { @@ -320,18 +320,18 @@ The following code will load the `Token` entity that the `Holder` entity was der ```typescript let holder = Holder.load('test-id') -// Load the Token entities associated with a given holder +// Laden Sie die Token-Entitäten, die einem bestimmten Inhaber zugeordnet sind let tokens = holder.tokens.load() ``` -#### Updating existing entities +#### Aktualisieren vorhandener Entitäten -There are two ways to update an existing entity: +Es gibt zwei Möglichkeiten, eine vorhandene Entität zu aktualisieren: 1. Load the entity with e.g. 
`Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. 2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. -Changing properties is straight forward in most cases, thanks to the generated property setters: +Dank der generierten Eigenschaftssetzer ist das Ändern von Eigenschaften in den meisten Fällen unkompliziert: ```typescript let transfer = new Transfer(id) @@ -340,7 +340,7 @@ transfer.to = ... transfer.amount = ... ``` -It is also possible to unset properties with one of the following two instructions: +Es ist auch möglich, Eigenschaften mit einer der folgenden beiden Anweisungen zurückzusetzen (unset): ```typescript transfer.from.unset() @@ -363,7 +363,7 @@ entity.numbers = numbers entity.save() ``` -#### Removing entities from the store +#### Entfernen von Entitäten aus dem Store There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: @@ -376,15 +376,15 @@ store.remove('Transfer', id) ### Ethereum API -The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. +Die Ethereum-API bietet Zugriff auf Smart Contracts, öffentliche Zustandsvariablen, Vertragsfunktionen, Ereignisse, Transaktionen, Blöcke und die Kodierung/Dekodierung von Ethereum-Daten. -#### Support for Ethereum Types +#### Unterstützung von Ethereum-Typen -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph.
For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -406,7 +406,7 @@ transfer.amount = event.params.amount transfer.save() ``` -#### Events and Block/Transaction Data +#### Ereignisse und Block-/Transaktionsdaten Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): @@ -481,23 +481,23 @@ class Log { } ``` -#### Access to Smart Contract State +#### Zugriff auf den Smart-Contract-Zustand -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. -A common pattern is to access the contract from which an event originates. This is achieved with the following code: +Ein gängiges Muster ist der Zugriff auf den Vertrag, von dem ein Ereignis ausgeht.
Dies wird mit dem folgenden Code erreicht: ```typescript -// Import the generated contract class and generated Transfer event class +// Importieren Sie die generierte Vertragsklasse und die generierte Transfer-Ereignisklasse import { ERC20Contract, Transfer as TransferEvent } from '../generated/ERC20Contract/ERC20Contract' -// Import the generated entity class +// Importieren Sie die generierte Entitätsklasse import { Transfer } from '../generated/schema' export function handleTransfer(event: TransferEvent) { - // Bind the contract to the address that emitted the event + // Binden Sie den Vertrag an die Adresse, die das Ereignis ausgelöst hat let contract = ERC20Contract.bind(event.address) - // Access state variables and functions by calling them + // Greifen Sie auf Zustandsvariablen und Funktionen zu, indem Sie sie aufrufen let erc20Symbol = contract.symbol() } ``` @@ -506,13 +506,13 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. +Jeder andere Vertrag, der Teil des Subgraphen ist, kann aus dem generierten Code importiert und an eine gültige Adresse gebunden werden. -#### Handling Reverted Calls +#### Behandlung zurückgesetzter Aufrufe (Reverts) -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. +Wenn die Nur-Lese-Methoden Ihres Vertrags einen Revert auslösen können, sollten Sie dies durch den Aufruf der generierten Vertragsmethode mit dem Präfix `try_` behandeln. -- For example, the Gravity contract exposes the `gravatarToOwner` method.
This code would be able to handle a revert in that method: +- Der Gravity-Vertrag stellt zum Beispiel die Methode `gravatarToOwner` zur Verfügung. Dieser Code wäre in der Lage, einen Revert in dieser Methode zu behandeln: ```typescript let gravity = Gravity.bind(event.address) @@ -524,11 +524,11 @@ if (callResult.reverted) { } ``` -> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. +> Hinweis: Ein Graph-Knoten, der mit einem Geth- oder Infura-Client verbunden ist, erkennt möglicherweise nicht alle Reverts. Wenn Sie sich darauf verlassen, empfehlen wir die Verwendung eines Graph-Knotens, der mit einem Parity-Client verbunden ist. -#### Encoding/Decoding ABI +#### Kodierung/Dekodierung von ABI -Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. +Daten können mit den Funktionen `encode` und `decode` im Modul `ethereum` gemäß dem ABI-Kodierungsformat von Ethereum kodiert und dekodiert werden. ```typescript import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' @@ -545,7 +545,7 @@ let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! let decoded = ethereum.decode('(address,uint256)', encoded) ``` -For more information: +Weitere Informationen: - [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) - Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) @@ -576,13 +576,13 @@ let eoa = Address.fromString('0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045') let isContract = ethereum.hasCode(eoa).inner // returns false ``` -### Logging API +### Logging-API ```typescript import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels.
A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -598,9 +598,9 @@ The `log` API takes a format string and an array of string values.
It then repla log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) ``` -#### Logging one or more values +#### Protokollierung eines oder mehrerer Werte -##### Logging a single value +##### Protokollierung eines einzelnen Werts In the example below, the string value "A" is passed into an array to become`['A']` before being logged: @@ -608,25 +608,25 @@ In the example below, the string value "A" is passed into an array to become`['A let myValue = 'A' export function handleSomeEvent(event: SomeEvent): void { - // Displays : "My value is: A" + // Zeigt an: "My value is: A" log.info('My value is: {}', [myValue]) } ``` -##### Logging a single entry from an existing array +##### Protokollieren eines einzelnen Eintrags aus einem vorhandenen Array -In the example below, only the first value of the argument array is logged, despite the array containing three values. +Im folgenden Beispiel wird nur der erste Wert des Argumentarrays protokolliert, obwohl das Array drei Werte enthält. ```typescript let myArray = ['A', 'B', 'C'] export function handleSomeEvent(event: SomeEvent): void { - // Displays : "My value is: A" (Even though three values are passed to `log.info`) + // Zeigt an: "My value is: A" (Obwohl drei Werte an `log.info` übergeben werden) log.info('My value is: {}', myArray) } ``` -#### Logging multiple entries from an existing array +#### Protokollierung mehrerer Einträge aus einem vorhandenen Array Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged.
@@ -634,25 +634,25 @@ Each entry in the arguments array requires its own placeholder `{}` in the log m let myArray = ['A', 'B', 'C'] export function handleSomeEvent(event: SomeEvent): void { - // Displays : "My first value is: A, second value is: B, third value is: C" + // Zeigt an: "My first value is: A, second value is: B, third value is: C" log.info('My first value is: {}, second value is: {}, third value is: {}', myArray) } ``` -##### Logging a specific entry from an existing array +##### Protokollieren eines bestimmten Eintrags aus einem vorhandenen Array -To display a specific value in the array, the indexed value must be provided. +Um einen bestimmten Wert im Array anzuzeigen, muss der indizierte Wert angegeben werden. ```typescript export function handleSomeEvent(event: SomeEvent): void { - // Displays : "My third value is C" + // Zeigt an: "My third value is C" log.info('My third value is: {}', [myArray[2]]) } ``` -##### Logging event information +##### Protokollierung von Ereignisinformationen -The example below logs the block number, block hash and transaction hash from an event: +Im folgenden Beispiel werden Blocknummer, Block-Hash und Transaktions-Hash eines Ereignisses protokolliert: ```typescript import { log } from '@graphprotocol/graph-ts' @@ -672,31 +672,31 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files onchain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +Smart Contracts verankern gelegentlich IPFS-Dateien onchain. Dadurch können Mappings die IPFS-Hashes aus dem Vertrag abrufen und die entsprechenden Dateien aus IPFS lesen.
Die Dateidaten werden als `Bytes` zurückgegeben, was normalerweise eine weitere Verarbeitung erfordert, z. B. mit der `json`-API, die später auf dieser Seite dokumentiert wird. -Given an IPFS hash or path, reading a file from IPFS is done as follows: +Bei gegebenem IPFS-Hash oder -Pfad erfolgt das Lesen einer Datei aus IPFS wie folgt: ```typescript -// Put this inside an event handler in the mapping +// Fügen Sie dies in einen Event-Handler im Mapping ein let hash = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D' let data = ipfs.cat(hash) -// Paths like `QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile` -// that include files in directories are also supported +// Pfade wie `QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile`, +// die Dateien in Verzeichnissen enthalten, werden ebenfalls unterstützt let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. +**Anmerkung:** `ipfs.cat` ist zur Zeit nicht deterministisch. Wenn die Datei nicht über das IPFS-Netzwerk abgerufen werden kann, bevor die Anfrage eine Zeitüberschreitung erreicht, wird `null` zurückgegeben. Aus diesem Grund lohnt es sich immer, das Ergebnis auf `null` zu überprüfen. -It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: +Es ist auch möglich, größere Dateien mit `ipfs.map` in einem Streaming-Verfahren zu verarbeiten.
Die Funktion erwartet den Hash oder Pfad für eine IPFS-Datei, den Namen eines Callbacks und Flags, um ihr Verhalten zu ändern: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' export function processItem(value: JSONValue, userData: Value): void { - // See the JSONValue documentation for details on dealing - // with JSON values + // Weitere Informationen zum Umgang + // mit JSON-Werten finden Sie in der JSONValue-Dokumentation let obj = value.toObject() let id = obj.get('id') let title = obj.get('title') @@ -705,23 +705,23 @@ export function processItem(value: JSONValue, userData: Value): void { return } - // Callbacks can also created entities + // Callbacks können auch Entitäten erstellen let newItem = new Item(id) newItem.title = title.toString() - newitem.parent = userData.toString() // Set parent to "parentId" + newItem.parent = userData.toString() // Übergeordnetes Element auf "parentId" setzen newItem.save() } -// Put this inside an event handler in the mapping +// Fügen Sie dies in einen Event-Handler im Mapping ein ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) -// Alternatively, use `ipfs.mapJSON` +// Alternativ verwenden Sie `ipfs.mapJSON` ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. +Das einzige derzeit unterstützte Flag ist `json`, das an `ipfs.map` übergeben werden muss.
Mit dem `json`-Flag muss die IPFS-Datei aus einer Reihe von JSON-Werten bestehen, ein Wert pro Zeile. Der Aufruf von `ipfs.map` liest jede Zeile der Datei, deserialisiert sie in einen `JSONValue` und ruft den Callback für jeden dieser Werte auf. Der Callback kann dann Entity-Operationen verwenden, um Daten aus dem `JSONValue` zu speichern. Entity-Änderungen werden nur gespeichert, wenn der Handler, der `ipfs.map` aufgerufen hat, erfolgreich beendet ist; in der Zwischenzeit werden sie im Speicher gehalten, und die Größe der Datei, die `ipfs.map` verarbeiten kann, ist daher begrenzt. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +Bei Erfolg gibt `ipfs.map` `void` zurück. Wenn ein Aufruf des Callbacks einen Fehler verursacht, wird der Handler, der `ipfs.map` aufgerufen hat, abgebrochen und der Subgraph wird als fehlgeschlagen markiert. ### Crypto API @@ -729,9 +729,9 @@ On success, `ipfs.map` returns `void`. If any invocation of the callback causes import { crypto } from '@graphprotocol/graph-ts' ``` -The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: +Die `crypto`-API stellt kryptographische Funktionen für die Verwendung in Mappings zur Verfügung. Momentan gibt es nur eine: -- `crypto.keccak256(input: ByteArray): ByteArray` +- `crypto.keccak256(input: ByteArray): ByteArray` ### JSON API @@ -746,7 +746,7 @@ JSON data can be parsed using the `json` API: - `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` - `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed -The `JSONValue` class provides a way to pull values out of an arbitrary JSON document.
Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: +Die Klasse `JSONValue` bietet eine Möglichkeit, Werte aus einem beliebigen JSON-Dokument zu extrahieren. Da JSON-Werte Boolesche Werte, Zahlen, Arrays und mehr sein können, verfügt `JSONValue` über die Eigenschaft `kind`, um den Typ eines Wertes zu überprüfen: ```typescript let value = json.fromBytes(...) @@ -768,9 +768,9 @@ When the type of a value is certain, it can be converted to a [built-in type](#b - `value.toString(): string` - `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) -### Type Conversions Reference +### Referenz zu Typkonvertierungen -| Source(s) | Destination | Conversion function | +| Quelle(n) | Ziel | Konvertierungsfunktion | | -------------------- | -------------------- | ---------------------------- | | Address | Bytes | none | | Address | String | s.toHexString() | @@ -809,15 +809,15 @@ When the type of a value is certain, it can be converted to a [built-in type](#b | String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | | String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | -### Data Source Metadata +### Metadaten der Datenquelle -You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: +Sie können die Vertragsadresse, das Netzwerk und den Kontext der Datenquelle, die den Handler aufgerufen hat, über den Namespace `dataSource` abfragen: - `dataSource.address(): Address` - `dataSource.network(): string` - `dataSource.context(): DataSourceContext` -### Entity and DataSourceContext +### Entität und DataSourceContext The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: @@ -834,9 +834,9 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to - `getBoolean(key: string): boolean` - `getBigDecimal(key: string):
BigDecimal` -### DataSourceContext in Manifest +### DataSourceContext im Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +Dieser Kontext ist dann in Ihren Subgraph-Zuordnungsdateien zugänglich und ermöglicht dynamischere und besser konfigurierbare Subgraphen. diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/de/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..116ddbb29853 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -1,8 +1,8 @@ --- -title: Common AssemblyScript Issues +title: Häufige Probleme mit AssemblyScript --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +Es gibt bestimmte [AssemblyScript](https://github.com/AssemblyScript/assemblyscript)-Probleme, die bei der Entwicklung von Subgraphs häufig auftreten.
Sie sind unterschiedlich schwer zu beheben, aber es kann hilfreich sein, sie zu kennen. Im Folgenden finden Sie eine nicht erschöpfende Liste dieser Probleme: -- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. -- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). +- `Private` Klassenvariablen werden in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features) nicht erzwungen. Es gibt keine Möglichkeit, Klassenvariablen davor zu schützen, dass sie direkt vom Klassenobjekt aus geändert werden. +- Der Geltungsbereich wird nicht in [Closure-Funktionen](https://www.assemblyscript.org/status.html#on-closures) vererbt, d.h. außerhalb von Closure-Funktionen deklarierte Variablen können nicht verwendet werden. Erläuterung in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/de/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/de/subgraphs/developing/creating/install-the-cli.mdx index f9d419ffe1ce..bb9fe36ade05 100644 --- a/website/src/pages/de/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/install-the-cli.mdx @@ -1,22 +1,22 @@ --- -title: Install the Graph CLI +title: Installieren der Graph-CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers.
To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Überblick -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Erste Schritte -### Install the Graph CLI +### Installieren der Graph-CLI The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
Führen Sie einen der folgenden Befehle auf Ihrem lokalen Computer aus: -#### Using [npm](https://www.npmjs.com/) +#### Verwendung von [npm](https://www.npmjs.com/) ```bash npm install -g @graphprotocol/graph-cli@latest @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. -## Create a Subgraph +## Erstellen Sie einen Subgraphen -### From an Existing Contract +### Aus einem bestehenden Vertrag -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -47,39 +47,39 @@ graph init \ - The command tries to retrieve the contract ABI from Etherscan. - - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + - Die Graph CLI stützt sich auf einen öffentlichen RPC-Endpunkt. Gelegentliche Fehler sind zwar zu erwarten, aber durch Wiederholungen lässt sich dieses Problem in der Regel beheben. Bei anhaltenden Fehlern sollten Sie eine lokale ABI verwenden. - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page.
+- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. -### From an Example Subgraph +### Aus einem Beispiel-Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- Der Subgraph behandelt diese Ereignisse, indem er `Gravatar`-Entitäten in den Graph Node Store schreibt und sicherstellt, dass diese entsprechend den Ereignissen aktualisiert werden. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` sind Schlüsselkomponenten von Subgraphs. Sie definieren die Datenquellen, die der Subgraph indiziert und verarbeitet. Eine `dataSource` gibt an, auf welchen Smart Contract gehört werden soll, welche Ereignisse zu verarbeiten sind und wie sie zu behandeln sind.
-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Neuere Versionen der Graph CLI unterstützen das Hinzufügen neuer `dataSources` zu einem bestehenden Subgraphen über den Befehl `graph add`: ```sh graph add
[] -Options: +Optionen: - --abi Path to the contract ABI (default: download from Etherscan) - --contract-name Name of the contract (default: Contract) - --merge-entities Whether to merge entities with the same name (default: false) - --network-file Networks config file path (default: "./networks.json") + --abi Pfad zur Vertrags-ABI (default: download from Etherscan) + --contract-name Name des Vertrags (default: Contract) + --merge-entities Ob Entitäten mit demselben Namen zusammengeführt werden sollen (default: false) + --network-file Pfad der Netzwerkkonfigurationsdatei (default: "./networks.json") ``` #### Besonderheiten @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`.
| -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/de/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/de/subgraphs/developing/creating/ql-schema.mdx index 7f0283d91f62..40df6bfff105 100644 --- a/website/src/pages/de/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/ql-schema.mdx @@ -4,39 +4,39 @@ title: The Graph QL Schema ## Überblick -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. 
-> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. +> Hinweis: Wenn Sie noch nie ein GraphQL-Schema geschrieben haben, empfehlen wir Ihnen, sich diese Einführung in das GraphQL-Typsystem anzusehen. Die Referenzdokumentation für GraphQL-Schemata finden Sie im Abschnitt [GraphQL API](/subgraphs/querying/graphql-api/). ### Defining Entities Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. -- It may be useful to imagine entities as "objects containing data", rather than as events or functions. -- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. +- Alle Abfragen werden gegen das im Subgraph-Schema definierte Datenmodell durchgeführt. Daher sollte sich der Entwurf des Subgraph-Schemas an den Abfragen orientieren, die Ihre Anwendung durchführen muss. +- Es kann sinnvoll sein, sich Entitäten als „Objekte, die Daten enthalten“, vorzustellen und nicht als Ereignisse oder Funktionen. +- Sie definieren Entitätstypen in „schema.graphql“, und Graph Node generiert Top-Level-Felder zur Abfrage einzelner Instanzen und Sammlungen dieses Entitätstyps. - Each type that should be an entity is required to be annotated with an `@entity` directive. - By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity. 
- - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. - - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. Immutable entities are much faster to write and to query so they should be used whenever possible. + - Die Veränderbarkeit hat ihren Preis. Daher wird empfohlen, Entitätstypen, die niemals verändert werden, wie z. B. solche, die Daten enthalten, die wortwörtlich aus der Kette extrahiert wurden, mit `@entity(immutable: true)` als unveränderlich zu kennzeichnen. + - Wenn Änderungen in demselben Block stattfinden, in dem die Entität erstellt wurde, können Mappings Änderungen an unveränderlichen Entitäten vornehmen. Unveränderliche Entitäten sind viel schneller zu schreiben und abzufragen, so dass sie, wann immer möglich, verwendet werden sollten. #### Good Example -The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. +Die folgende `Gravatar`-Entität ist um ein Gravatar-Objekt herum aufgebaut und ist ein gutes Beispiel dafür, wie eine Entität definiert werden könnte. ```graphql type Gravatar @entity(immutable: true) { id: Bytes! owner: Bytes displayName: String imageUrl: String accepted: Boolean } ``` #### Bad Example -The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. +Die folgenden Beispiele `GravatarAccepted` und `GravatarDeclined` basieren auf Ereignissen. Es wird nicht empfohlen, Ereignisse oder Funktionsaufrufe 1:1 auf Entitäten abzubilden.
```graphql type GravatarAccepted @entity { @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Beispiel @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. 
+This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Languages supported diff --git a/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx index dbffb92cfc5e..c198baf1e1f1 100644 --- a/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starten Ihres Subgraphen ## Überblick -The Graph beherbergt Tausende von Subgraphen, die bereits für Abfragen zur Verfügung stehen. Schauen Sie also in [The Graph Explorer] (https://thegraph.com/explorer) nach und finden Sie einen, der Ihren Anforderungen entspricht. +The Graph beherbergt Tausende von Subgraphen, die bereits für Abfragen zur Verfügung stehen. Schauen Sie sich also [The Graph Explorer](https://thegraph.com/explorer) an und finden Sie einen, der bereits Ihren Bedürfnissen entspricht. -Wenn Sie einen [Subgraphen](/subgraphs/developing/subgraphs/) erstellen, erstellen Sie eine benutzerdefinierte offene API, die Daten aus einer Blockchain extrahiert, verarbeitet, speichert und über GraphQL einfach abfragen lässt.
+Wenn Sie einen [Subgraph](/subgraphs/developing/subgraphs/) erstellen, erstellen Sie eine benutzerdefinierte offene API, die Daten aus einer Blockchain extrahiert, verarbeitet, speichert und eine einfache Abfrage über GraphQL ermöglicht. -Die Entwicklung von Subgraphen reicht von einfachen Gerüst-Subgraphen bis hin zu fortgeschrittenen, speziell zugeschnittenen Subgraphen. +Die Entwicklung von Subgraphen reicht von einfachen Gerüstsubgraphen bis hin zu fortgeschrittenen, speziell zugeschnittenen Subgraphen. ### Start des Erstellens Starten Sie den Prozess und erstellen Sie einen Subgraphen, der Ihren Anforderungen entspricht: 1. [Installieren der CLI](/subgraphs/developing/creating/install-the-cli/) - Richten Sie Ihre Infrastruktur ein -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Verstehen der wichtigsten Komponenten eines Subgraphen +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Die Schlüsselkomponenten eines Subgraphen verstehen 3. [Das GraphQL-Schema](/subgraphs/developing/creating/ql-schema/) - Schreiben Sie Ihr Schema 4. [Schreiben von AssemblyScript-Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Schreiben Sie Ihre Mappings 5. [Erweiterte Funktionen](/subgraphs/developing/creating/advanced/) - Passen Sie Ihren Subgraph mit erweiterten Funktionen an Erkunden Sie zusätzliche [Ressourcen für APIs](/subgraphs/developing/creating/graph-ts/README/) und führen Sie lokale Tests mit [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) durch. + +| Version | Hinweise zur Version | +| :-: | --- | +| 1.2.0 | Unterstützung für [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & deklarierte `eth_call` hinzugefügt | +| 1.1.0 | Unterstützt [Timeseries & Aggregations](#timeseries-and-aggregations). Unterstützung für Typ `Int8` für `id` hinzugefügt.
| +| 1.0.0 | Unterstützt die Funktion [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) zum Beschneiden von Subgraphen | +| 0.0.9 | Unterstützt die `endBlock`-Funktion | +| 0.0.8 | Unterstützung für die Abfrage von [Block-Handlern](/developing/creating-a-subgraph/#polling-filter) und [Initialisierungs-Handlern](/developing/creating-a-subgraph/#once-filter) hinzugefügt. | +| 0.0.7 | Unterstützung für [Dateidatenquellen](/developing/creating-a-subgraph/#file-data-sources) hinzugefügt. | +| 0.0.6 | Unterstützt schnelle [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) Berechnungsvariante. | +| 0.0.5 | Unterstützung für Event-Handler mit Zugriff auf Transaktionsbelege hinzugefügt. | +| 0.0.4 | Unterstützung für die Verwaltung von Subgraphen-Features wurde hinzugefügt. | diff --git a/website/src/pages/de/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/de/subgraphs/developing/creating/subgraph-manifest.mdx index a3959f1f4d57..b5f88df96a89 100644 --- a/website/src/pages/de/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/subgraph-manifest.mdx @@ -16,7 +16,7 @@ Die **Subgraph-Definition** besteht aus den folgenden Dateien: ### Subgraph-Fähigkeiten -A single subgraph can: +A single Subgraph can: @@ -24,48 +24,48 @@ A single subgraph can: - Index data from multiple smart contracts (but not multiple networks). - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
-For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: file: ./schema.graphql indexerHints: prune: auto dataSources: - kind: ethereum/contract name: Gravity network: mainnet source: address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' abi: Gravity startBlock: 6175244 endBlock: 7175245 context: foo: type: Bool data: true bar: type: String data: 'bar' mapping: kind: ethereum/events -apiVersion: 0.0.6 +apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar abis: - name: Gravity file: ./abis/Gravity.json eventHandlers: - event: NewGravatar(uint256,address,string,string) handler: handleNewGravatar - event: UpdatedGravatar(uint256,address,string,string) handler: handleUpdatedGravatar callHandlers: - function: createGravatar(string,string) handler: handleCreateGravatar @@ -73,53 +73,53 @@ dataSources: - handler: handleBlock - handler: handleBlockWithCall filter: kind: call file: ./src/mapping.ts ``` ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and
[entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). -The important entries to update for the manifest are: +Die wichtigen Einträge, die für das Manifest aktualisiert werden müssen, sind: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: eine Semver-Version, die die unterstützte Manifeststruktur und Funktionalität für den Subgraphen angibt. Die neueste Version ist `1.3.0`. Siehe den Abschnitt [specVersion-Releases](#specversion-releases) für weitere Details zu Features und Releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: eine für Menschen lesbare Beschreibung des Subgraphen. Diese Beschreibung wird im Graph Explorer angezeigt, wenn der Subgraph in Subgraph Studio bereitgestellt wird. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: die URL des Repositorys, in dem das Subgraph-Manifest zu finden ist. Dies wird auch im Graph Explorer angezeigt. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use.
The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +Ein einzelner Subgraph kann Daten von mehreren Smart Contracts indizieren. Fügen Sie dem Array `dataSources` einen Eintrag für jeden Vertrag hinzu, von dem Daten indiziert werden müssen.
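Zur Veranschaulichung eine minimale Skizze, wie die oben beschriebenen Einträge in einem `subgraph.yaml` zusammenkommen können. Adresse, Startblock, Repository-URL und Entitätsnamen sind hier frei gewählte Beispielwerte, kein echtes Projekt:

```yaml
specVersion: 1.3.0
description: Example Subgraph tracking Gravatar registrations
repository: https://github.com/example/example-subgraph # Beispielwert
schema:
  file: ./schema.graphql
indexerHints:
  prune: auto
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: mainnet
    source:
      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' # Beispieladresse
      abi: Gravity
      startBlock: 6175244 # Beispielwert: Block, in dem der Vertrag erstellt wurde
    context:
      # Kontextvariablen (type + data) sind anschließend in den Mappings abrufbar
      displayName:
        type: String
        data: 'example'
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Gravatar
      abis:
        - name: Gravity
          file: ./abis/Gravity.json
      eventHandlers:
        - event: NewGravatar(uint256,address,string,string)
          handler: handleNewGravatar
      file: ./src/mapping.ts
```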
-## Event Handlers +## Event Handler -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event-Handler in einem Subgraph reagieren auf bestimmte Ereignisse, die von Smart Contracts auf der Blockchain ausgelöst werden, und lösen Handler aus, die im Manifest des Subgraphen definiert sind. Auf diese Weise können Subgraphen Ereignisdaten nach einer festgelegten Logik verarbeiten und speichern. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +Ein Event-Handler wird innerhalb einer Datenquelle in der YAML-Konfiguration des Subgraphen deklariert. Er gibt an, auf welche Ereignisse zu warten ist und welche Funktion ausgeführt werden soll, wenn diese Ereignisse erkannt werden. ```yaml dataSources: @@ -131,29 +131,29 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 - language: wasm/assemblyscript - entities: + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: - Gravatar - - Transaction + - Transaction abis: - - name: Gravity - file: ./abis/Gravity.json + - name: Gravity + file: ./abis/Gravity.json eventHandlers: - - event: Approval(address,address,uint256) - handler: handleApproval - - event: Transfer(address,address,uint256) + - event: Approval(address,address,uint256) + handler: handleApproval + - event: Transfer(address,address,uint256) handler: handleTransfer - topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic.
+ topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optionaler Themenfilter, der nur Ereignisse mit dem angegebenen Thema durchlässt. ``` ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +Während Ereignisse eine effektive Möglichkeit bieten, relevante Änderungen am Zustand eines Vertrags zu erfassen, vermeiden viele Verträge die Erstellung von Protokollen, um die Gaskosten zu optimieren. In diesen Fällen kann ein Subgraph Aufrufe an den Datenquellenvertrag abonnieren. Dies wird durch die Definition von Call-Handlern erreicht, die auf die Funktionssignatur und den Mapping-Handler verweisen, der die Aufrufe dieser Funktion verarbeiten wird. Um diese Aufrufe zu verarbeiten, erhält der Mapping-Handler ein `ethereum.Call` als Argument mit den typisierten Eingaben und Ausgaben des Aufrufs. Aufrufe, die in jeder Tiefe der Aufrufkette einer Transaktion erfolgen, lösen das Mapping aus, so dass Aktivitäten mit dem Datenquellenvertrag durch Proxy-Verträge erfasst werden können. -Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract.
+Call-Handler werden nur in einem von zwei Fällen ausgelöst: wenn die angegebene Funktion von einem anderen Konto als dem Vertrag selbst aufgerufen wird oder wenn sie in Solidity als extern markiert ist und als Teil einer anderen Funktion im selben Vertrag aufgerufen wird. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Hinweis:** Call-Handler sind derzeit von der Parity-Tracing-API abhängig. Bestimmte Netzwerke, wie die BNB-Kette und Arbitrum, unterstützen diese API nicht. Wenn ein Subgraph, der eines dieser Netzwerke indiziert, einen oder mehrere Call-Handler enthält, wird er nicht mit der Synchronisierung beginnen. Subgraph-Entwickler sollten stattdessen Event-Handler verwenden. Diese sind weitaus leistungsfähiger als Call-Handler und werden von jedem evm-Netzwerk unterstützt. 
### Defining a Call Handler @@ -161,24 +161,24 @@ To define a call handler in your manifest, simply add a `callHandlers` array und ```yaml dataSources: - - kind: ethereum/contract - name: Gravity - network: mainnet - source: - address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' abi: Gravity mapping: - kind: ethereum/events - apiVersion: 0.0.6 - language: wasm/assemblyscript - entities: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: - Gravatar - - Transaction + - Transaction abis: - - name: Gravity - file: ./abis/Gravity.json + - name: Gravity + file: ./abis/Gravity.json callHandlers: - - function: createGravatar(string,string) + - function: createGravatar(string,string) handler: handleCreateGravatar ``` @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain.
To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Supported Filters @@ -218,33 +218,33 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
```yaml dataSources: - - kind: ethereum/contract - name: Gravity - network: dev - source: - address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' abi: Gravity mapping: - kind: ethereum/events - apiVersion: 0.0.6 - language: wasm/assemblyscript - entities: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: - Gravatar - - Transaction + - Transaction abis: - - name: Gravity - file: ./abis/Gravity.json - blockHandlers: + - name: Gravity + file: ./abis/Gravity.json + blockHandlers: - handler: handleBlock - handler: handleBlockWithCallToContract filter: - kind: call + kind: call ``` #### Polling Filter @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +Der definierte Handler wird alle `n` Blöcke einmal aufgerufen, wobei `n` der im Feld `every` angegebene Wert ist. Diese Konfiguration ermöglicht es dem Subgraphen, bestimmte Operationen in regelmäßigen Blockintervallen durchzuführen. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +Der definierte Handler mit dem once-Filter wird nur einmal aufgerufen, bevor alle anderen Handler ausgeführt werden. Diese Konfiguration ermöglicht es dem Subgraphen, den Handler als Initialisierungs-Handler zu verwenden, der zu Beginn der Indizierung bestimmte Aufgaben ausführt.
```ts export function handleOnce(block: ethereum.Block): void { @@ -288,14 +288,14 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +Die Mapping-Funktion erhält einen `ethereum.Block` als einziges Argument. Wie Mapping-Funktionen für Ereignisse kann diese Funktion auf bestehende Subgraph-Entitäten im Speicher zugreifen, Smart Contracts aufrufen und Entitäten erstellen oder aktualisieren. ```typescript -import { ethereum } from '@graphprotocol/graph-ts' +import { ethereum } from '@graphprotocol/graph-ts' -export function handleBlock(block: ethereum.Block): void { +export function handleBlock(block: ethereum.Block): void { let id = block.hash - let entity = new Block(id) + let entity = new Block(id) entity.save() } ``` @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -330,7 +330,7 @@ Inside the handler function, the receipt can be accessed in the `Event.receipt` ## Order of Triggering Handlers -The triggers for a data source within a block are ordered using the following process: +Die Trigger für eine Datenquelle innerhalb eines Blocks werden nach dem folgenden Verfahren geordnet: 1. Event and call triggers are first ordered by transaction index within the block. 2.
Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. @@ -352,24 +352,24 @@ First, you define a regular data source for the main contract. The snippet below ```yaml dataSources: - - kind: ethereum/contract - name: Factory - network: mainnet - source: - address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + - kind: ethereum/contract + name: Factory + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' abi: Factory mapping: - kind: ethereum/events - apiVersion: 0.0.6 - language: wasm/assemblyscript - file: ./src/mappings/factory.ts - entities: - - Directory + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - Directory abis: - name: Factory - file: ./abis/factory.json - eventHandlers: - - event: NewExchange(address,address) + file: ./abis/factory.json + eventHandlers: + - event: NewExchange(address,address) handler: handleNewExchange ``` @@ -379,34 +379,34 @@ Then, you add _data source templates_ to the manifest. These are identical to re ```yaml dataSources: - - kind: ethereum/contract + - kind: ethereum/contract name: Factory - # ... other source fields for the main contract ... -templates: + # ... andere Quellfelder für den Hauptvertrag ...
+templates: - name: Exchange - kind: ethereum/contract - network: mainnet - source: + kind: ethereum/contract + network: mainnet + source: abi: Exchange - mapping: - kind: ethereum/events - apiVersion: 0.0.6 - language: wasm/assemblyscript - file: ./src/mappings/exchange.ts - entities: + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/mappings/exchange.ts + entities: - Exchange abis: - - name: Exchange - file: ./abis/exchange.json - eventHandlers: - - event: TokenPurchase(address,uint256,uint256) - handler: handleTokenPurchase - - event: EthPurchase(address,uint256,uint256) - handler: handleEthPurchase - - event: AddLiquidity(address,uint256,uint256) - handler: handleAddLiquidity - - event: RemoveLiquidity(address,uint256,uint256) - handler: handleRemoveLiquidity + - name: Exchange + file: ./abis/exchange.json + eventHandlers: + - event: TokenPurchase(address,uint256,uint256) + handler: handleTokenPurchase + - event: EthPurchase(address,uint256,uint256) + handler: handleEthPurchase + - event: AddLiquidity(address,uint256,uint256) + handler: handleAddLiquidity + - event: RemoveLiquidity(address,uint256,uint256) + handler: handleRemoveLiquidity ``` ### Instantiating a Data Source Template @@ -454,30 +454,30 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +`startBlock` ist eine optionale Einstellung, mit der Sie festlegen können, ab welchem Block in der Kette die Datenquelle mit der Indizierung beginnen soll.
Die Einstellung des Startblocks ermöglicht es der Datenquelle, potenziell Millionen von Blöcken zu überspringen, die irrelevant sind. Typischerweise wird ein Subgraph-Entwickler `startBlock` auf den Block setzen, in dem der Smart Contract der Datenquelle erstellt wurde. ```yaml dataSources: - - kind: ethereum/contract - name: ExampleSource - network: mainnet - source: - address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' - abi: ExampleContract + - kind: ethereum/contract + name: ExampleSource + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: ExampleContract startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 - language: wasm/assemblyscript - file: ./src/mappings/factory.ts - entities: - - User + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - User abis: - - name: ExampleContract - file: ./abis/ExampleContract.json - eventHandlers: - - event: NewEvent(address,address) - handler: handleNewEvent + - name: ExampleContract + file: ./abis/ExampleContract.json + eventHandlers: + - event: NewEvent(address,address) + handler: handleNewEvent ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +Die Einstellung `indexerHints` im Manifest eines Subgraphen enthält Richtlinien für Indexer zur Verarbeitung und Verwaltung eines Subgraphen. Sie beeinflusst operative Entscheidungen über die Datenverarbeitung, Indizierungsstrategien und Optimierungen. Gegenwärtig bietet sie die Option `prune` für die Verwaltung der Aufbewahrung historischer Daten oder das Pruning.
> This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Definiert die Aufbewahrung von historischen Blockdaten für einen Subgraphen. Die Optionen umfassen: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> Der Begriff „Historie“ bezieht sich in diesem Zusammenhang auf die Speicherung von Daten, die die alten Zustände von veränderlichen Entitäten widerspiegeln. History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Zeitreiseabfragen](/subgraphs/querying/graphql-api/#time-travel-queries), die es ermöglichen, die vergangenen Zustände dieser Entitäten zu bestimmten Zeitpunkten in der Geschichte des Subgraphen abzufragen +- Verwendung des Subgraphen als [Pfropfgrundlage](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in einem anderen Subgraphen, an diesem Block +- Zurückspulen des Subgraphen zu diesem Block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data.
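Für eine feste Aufbewahrungsdauer kann `prune` stattdessen auf eine Blockanzahl gesetzt werden; der konkrete Wert unten ist nur ein Beispiel:

```yaml
indexerHints:
  prune: 100000 # Historie der letzten 100000 Bloecke behalten (Beispielwert)
```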
-For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +Für Subgraphen, die [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) verwenden, ist es ratsam, entweder eine bestimmte Anzahl von Blöcken für die Aufbewahrung historischer Daten festzulegen oder `prune: never` zu verwenden, um alle historischen Entitätszustände zu erhalten. Im Folgenden finden Sie Beispiele, wie Sie beide Optionen in den Einstellungen Ihres Subgraphen konfigurieren können: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Hinweise zur Version | +| :-: | --- | +| 1.3.0 | Unterstützung für [Subgraph Composition](/cookbook/subgraph-composition-three-sources) hinzugefügt | +| 1.2.0 | Unterstützung für [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) und deklarierte `eth_call`s hinzugefügt | +| 1.1.0 | Unterstützt [Zeitreihen & Aggregationen](/developing/creating/advanced/#timeseries-and-aggregations). Unterstützung für den Typ `Int8` für `id` hinzugefügt. | +| 1.0.0 | Unterstützt die Funktion [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) zum Beschneiden von Subgraphen | +| 0.0.9 | Unterstützt die Funktion `endBlock` | +| 0.0.8 | Unterstützung für das Polling von [Block-Handlern](/developing/creating/subgraph-manifest/#polling-filter) und [Initialisierungs-Handlern](/developing/creating/subgraph-manifest/#once-filter) hinzugefügt. | +| 0.0.7 | Unterstützung für [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources) hinzugefügt.
| +| 0.0.6 | Unterstützt die schnelle [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi)-Berechnungsvariante. | +| 0.0.5 | Unterstützung für Event-Handler mit Zugriff auf Transaktionsbelege hinzugefügt. | +| 0.0.4 | Unterstützung für die Verwaltung von Subgraphen-Features hinzugefügt. | diff --git a/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx index 52f7cc2134b8..357617cfce50 100644 --- a/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,21 +2,21 @@ title: Rahmen für Einheitstests --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Lernen Sie die Verwendung von Matchstick, einem von [LimeChain](https://limechain.tech/) entwickelten Unit-Testing-Framework. Matchstick ermöglicht es Subgraph-Entwicklern, ihre Mapping-Logik in einer Sandbox-Umgebung zu testen und ihre Subgraphen erfolgreich bereitzustellen. -## Benefits of Using Matchstick +## Vorteile der Verwendung von Matchstick -- It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- Es ist in Rust geschrieben und für hohe Leistung optimiert. +- Sie ermöglicht Ihnen den Zugriff auf Entwicklerfunktionen, einschließlich der Möglichkeit, Vertragsaufrufe nachzubilden, Behauptungen über den Speicherzustand aufzustellen, Fehler in Subgraphen zu überwachen, die Testleistung zu überprüfen und vieles mehr.
## Erste Schritte -### Install Dependencies +### Abhängigkeiten installieren -In order to use the test helper methods and run tests, you need to install the following dependencies: +Um die Testhilfsmethoden verwenden und Tests ausführen zu können, müssen Sie die folgenden Abhängigkeiten installieren: ```sh -yarn add --dev matchstick-as +yarn add --dev matchstick-as ``` ### Install PostgreSQL @@ -47,7 +47,7 @@ Installation command (depends on your distro): sudo apt install postgresql ``` -### Using WSL (Windows Subsystem for Linux) +### Verwendung von WSL (Windows Subsystem für Linux) Sie können Matchstick auf WSL sowohl mit dem Docker-Ansatz als auch mit dem binären Ansatz verwenden. Da WSL ein wenig knifflig sein kann, hier ein paar Tipps, falls Sie auf Probleme stoßen wie @@ -61,13 +61,13 @@ oder /node_modules/gluegun/build/index.js:13 throw up; ``` -Please make sure you're on a newer version of Node.js graph-cli doesn't support **v10.19.0** anymore, and that is still the default version for new Ubuntu images on WSL. For instance Matchstick is confirmed to be working on WSL with **v18.1.0**, you can switch to it either via **nvm** or if you update your global Node.js. Don't forget to delete `node_modules` and to run `npm install` again after updating you nodejs! Then, make sure you have **libpq** installed, you can do that by running +Bitte stellen Sie sicher, dass Sie eine neuere Version von Node.js verwenden. graph-cli unterstützt **v10.19.0** nicht mehr, und das ist immer noch die Standardversion für neue Ubuntu-Images auf WSL. Matchstick funktioniert zum Beispiel nachweislich auf WSL mit **v18.1.0**; Sie können entweder über **nvm** darauf umsteigen oder Ihr globales Node.js aktualisieren. Vergessen Sie nicht, `node_modules` zu löschen und `npm install` erneut auszuführen, nachdem Sie Ihr Node.js aktualisiert haben!
Stellen Sie dann sicher, dass Sie **libpq** installiert haben, indem Sie ``` sudo apt-get install libpq-dev ``` -And finally, do not use `graph test` (which uses your global installation of graph-cli and for some reason that looks like it's broken on WSL currently), instead use `yarn test` or `npm run test` (that will use the local, project-level instance of graph-cli, which works like a charm). For that you would of course need to have a `"test"` script in your `package.json` file which can be something as simple as +Und schließlich: Verwenden Sie nicht `graph test` (das Ihre globale Installation von graph-cli verwendet und derzeit auf WSL offenbar nicht funktioniert), sondern `yarn test` oder `npm run test` (das verwendet die lokale Instanz von graph-cli auf Projektebene, was wunderbar funktioniert). Dafür benötigen Sie natürlich ein `"test"`-Skript in Ihrer `package.json`-Datei, das etwas so Einfaches sein kann wie ```json { @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
### CLI-Optionen @@ -109,7 +109,7 @@ Dadurch wird nur diese spezielle Testdatei ausgeführt: graph test path/to/file.test.ts ``` -**Options:** +**Optionen:** ```sh -c, --coverage Run the tests in coverage mode -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) -r, --recompile Forces tests to be recompiled --v, --version Choose the version of the rust binary that you want to be downloaded/used +-v, --version <tag> Choose the version of the rust binary that you want to be downloaded/used ``` ### Docker -From `graph-cli 0.25.2`, the `graph test` command supports running `matchstick` in a docker container with the `-d` flag. The docker implementation uses [bind mount](https://docs.docker.com/storage/bind-mounts/) so it does not have to rebuild the docker image every time the `graph test -d` command is executed. Alternatively you can follow the instructions from the [matchstick](https://github.com/LimeChain/matchstick#docker-) repository to run docker manually. +Ab `graph-cli 0.25.2` unterstützt der Befehl `graph test` die Ausführung von `matchstick` in einem Docker-Container mit dem `-d` Flag. Die Docker-Implementierung verwendet [bind mount](https://docs.docker.com/storage/bind-mounts/), so dass sie das Docker-Image nicht jedes Mal neu erstellen muss, wenn der Befehl `graph test -d` ausgeführt wird. Alternativ können Sie den Anweisungen aus dem [matchstick](https://github.com/LimeChain/matchstick#docker-) Repository folgen, um docker manuell zu starten. ❗ `graph test -d` forces `docker run` to run with flag `-t`. This must be removed to run inside non-interactive environments (like GitHub CI).
@@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo-Subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video-Tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Struktur der Tests -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**WICHTIG: Die unten beschriebene Teststruktur hängt von der `matchstick-as`-Version >=0.5.0 ab**_ ### describe() @@ -165,14 +165,14 @@ _**IMPORTANT: The test structure described below depens on `matchstick-as` versi - _Describes are not mandatory. You can still use test() the old way, outside of the describe() blocks_ -Example: +Beispiel: ```typescript -import { describe, test } from "matchstick-as/assembly/index" -import { handleNewGravatar } from "../../src/gravity" +import { describe, test } from "matchstick-as/assembly/index" +import { handleNewGravatar } from "../../src/gravity" describe("handleNewGravatar()", () => { - test("Should create a new Gravatar entity", () => { + test("Soll eine neue Gravatar-Entität erstellen", () => { ... }) }) @@ -181,18 +181,18 @@ describe("handleNewGravatar()", () => { Nested `describe()` example: ```typescript -import { describe, test } from "matchstick-as/assembly/index" -import { handleUpdatedGravatar } from "../../src/gravity" +import { describe, test } from "matchstick-as/assembly/index" +import { handleUpdatedGravatar } from "../../src/gravity"
describe("handleUpdatedGravatar()", () => { - describe("When entity exists", () => { - test("updates the entity", () => { + describe("Wenn die Entität existiert", () => { + test("aktualisiert die Entität", () => { ... }) }) - describe("When entity does not exists", () => { - test("it creates a new entity", () => { + describe("Wenn die Entität nicht existiert", () => { + test("erzeugt eine neue Entität", () => { ... }) }) @@ -205,7 +205,7 @@ describe("handleUpdatedGravatar()", () => { `test(name: String, () =>, should_fail: bool)` - Defines a test case. You can use test() inside of describe() blocks or independently. -Example: +Beispiel: ```typescript import { describe, test } from "matchstick-as/assembly/index" @@ -294,7 +294,7 @@ describe("handleUpdatedGravatar()", () => { Runs a code block after all of the tests in the file. If `afterAll` is declared inside of a `describe` block, it runs at the end of that `describe` block. -Example: +Beispiel: Code inside `afterAll` will execute once after _all_ tests in the file. @@ -416,7 +416,7 @@ describe('handleUpdatedGravatars', () => { ### afterEach() -Runs a code block after every test. If `afterEach` is declared inside of a `describe` block, it runs after each test in that `describe` block. +Führt einen Codeblock nach jedem Test aus. Wenn `afterEach` innerhalb eines `describe`-Blocks deklariert ist, wird es nach jedem Test in diesem `describe`-Block ausgeführt. Beispiele: @@ -652,17 +652,17 @@ test('Next test', () => { }) ``` -That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks.
The rest of it is pretty straightforward - here's what happens: +Das ist eine Menge zum Auspacken! Zunächst einmal ist es wichtig zu wissen, dass wir Dinge aus `matchstick-as` importieren, unserer AssemblyScript-Hilfsbibliothek (die als npm-Modul verteilt wird). Sie können das Repository [hier](https://github.com/LimeChain/matchstick-as) finden. `matchstick-as` stellt uns nützliche Testmethoden zur Verfügung und definiert auch die Funktion `test()`, die wir zum Erstellen unserer Testblöcke verwenden werden. Der Rest ist ziemlich einfach - hier ist, was passiert: -- We're setting up our initial state and adding one custom Gravatar entity; -- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function; -- We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events; -- We assert the state of the store. How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that gets added when the handler function is called; -- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want. +- Wir richten unseren Ausgangszustand ein und fügen eine benutzerdefinierte Gravatar-Entität hinzu; +- Wir definieren zwei `NewGravatar`-Ereignisobjekte zusammen mit ihren Daten, indem wir die Funktion `createNewGravatarEvent()` verwenden; +- Wir rufen Handler-Methoden für diese Ereignisse auf - `handleNewGravatars()` - und übergeben die Liste unserer benutzerdefinierten Ereignisse; +- Wir überprüfen den Zustand des Speichers. Wie funktioniert das? - Wir übergeben eine eindeutige Kombination aus Entity-Typ und ID.
Dann überprüfen wir ein bestimmtes Feld dieser Entität und stellen sicher, dass es den erwarteten Wert hat. Wir tun dies sowohl für die ursprüngliche Gravatar-Entität, die wir dem Speicher hinzugefügt haben, als auch für die beiden Gravatar-Entitäten, die hinzugefügt werden, wenn die Handler-Funktion aufgerufen wird; +- Und schließlich bereinigen wir den Speicher mit `clearStore()`, damit unser nächster Test mit einem frischen und leeren Speicherobjekt beginnen kann. Wir können so viele Testblöcke definieren, wie wir wollen. There we go - we've created our first test! 👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -674,7 +674,7 @@ And if all goes well you should be greeted with the following: ### Hydrating the store with a certain state -Users are able to hydrate the store with a known set of entities. Here's an example to initialise the store with a Gravatar entity: +Die Benutzer können den Speicher mit einer bekannten Reihe von Entitäten bestücken.
Hier ist ein Beispiel für die Initialisierung des Speichers mit einer Gravatar-Entität: ```typescript let gravatar = new Gravatar('entryId') @@ -683,7 +683,7 @@ gravatar.save() ``` ### Calling a mapping function with an event -A user can create a custom event and pass it to a mapping function that is bound to the store: +Ein Benutzer kann ein benutzerdefiniertes Ereignis erstellen und es an eine Mapping-Funktion übergeben, die an den Speicher gebunden ist: ```typescript import { store } from 'matchstick-as/assembly/store' @@ -741,7 +741,7 @@ let result = gravity.gravatarToOwner(bigIntParam) assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum.Value.fromAddress(result)) ``` -As demonstrated, in order to mock a contract call and hardcore a return value, the user must provide a contract address, function name, function signature, an array of arguments, and of course - the return value. +Wie gezeigt, muss der Benutzer eine Vertragsadresse, einen Funktionsnamen, eine Funktionssignatur, ein Array von Argumenten und natürlich den Rückgabewert angeben, um einen Vertragsaufruf zu mocken und einen Rückgabewert fest zu hinterlegen. Users can also mock function reverts: @@ -754,19 +754,19 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri ### Mocking IPFS files (from matchstick 0.4.1) -Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. +Benutzer können IPFS-Dateien mit der Funktion `mockIpfsFile(hash, filePath)` simulieren. Die Funktion akzeptiert zwei Argumente, das erste ist der Hash/Pfad der IPFS-Datei und das zweite ist der Pfad zu einer lokalen Datei.
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +HINWEIS: Beim Testen von `ipfs.map/ipfs.mapJSON` muss die Callback-Funktion aus der Testdatei exportiert werden, damit Matchstick sie erkennen kann, wie die Funktion `processGravatar()` im untenstehenden Testbeispiel: `.test.ts` file: ```typescript -import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' -import { ipfs } from '@graphprotocol/graph-ts' -import { gravatarFromIpfs } from './utils' +import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' +import { ipfs } from '@graphprotocol/graph-ts' +import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it -export { processGravatar } from './utils' +// ipfs.map()-Callback exportieren, damit Matchstick ihn erkennen kann +export { processGravatar } from './utils' test('ipfs.cat', () => { mockIpfsFile('ipfsCatfileHash', 'tests/ipfs/cat.json') @@ -798,46 +798,46 @@ test('ipfs.map', () => { `utils.ts` file: ```typescript -import { Address, ethereum, JSONValue, Value, ipfs, json, Bytes } from "@graphprotocol/graph-ts" -import { Gravatar } from "../../generated/schema" +import { Address, ethereum, JSONValue, Value, ipfs, json, Bytes } from "@graphprotocol/graph-ts" +import { Gravatar } from "../../generated/schema" ... -// ipfs.map callback -export function processGravatar(value: JSONValue, userData: Value): void { - // See the JSONValue documentation for details on dealing - // with JSON values - let obj = value.toObject() - let id = obj.get('id') +// ipfs.map Callback +export function processGravatar(value: JSONValue, userData: Value): void { + // Siehe JSONValue-Dokumentation für Details zum Umgang + // mit JSON-Werten + let obj = value.toObject() + let id = obj.get('id') - if (!id) { + if (!id) {
return } - // Callbacks can also created entities - let gravatar = new Gravatar(id.toString()) - gravatar.displayName = userData.toString() + id.toString() + // Callbacks können auch Entitäten erstellen + let gravatar = new Gravatar(id.toString()) + gravatar.displayName = userData.toString() + id.toString() gravatar.save() } -// function that calls ipfs.cat -export function gravatarFromIpfs(): void { - let rawData = ipfs.cat("ipfsCatfileHash") +// Funktion, die ipfs.cat aufruft +export function gravatarFromIpfs(): void { + let rawData = ipfs.cat("ipfsCatfileHash") if (!rawData) { return } - let jsonData = json.fromBytes(rawData as Bytes).toObject() + let jsonData = json.fromBytes(rawData as Bytes).toObject() let id = jsonData.get('id') - let url = jsonData.get("imageUrl") + let url = jsonData.get("imageUrl") if (!id || !url) { return } - let gravatar = new Gravatar(id.toString()) + let gravatar = new Gravatar(id.toString()) gravatar.imageUrl = url.toString() gravatar.save() } @@ -896,7 +896,7 @@ import { logStore } from 'matchstick-as/assembly/store' logStore() ``` -As of version 0.6.0, `logStore` no longer prints derived fields, instead users can use the new `logEntity` function. Of course `logEntity` can be used to print any entity, not just ones that have derived fields. `logEntity` takes the entity type, entity id and a `showRelated` flag to indicate if users want to print the related derived entities. +Seit Version 0.6.0 gibt `logStore` keine abgeleiteten Felder mehr aus, stattdessen können Benutzer die neue Funktion `logEntity` verwenden. Natürlich kann `logEntity` verwendet werden, um jede Entität auszugeben, nicht nur solche, die abgeleitete Felder haben. Die Funktion `logEntity` nimmt den Entitätstyp, die Entitäts-ID und ein `showRelated`-Flag, um anzugeben, ob der Benutzer die zugehörigen abgeleiteten Entitäten ausgeben möchte.
``` import { logEntity } from 'matchstick-as/assembly/store' @@ -919,30 +919,30 @@ test( ) ``` -If the test is marked with shouldFail = true but DOES NOT fail, that will show up as an error in the logs and the test block will fail. Also, if it's marked with shouldFail = false (the default state), the test executor will crash. +Wenn der Test mit shouldFail = true gekennzeichnet ist, aber NICHT fehlschlägt, wird dies in den Protokollen als Fehler angezeigt und der Testblock schlägt fehl. Wenn der Test mit shouldFail = false markiert ist (der Standardstatus), stürzt der Test-Executor ab. ### Protokollierung -Having custom logs in the unit tests is exactly the same as logging in the mappings. The difference is that the log object needs to be imported from matchstick-as rather than graph-ts. Here's a simple example with all non-critical log types: +Die Verwendung von benutzerdefinierten Protokollen in den Unit-Tests ist genau dasselbe wie die Protokollierung in den Mappings. Der Unterschied besteht darin, dass das Log-Objekt von matchstick-as und nicht von graph-ts importiert werden muss. Hier ist ein einfaches Beispiel mit allen unkritischen Protokolltypen: ```typescript -import { test } from "matchstick-as/assembly/index"; -import { log } from "matchstick-as/assembly/log"; +import { test } from "matchstick-as/assembly/index"; +import { log } from "matchstick-as/assembly/log"; test("Success", () => { - log.success("Success!". []); + log.success("Erfolg!", []); }); test("Error", () => { - log.error("Error :( ", []); + log.error("Error :( ", []); }); test("Debug", () => { - log.debug("Debugging...", []); + log.debug("Debugging...", []); }); test("Info", () => { - log.info("Info!", []); + log.info("Info!", []);
}); -test("Warning", () => { - log.warning("Warning!", []); +test("Warnung", () => { + log.warning("Warnung!", []); }); ``` @@ -954,7 +954,7 @@ test('Blow everything up', () => { }) ``` -Logging critical errors will stop the execution of the tests and blow everything up. After all - we want to make sure you're code doesn't have critical logs in deployment, and you should notice right away if that were to happen. +Die Protokollierung kritischer Fehler wird die Ausführung der Tests stoppen und alles in die Luft jagen. Schließlich wollen wir sicherstellen, dass Ihr Code bei der Bereitstellung keine kritischen Protokolle enthält, und Sie sollten sofort bemerken, wenn das passiert. ### Testing derived fields @@ -1044,56 +1044,56 @@ Testing dynamic data sources can be be done by mocking the return value of the ` Example below: -First we have the following event handler (which has been intentionally repurposed to showcase datasource mocking): +Zunächst haben wir den folgenden Event-Handler (der absichtlich umgewidmet wurde, um Datasource Mocking zu zeigen): ```typescript -export function handleApproveTokenDestinations(event: ApproveTokenDestinations): void { - let tokenLockWallet = TokenLockWallet.load(dataSource.address().toHexString())! +export function handleApproveTokenDestinations(event: ApproveTokenDestinations): void { + let tokenLockWallet = TokenLockWallet.load(dataSource.address().toHexString())! if (dataSource.network() == 'rinkeby') { - tokenLockWallet.tokenDestinationsApproved = true + tokenLockWallet.tokenDestinationsApproved = true } - let context = dataSource.context() + let context = dataSource.context() if (context.get('contextVal')!.toI32() > 0) { - tokenLockWallet.setBigInt('tokensReleased', BigInt.fromI32(context.get('contextVal')!.toI32())) + tokenLockWallet.setBigInt('tokensReleased', BigInt.fromI32(context.get('contextVal')!.toI32()))
} tokenLockWallet.save() } ``` -And then we have the test using one of the methods in the dataSourceMock namespace to set a new return value for all of the dataSource functions: +Und dann haben wir den Test, der eine der Methoden im dataSourceMock-Namensraum verwendet, um einen neuen Rückgabewert für alle dataSource-Funktionen festzulegen: ```typescript import { assert, test, newMockEvent, dataSourceMock } from 'matchstick-as/assembly/index' import { BigInt, DataSourceContext, Value } from '@graphprotocol/graph-ts' -import { handleApproveTokenDestinations } from '../../src/token-lock-wallet' -import { ApproveTokenDestinations } from '../../generated/templates/GraphTokenLockWallet/GraphTokenLockWallet' -import { TokenLockWallet } from '../../generated/schema' +import { handleApproveTokenDestinations } from '../../src/token-lock-wallet' +import { ApproveTokenDestinations } from '../../generated/templates/GraphTokenLockWallet/GraphTokenLockWallet' +import { TokenLockWallet } from '../../generated/schema' test('Data source simple mocking example', () => { let addressString = '0xA16081F360e3847006dB660bae1c6d1b2e17eC2A' - let address = Address.fromString(addressString) + let address = Address.fromString(addressString) let wallet = new TokenLockWallet(address.toHexString()) - wallet.save() - let context = new DataSourceContext() + wallet.save() + let context = new DataSourceContext() context.set('contextVal', Value.fromI32(325)) - dataSourceMock.setReturnValues(addressString, 'rinkeby', context) - let event = changetype(newMockEvent()) + dataSourceMock.setReturnValues(addressString, 'rinkeby', context) + let event = changetype(newMockEvent()) - assert.assertTrue(!wallet.tokenDestinationsApproved) + assert.assertTrue(!wallet.tokenDestinationsApproved) handleApproveTokenDestinations(event) - wallet = TokenLockWallet.load(address.toHexString())!
- assert.assertTrue(wallet.tokenDestinationsApproved) + wallet = TokenLockWallet.load(address.toHexString())! + assert.assertTrue(wallet.tokenDestinationsApproved) assert.bigIntEquals(wallet.tokensReleased, BigInt.fromI32(325)) dataSourceMock.resetValues() }) ``` -Notice that dataSourceMock.resetValues() is called at the end. That's because the values are remembered when they are changed and need to be reset if you want to go back to the default values. +Beachten Sie, dass dataSourceMock.resetValues() am Ende aufgerufen wird. Das liegt daran, dass die Werte gespeichert werden, wenn sie geändert werden, und dass sie zurückgesetzt werden müssen, wenn Sie zu den Standardwerten zurückkehren möchten. ### Testing dynamic data source creation @@ -1101,57 +1101,57 @@ As of version `0.6.0`, it is possible to test if a new data source has been crea - `assert.dataSourceCount(templateName, expectedCount)` can be used to assert the expected count of data sources from the specified template - `assert.dataSourceExists(templateName, address/ipfsHash)` asserts that a data source with the specified identifier (could be a contract address or IPFS file hash) from a specified template was created -- `logDataSources(templateName)` prints all data sources from the specified template to the console for debugging purposes +- `logDataSources(templateName)` gibt alle Datenquellen der angegebenen Vorlage zu Debugging-Zwecken auf der Konsole aus - `readFile(path)` reads a JSON file that represents an IPFS file and returns the content as Bytes #### Testing `ethereum/contract` templates ```typescript -test('ethereum/contract dataSource creation example', () => { - // Assert there are no dataSources created from GraphTokenLockWallet template - assert.dataSourceCount('GraphTokenLockWallet', 0) +test('ethereum/contract dataSource creation example', () => { + // Assert, dass keine dataSources aus der GraphTokenLockWallet-Vorlage erstellt wurden + assert.dataSourceCount('GraphTokenLockWallet', 0)
- // Create a new GraphTokenLockWallet datasource with address 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A - GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A')) + // Erstellen einer neuen GraphTokenLockWallet-Datenquelle mit der Adresse 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A + GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A')) - // Assert the dataSource has been created - assert.dataSourceCount('GraphTokenLockWallet', 1) + // Assert, dass die Datenquelle erstellt wurde + assert.dataSourceCount('GraphTokenLockWallet', 1) - // Add a second dataSource with context - let context = new DataSourceContext() - context.set('contextVal', Value.fromI32(325)) + // Eine zweite Datenquelle mit Kontext hinzufügen + let context = new DataSourceContext() + context.set('contextVal', Value.fromI32(325)) - GraphTokenLockWallet.createWithContext(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'), context) + GraphTokenLockWallet.createWithContext(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'), context) - // Assert there are now 2 dataSources - assert.dataSourceCount('GraphTokenLockWallet', 2) + // Assert, dass es jetzt 2 Datenquellen gibt + assert.dataSourceCount('GraphTokenLockWallet', 2) - // Assert that a dataSource with address "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" was created - // Keep in mind that `Address` type is transformed to lower case when decoded, so you have to pass the address as all lower case when asserting if it exists - assert.dataSourceExists('GraphTokenLockWallet', '0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'.toLowerCase()) + // Assert, dass eine Datenquelle mit der Adresse "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" erstellt wurde + // Beachten Sie, dass der Typ `Address` bei der Dekodierung in Kleinbuchstaben umgewandelt wird, so dass Sie die Adresse in Kleinbuchstaben übergeben müssen, wenn Sie
behaupten, dass sie existiert + assert.dataSourceExists('GraphTokenLockWallet', '0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'.toLowerCase()) - logDataSources('GraphTokenLockWallet') + logDataSources('GraphTokenLockWallet') }) ``` ##### Example `logDataSource` output ```bash -🛠 { +🛠️ { "0xa16081f360e3847006db660bae1c6d1b2e17ec2a": { "kind": "ethereum/contract", "name": "GraphTokenLockWallet", - "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2a", + "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2a", "context": null }, "0xa16081f360e3847006db660bae1c6d1b2e17ec2b": { "kind": "ethereum/contract", "name": "GraphTokenLockWallet", - "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2b", + "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2b", "context": { "contextVal": { "type": "Int", - "data": 325 + "data": 325 } } } @@ -1160,45 +1160,45 @@ test('ethereum/contract dataSource creation example', () => { #### Testing `file/ipfs` templates -Similarly to contract dynamic data sources, users can test test file data sources and their handlers +Ähnlich wie bei dynamischen Vertrags-Datenquellen können Benutzer auch Datei-Datenquellen und deren Handler testen ##### Example `subgraph.yaml` ```yaml ...
-templates: - - kind: file/ipfs - name: GraphTokenLockMetadata - network: mainnet - mapping: - kind: ethereum/events - apiVersion: 0.0.6 - language: wasm/assemblyscript - file: ./src/token-lock-wallet.ts +templates: + - kind: file/ipfs + name: GraphTokenLockMetadata + network: mainnet + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/token-lock-wallet.ts handler: handleMetadata + entities: - TokenLockMetadata abis: - name: GraphTokenLockWallet - file: ./abis/GraphTokenLockWallet.json + file: ./abis/GraphTokenLockWallet.json ``` ##### Example `schema.graphql` ```graphql """ -Token Lock Wallets which hold locked GRT +Token-Sperr-Wallets, die gesperrte GRT halten """ -type TokenLockMetadata @entity { - "The address of the token lock wallet" +type TokenLockMetadata @entity { + "Die Adresse der Token-Sperr-Wallet" id: ID! - "Start time of the release schedule" + "Startzeit des Release-Zeitplans" startTime: BigInt! - "End time of the release schedule" - endTime: BigInt! - "Number of periods between start time and end time" - periods: BigInt! - "Time when the releases start" + "Endzeit des Release-Zeitplans" + endTime: BigInt! + "Anzahl der Perioden zwischen Startzeit und Endzeit" + periods: BigInt! + "Zeitpunkt, zu dem die Freigaben beginnen" releaseStartTime: BigInt!
} ``` @@ -1218,27 +1218,27 @@ type TokenLockMetadata @entity { ```typescript export function handleMetadata(content: Bytes): void { - // dataSource.stringParams() returns the File DataSource CID - // stringParam() will be mocked in the handler test - // for more info https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files - let tokenMetadata = new TokenLockMetadata(dataSource.stringParam()) - const value = json.fromBytes(content).toObject() - - if (value) { - const startTime = value.get('startTime') - const endTime = value.get('endTime') - const periods = value.get('periods') - const releaseStartTime = value.get('releaseStartTime') - - if (startTime && endTime && periods && releaseStartTime) { - tokenMetadata.startTime = startTime.toBigInt() - tokenMetadata.endTime = endTime.toBigInt() - tokenMetadata.periods = periods.toBigInt() - tokenMetadata.releaseStartTime = releaseStartTime.toBigInt() - } - - tokenMetadata.save() - } + // dataSource.stringParams() gibt die CID der File DataSource zurück + // stringParam() wird im Handler-Test gemockt + // für weitere Informationen https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files + let tokenMetadata = new TokenLockMetadata(dataSource.stringParam()) + const value = json.fromBytes(content).toObject() + + if (value) { + const startTime = value.get('startTime') + const endTime = value.get('endTime') + const periods = value.get('periods') + const releaseStartTime = value.get('releaseStartTime') + + if (startTime && endTime && periods && releaseStartTime) { + tokenMetadata.startTime = startTime.toBigInt()
+ tokenMetadata.endTime = endTime.toBigInt() + tokenMetadata.periods = periods.toBigInt() + tokenMetadata.releaseStartTime = releaseStartTime.toBigInt() + } + + tokenMetadata.save() + } } ``` @@ -1249,57 +1249,57 @@ import { assert, test, dataSourceMock, readFile } from 'matchstick-as' import { Address, BigInt, Bytes, DataSourceContext, ipfs, json, store, Value } from '@graphprotocol/graph-ts' import { handleMetadata } from '../../src/token-lock-wallet' -import { TokenLockMetadata } from '../../generated/schema' +import { TokenLockMetadata } from '../../generated/schema' import { GraphTokenLockMetadata } from '../../generated/templates' -test('file/ipfs dataSource creation example', () => { - // Generate the dataSource CID from the ipfsHash + ipfs path file - // For example QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json - const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' - const CID = `${ipfshash}/example.json` - - // Create a new dataSource using the generated CID - GraphTokenLockMetadata.create(CID) - - // Assert the dataSource has been created - assert.dataSourceCount('GraphTokenLockMetadata', 1) - assert.dataSourceExists('GraphTokenLockMetadata', CID) - logDataSources('GraphTokenLockMetadata') - - // Now we have to mock the dataSource metadata and specifically dataSource.stringParam() - // dataSource.stringParams actually uses the value of dataSource.address(), so we will mock the address using dataSourceMock from matchstick-as - // First we will reset the values and then use dataSourceMock.setAddress() to set the CID - dataSourceMock.resetValues() - dataSourceMock.setAddress(CID) - - // Now we need to generate the Bytes to pass to the dataSource handler - // For this case we introduced a new function readFile, that reads a local json and returns the content as Bytes - const content = readFile(`path/to/metadata.json`) - handleMetadata(content) - - // Now we will test if a TokenLockMetadata was
created - const metadata = TokenLockMetadata.load(CID) - - assert.bigIntEquals(metadata!.endTime, BigInt.fromI32(1)) - assert.bigIntEquals(metadata!.periods, BigInt.fromI32(1)) - assert.bigIntEquals(metadata!.releaseStartTime, BigInt.fromI32(1)) - assert.bigIntEquals(metadata!.startTime, BigInt.fromI32(1)) +test('file/ipfs dataSource creation example', () => { + // Generieren Sie die dataSource CID aus der ipfsHash + ipfs Pfaddatei + // Zum Beispiel QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json + const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' + const CID = `${ipfshash}/example.json` + + // Erstellen einer neuen dataSource mit der generierten CID + GraphTokenLockMetadata.create(CID) + + // Assert, dass die dataSource erstellt wurde + assert.dataSourceCount('GraphTokenLockMetadata', 1) + assert.dataSourceExists('GraphTokenLockMetadata', CID) + logDataSources('GraphTokenLockMetadata') + + // Nun müssen wir die dataSource-Metadaten und insbesondere dataSource.stringParam() mocken + // dataSource.stringParams verwendet eigentlich den Wert von dataSource.address(), also werden wir die Adresse mit dataSourceMock von matchstick-as nachbilden + // Zuerst werden wir die Werte zurücksetzen und dann dataSourceMock.setAddress() verwenden, um die CID zu setzen + dataSourceMock.resetValues() + dataSourceMock.setAddress(CID) + + // Nun müssen wir die Bytes generieren, um sie an den dataSource-Handler zu übergeben + // Für diesen Fall haben wir eine neue Funktion readFile eingeführt, die ein lokales json liest und den Inhalt als Bytes zurückgibt + const content = readFile(`path/to/metadata.json`) + handleMetadata(content) + + // Nun testen wir, ob ein TokenLockMetadata erstellt wurde + const metadata = TokenLockMetadata.load(CID) + + assert.bigIntEquals(metadata!.endTime, BigInt.fromI32(1)) + assert.bigIntEquals(metadata!.periods, BigInt.fromI32(1))
+ assert.bigIntEquals(metadata!.releaseStartTime, BigInt.fromI32(1)) + assert.bigIntEquals(metadata!.startTime, BigInt.fromI32(1)) }) ``` ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Mit **Matchstick** können Subgraph-Entwickler ein Skript ausführen, das die Testabdeckung der geschriebenen Unit-Tests berechnet. -The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. +Das Testabdeckungswerkzeug nimmt die kompilierten `wasm`-Testbinärdateien und konvertiert sie in `wat`-Dateien, die dann leicht inspiziert werden können, um zu sehen, ob die in `subgraph.yaml` definierten Handler aufgerufen wurden oder nicht. Da die Codeabdeckung (und das Testen als Ganzes) in AssemblyScript und WebAssembly noch in den Kinderschuhen steckt, kann **Matchstick** nicht auf Zweigabdeckung prüfen. Stattdessen verlassen wir uns auf die Annahme, dass, wenn ein bestimmter Handler aufgerufen wurde, das Ereignis/die Funktion für diesen Handler korrekt gemockt wurde. -### Prerequisites +### Voraussetzungen -To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: +Um die Testabdeckungsfunktion von **Matchstick** nutzen zu können, müssen Sie einige Dinge vorbereiten: #### Export your handlers -In order for **Matchstick** to check which handlers are being run, those handlers need to be exported from the **test file**.
So for instance in our example, in our gravity.test.ts file we have the following handler being imported: +Damit **Matchstick** prüfen kann, welche Handler ausgeführt werden, müssen diese Handler aus der **Testdatei** exportiert werden. In unserem Beispiel haben wir also in der Datei gravity.test.ts den folgenden Handler importiert: ```typescript import { handleNewGravatar } from '../../src/gravity' @@ -1311,7 +1311,7 @@ In order for that function to be visible (for it to be included in the `wat` fil export { handleNewGravatar } ``` -### Usage +### Verwendung Once that's all set up, to run the test coverage tool, simply run: @@ -1328,7 +1328,7 @@ You could also add a custom `coverage` command to your `package.json` file, like }, ``` -That will execute the coverage tool and you should see something like this in the terminal: +Dadurch wird das Coverage-Tool ausgeführt und Sie sollten in etwa Folgendes im Terminal sehen: ```sh $ graph test -c @@ -1375,9 +1375,9 @@ The log output includes the test run duration. Here's an example: ## Common compiler errors -> Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined +> Kritisch: WasmInstance konnte nicht aus einem gültigen Modul mit Kontext erstellt werden: unknown import: wasi_snapshot_preview1::fd_write wurde nicht definiert -This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/subgraphs/developing/creating/graph-ts/api/#logging-api) +Dies bedeutet, dass Sie `console.log` in Ihrem Code verwendet haben, was von AssemblyScript nicht unterstützt wird. Bitte verwenden Sie die [Logging API](/subgraphs/developing/creating/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?.
> @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Zusätzliche Ressourcen -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx index 7bc4c42301c5..6db33ed6bf1e 100644 --- a/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,10 +1,11 @@ --- title: Bereitstellen eines Subgraphen in mehreren Netzen +sidebarTitle: Bereitstellung für mehrere Netzwerke --- Auf dieser Seite wird erklärt, wie man einen Subgraphen in mehreren Netzwerken bereitstellt. Um einen Subgraphen bereitzustellen, müssen Sie zunächst die [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) installieren. Wenn Sie noch keinen Subgraphen erstellt haben, lesen Sie [Erstellen eines Subgraphen](/developing/creating-a-subgraph/). -## Breitstellen des Subgraphen in mehreren Netzen +## Deploying the Subgraph to multiple networks In manchen Fällen möchten Sie denselben Subgraph in mehreren Netzen bereitstellen, ohne den gesamten Code zu duplizieren. Die größte Herausforderung dabei ist, dass die Vertragsadressen in diesen Netzen unterschiedlich sind. 
diff --git a/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx index b559bcdff049..4f784b4304b8 100644 --- a/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -10,18 +10,18 @@ Erfahren Sie, wie Sie Ihren Subgraphen in Subgraph Studio bereitstellen können. In [Subgraph Studio] (https://thegraph.com/studio/) können Sie Folgendes tun: -- Eine Liste der von Ihnen erstellten Subgraphen anzeigen -- Verwalten, Details anzeigen und den Status eines bestimmten Subgraphen visualisieren -- Ihre API-Schlüssel für bestimmte Subgraphen erstellen und verwalten +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Ihre API-Schlüssel auf bestimmte Domains einschränken und nur bestimmten Indexern die Abfrage mit diesen Schlüsseln erlauben - Ihren Subgraphen erstellen - Ihren Subgraphen mit The Graph CLI verteilen - Ihren Subgraphen in der „Playground“-Umgebung testen - Ihren Subgraphen in Staging unter Verwendung der Entwicklungsabfrage-URL integrieren -- Ihren Subgraphen auf The Graph Network veröffentlichen -- Ihre Rechnungen verwalten +- Veröffentlichen Sie Ihren Subgraphen im The Graph Network +- Verwalten Sie Ihre Rechnungen -## Installieren der The Graph-CLI +## Installieren der Graph-CLI Vor der Bereitstellung müssen Sie The Graph CLI installieren. 
@@ -57,13 +57,7 @@ npm install -g @graphprotocol/graph-cli ### Kompatibilität von Subgraphen mit dem The Graph Network -Um von Indexern auf The Graph Network unterstützt zu werden, müssen Subgraphen: - -- Ein [unterstütztes Netzwerk](/supported-networks/) indizieren -- Keine der folgenden Funktionen verwenden: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +Um von Indexern auf The Graph Network unterstützt zu werden, müssen Subgraphen ein [supported network](/supported-networks/) indizieren. Eine vollständige Liste der unterstützten und nicht unterstützten Features finden Sie im [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) Repo. ## Initialisieren Ihres Subgraphen @@ -81,7 +75,7 @@ Nachdem Sie `graph init` ausgeführt haben, werden Sie aufgefordert, die Vertrag ## Graph Auth -Bevor Sie Ihren Subgraphen in Subgraph Studio bereitstellen können, müssen Sie sich bei Ihrem Konto in der CLI anmelden. Dazu benötigen Sie Ihren Bereitstellungsschlüssel, den Sie auf der Seite mit den Details Ihres Subgraphen finden. +Bevor Sie Ihren Subgraph in Subgraph Studio bereitstellen können, müssen Sie sich bei Ihrem Konto in der CLI anmelden. Dazu benötigen Sie Ihren Deploy-Schlüssel, den Sie auf Ihrer Subgraph-Detailseite finden. Verwenden Sie dann den folgenden Befehl, um sich über die CLI zu authentifizieren: @@ -91,11 +85,11 @@ graph auth ## Bereitstellen eines Subgraphen -Sobald Sie fertig sind, können Sie Ihren Subgraphen an Subgraph Studio übergeben. +Sobald Sie bereit sind, können Sie Ihren Subgraph in Subgraph Studio bereitstellen. -> Wenn Sie einen Subgraphen über die Befehlszeilenschnittstelle bereitstellen, wird er in das Studio übertragen, wo Sie ihn testen und die Metadaten aktualisieren können. Bei dieser Aktion wird Ihr Subgraph nicht im dezentralen Netzwerk veröffentlicht. 
+> Wenn Sie einen Subgraphen mit der Befehlszeilenschnittstelle bereitstellen, wird er in das Studio übertragen, wo Sie ihn testen und die Metadaten aktualisieren können. Durch diese Aktion wird Ihr Subgraph nicht im dezentralen Netzwerk veröffentlicht. -Verwenden Sie den folgenden CLI-Befehl, um Ihren Subgraphen bereitzustellen: +Verwenden Sie den folgenden CLI-Befehl, um Ihren Subgraph zu verteilen: ```bash graph deploy @@ -108,13 +102,13 @@ Nach der Ausführung dieses Befehls wird die CLI nach einer Versionsbezeichnung ## Testen Ihres Subgraphen -Nach der Bereitstellung können Sie Ihren Subgraphen testen (entweder in Subgraph Studio oder in Ihrer eigenen Anwendung, mit der Bereitstellungsabfrage-URL), eine weitere Version bereitstellen, die Metadaten aktualisieren und im [Graph Explorer](https://thegraph.com/explorer) veröffentlichen, wenn Sie bereit sind. +Nach dem Deployment können Sie Ihren Subgraph testen (entweder in Subgraph Studio oder in Ihrer eigenen Anwendung, mit der Deployment-Query-URL), eine weitere Version deployen, die Metadaten aktualisieren und im [Graph Explorer](https://thegraph.com/explorer) veröffentlichen, wenn Sie bereit sind. Verwenden Sie Subgraph Studio, um die Protokolle auf dem Dashboard zu überprüfen und nach Fehlern in Ihrem Subgraphen zu suchen. -## Veröffentlichung Ihres Subgraphen +## Veröffentlichen Sie Ihren Subgraph -Um Ihren Subgraphen erfolgreich zu veröffentlichen, lesen Sie [Veröffentlichen eines Subgraphen](/subgraphs/developing/publishing/publishing-a-subgraph/). +Um Ihren Subgraphen erfolgreich zu veröffentlichen, lesen Sie bitte [Einen Subgraphen veröffentlichen](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versionierung Ihres Subgraphen mit der CLI @@ -122,15 +116,15 @@ Wenn Sie Ihren Subgraphen aktualisieren möchten, können Sie wie folgt vorgehen - Sie können eine neue Version über die Befehlszeilenschnittstelle (CLI) in Studio bereitstellen (zu diesem Zeitpunkt ist sie nur privat). 
- Wenn Sie damit zufrieden sind, können Sie Ihre neue Bereitstellung im [Graph Explorer] (https://thegraph.com/explorer) veröffentlichen. -- Mit dieser Aktion wird eine neue Version Ihres Subgraphen erstellt, die von Kuratoren mit Signalen versehen und von Indexern indiziert werden kann. +- Mit dieser Aktion wird eine neue Version Ihres Subgraphen erstellt, die von Kuratoren mit Signalen versehen und von Indexierern indiziert werden kann. -Sie können auch die Metadaten Ihres Subgraphen aktualisieren, ohne eine neue Version zu veröffentlichen. Sie können Ihre Subgraph-Details in Studio (unter dem Profilbild, dem Namen, der Beschreibung usw.) aktualisieren, indem Sie eine Option namens **Details aktualisieren** im [Graph Explorer] (https://thegraph.com/explorer) aktivieren. Wenn diese Option aktiviert ist, wird eine Onchain-Transaktion generiert, die die Subgraph-Details im Explorer aktualisiert, ohne dass eine neue Version mit einer neuen Bereitstellung veröffentlicht werden muss. +Sie können die Metadaten Ihres Subgraphen auch aktualisieren, ohne eine neue Version zu veröffentlichen. Sie können die Details Ihres Subgraphen in Studio (unter dem Profilbild, dem Namen, der Beschreibung usw.) aktualisieren, indem Sie eine Option namens **Details aktualisieren** im [Graph Explorer] (https://thegraph.com/explorer) aktivieren. Wenn diese Option aktiviert ist, wird eine Onchain-Transaktion generiert, die die Subgraph-Details im Explorer aktualisiert, ohne dass Sie eine neue Version mit einem neuen Deployment veröffentlichen müssen. -> Hinweis: Die Veröffentlichung einer neuen Version eines Subgraphen im Netz ist mit Kosten verbunden. Zusätzlich zu den Transaktionsgebühren müssen Sie auch einen Teil der Kurationssteuer für das Auto-Migrations-Signal finanzieren. Sie können keine neue Version Ihres Subgraphen veröffentlichen, wenn Kuratoren nicht darauf signalisiert haben. Für weitere Informationen, lesen Sie bitte [hier](/resources/roles/curating/). 
+> Hinweis: Die Veröffentlichung einer neuen Version eines Subgraphen im Netz ist mit Kosten verbunden. Zusätzlich zu den Transaktionsgebühren müssen Sie auch einen Teil der Kurationssteuer für das Auto-Migrations-Signal finanzieren. Sie können keine neue Version Ihres Subgraphen veröffentlichen, wenn die Kuratoren nicht auf ihn signalisiert haben. Für weitere Informationen, lesen Sie bitte [hier](/resources/roles/curating/). ## Automatische Archivierung von Subgraph-Versionen -Immer wenn Sie eine neue Subgraph-Version in Subgraph Studio bereitstellen, wird die vorherige Version archiviert. Archivierte Versionen werden nicht indiziert/synchronisiert und können daher nicht abgefragt werden. Sie können die Archivierung einer archivierten Version Ihres Subgraphen in Subgraph Studio dearchivieren. +Immer wenn Sie eine neue Subgraph-Version in Subgraph Studio bereitstellen, wird die vorherige Version archiviert. Archivierte Versionen werden nicht indiziert/synchronisiert und können daher nicht abgefragt werden. Sie können eine archivierte Version Ihres Subgraphen in Subgraph Studio dearchivieren. > Hinweis: Frühere Versionen von nicht veröffentlichten Subgraphen, die in Studio bereitgestellt wurden, werden automatisch archiviert. diff --git a/website/src/pages/de/subgraphs/developing/developer-faq.mdx b/website/src/pages/de/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..1584166374a4 100644 --- a/website/src/pages/de/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/de/subgraphs/developing/developer-faq.mdx @@ -1,95 +1,95 @@ --- -title: Developer FAQ +title: Entwickler-FAQ sidebarTitle: FAQ --- -This page summarizes some of the most common questions for developers building on The Graph. +Diese Seite fasst einige der häufigsten Fragen für Entwickler zusammen, die auf The Graph aufbauen. ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
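The FAQ answer above distinguishes event handlers (fastest) from the call and block handlers you fall back on when a contract emits no events. In the manifest these are sibling keys under a data source's `mapping` section; the fragment below is illustrative, with handler names and signatures borrowed from the gravity example used elsewhere in these docs:

```yaml
# Fragment of a dataSource's mapping section in subgraph.yaml (illustrative).
mapping:
  eventHandlers: # preferred: triggered by contract events, fastest to index
    - event: NewGravatar(uint256,address,string,string)
      handler: handleNewGravatar
  callHandlers: # fallback when no events exist; noticeably slower
    - function: createGravatar(string,string)
      handler: handleCreateGravatar
  blockHandlers: # runs on every block; slowest, use sparingly
    - handler: handleBlock
```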
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? -Not currently, as mappings are written in AssemblyScript. +Gegenwärtig nicht, da Mappings in AssemblyScript geschrieben werden. -One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +Eine mögliche alternative Lösung hierzu ist die Speicherung von Rohdaten in Entitäten und die Durchführung von Logik, die JS-Bibliotheken auf dem Client erfordert. -### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 9. Ist es möglich, beim Abhören mehrerer Verträge die Reihenfolge der zu hörenden Ereignisse zu wählen? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. -### 10. How are templates different from data sources? +### 10. Wie unterscheiden sich Vorlagen von Datenquellen? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. 
Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. You can also use `graph add` command to add a new dataSource. -### 12. In what order are the event, block, and call handlers triggered for a data source? +### 12. In welcher Reihenfolge werden die Ereignis-, Block- und Aufrufhandler für eine Datenquelle ausgelöst? -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +Ereignis- und Aufruf-Handler sind innerhalb des Blocks zunächst nach dem Index der Transaktion geordnet. Ereignis- und Aufruf-Handler innerhalb derselben Transaktion werden nach einer Konvention geordnet: zuerst Ereignis-Handler, dann Aufruf-Handler, wobei jeder Typ die Reihenfolge einhält, in der sie im Manifest definiert sind. 
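The ordering rules described in FAQ 12 — transaction index first, event handlers before call handlers within a transaction, manifest order within each kind, and block handlers after everything else — can be sketched as a comparator. This is an illustrative model only, not graph-node's actual implementation, and as the answer notes the rules are subject to change:

```typescript
// Illustrative model of trigger ordering within one block (not graph-node code).
type Kind = "event" | "call" | "block";

interface Trigger {
  kind: Kind;
  txIndex: number; // index of the transaction within the block (unused for block handlers)
  manifestOrder: number; // position of the handler in subgraph.yaml
}

const kindRank: Record<Kind, number> = { event: 0, call: 1, block: 2 };

function orderTriggers(triggers: Trigger[]): Trigger[] {
  return [...triggers].sort((a, b) => {
    // Block handlers run after all event and call handlers...
    if ((a.kind === "block") !== (b.kind === "block")) {
      return kindRank[a.kind] - kindRank[b.kind];
    }
    // ...and among themselves follow manifest order.
    if (a.kind === "block") return a.manifestOrder - b.manifestOrder;
    // Event/call handlers: transaction index first, ...
    if (a.txIndex !== b.txIndex) return a.txIndex - b.txIndex;
    // ...then events before calls within the same transaction, ...
    if (a.kind !== b.kind) return kindRank[a.kind] - kindRank[b.kind];
    // ...then manifest order within the same kind.
    return a.manifestOrder - b.manifestOrder;
  });
}
```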
Block-Handler werden nach Ereignis- und Anruf-Handlern ausgeführt, und zwar in der Reihenfolge, in der sie im Manifest definiert sind. Auch diese Ordnungsregeln können sich ändern. -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +Wenn neue dynamische Datenquellen erstellt werden, beginnen die für dynamische Datenquellen definierten Handler erst mit der Verarbeitung, nachdem alle vorhandenen Datenquellen-Handler verarbeitet wurden, und wiederholen sich in der gleichen Reihenfolge, wenn sie ausgelöst werden. -### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 13. Wie stelle ich sicher, dass ich die neueste Version von graph-node für meine lokalen Implementierungen verwende? -You can run the following command: +Sie können den folgenden Befehl ausführen: ```sh docker pull graphprotocol/graph-node:latest ``` -> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. +> Hinweis: docker / docker-compose verwendet immer die Version von graph-node, die beim ersten Start geladen wurde. Stellen Sie also sicher, dass Sie die neueste Version von graph-node verwenden. -### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. Welches ist der empfohlene Weg, um „automatisch generierte“ IDs für eine Entität zu erstellen, wenn Ereignisse behandelt werden? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. 
Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. -## Network Related +## Netzwerkspezifisch -### 16. What networks are supported by The Graph? +### 16. Welche Netze werden von The Graph unterstützt? You can find the list of the supported networks [here](/supported-networks/). -### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? +### 17. Ist es möglich, innerhalb von Event-Handlern zwischen Netzen (Mainnet, Sepolia, Local) zu unterscheiden? Yes. You can do this by importing `graph-ts` as per the example below: @@ -100,31 +100,31 @@ dataSource.network() dataSource.address() ``` -### 18. Do you support block and call handlers on Sepolia? +### 18. Unterstützen Sie Block- und Call-Handler auf Sepolia? -Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. +Ja, Sepolia unterstützt Block-Handler, Call-Handler und Event-Handler. Es ist anzumerken, dass Ereignis-Handler weitaus leistungsfähiger sind als die beiden anderen Handler und in jedem EVM-kompatiblen Netzwerk unterstützt werden. ## Indexing & Querying Related -### 19. Is it possible to specify what block to start indexing on? +### 19. Ist es möglich festzulegen, bei welchem Block die Indizierung beginnen soll? Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. 
What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? -Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +Ja! Probieren Sie den folgenden Befehl aus und ersetzen Sie „organization/subgraphName“ durch die Organisation, unter der sie veröffentlicht ist, und den Namen Ihres Subgrafen: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 22. Is there a limit to how many objects The Graph can return per query? +### 22. Gibt es eine Grenze für die Anzahl der Objekte, die The Graph pro Abfrage zurückgeben kann? -By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: +Standardmäßig sind die Abfrageantworten auf 100 Elemente pro Sammlung beschränkt. Wenn Sie mehr erhalten möchten, können Sie bis zu 1000 Elemente pro Sammlung erhalten und darüber hinaus können Sie mit paginieren: ```graphql someCollection(first: 1000, skip: ) { ... } @@ -132,15 +132,15 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? 
What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## Miscellaneous +## Sonstiges -### 24. Is it possible to use Apollo Federation on top of graph-node? +### 24. Ist es möglich, Apollo Federation zusätzlich zum Graph-Knoten zu verwenden? -Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. +Federation wird noch nicht unterstützt. Zurzeit können Sie Schema-Stitching verwenden, entweder auf dem Client oder über einen Proxy-Dienst. -### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? +### 25. Ich möchte einen Beitrag leisten oder ein GitHub-Problem hinzufügen. Wo kann ich die Open-Source-Repositories finden? 
- [graph-node](https://github.com/graphprotocol/graph-node) - [graph-tooling](https://github.com/graphprotocol/graph-tooling) diff --git a/website/src/pages/de/subgraphs/developing/introduction.mdx b/website/src/pages/de/subgraphs/developing/introduction.mdx index fd2872880ce0..6ea77e4cf497 100644 --- a/website/src/pages/de/subgraphs/developing/introduction.mdx +++ b/website/src/pages/de/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. -### What is GraphQL? +### Was ist GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. 
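The introduction above describes querying Subgraphs with GraphQL, and FAQ 22 notes that responses default to 100 items per collection, extendable to 1000 and paginated with `first`/`skip`. A minimal sketch of building such a request body follows; the collection name, field list, and endpoint URL are placeholders, not a specific published Subgraph:

```typescript
// Build the JSON body for a paginated GraphQL query against a Subgraph
// endpoint. "gravatars" and the "id" field are placeholders — substitute
// your own collection and fields.
function paginatedQueryBody(collection: string, page: number, pageSize = 1000): string {
  const skip = page * pageSize;
  const query = `{ ${collection}(first: ${pageSize}, skip: ${skip}) { id } }`;
  return JSON.stringify({ query });
}

// Usage (the endpoint URL shape is a hypothetical Subgraph Studio query URL):
// await fetch("https://api.studio.thegraph.com/query/<id>/<name>/<version>", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: paginatedQueryBody("gravatars", 0),
// });
```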
-### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/de/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/de/subgraphs/developing/managing/deleting-a-subgraph.mdx index 91c22f7c44ba..e01d84c31aee 100644 --- a/website/src/pages/de/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/de/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Schritt für Schritt -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. 
Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. 
diff --git a/website/src/pages/de/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/de/subgraphs/developing/managing/transferring-a-subgraph.mdx index d6837fbade98..a4cbb348e418 100644 --- a/website/src/pages/de/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/de/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs, die im dezentralen Netzwerk veröffentlicht werden, haben eine NFT, die auf die Adresse geprägt wird, die den Subgraph veröffentlicht hat. Die NFT basiert auf dem Standard ERC721, der Überweisungen zwischen Konten im The Graph Network erleichtert. +Die im dezentralen Netzwerk veröffentlichten Subgraphen haben eine NFT, die auf die Adresse geprägt ist, die den Subgraphen veröffentlicht hat. Die NFT basiert auf einem ERC721-Standard, der Überweisungen zwischen Konten im The Graph Network erleichtert. ## Erinnerungshilfen -- Wer im Besitz der NFT ist, kontrolliert den Subgraph. -- Wenn der Eigentümer beschließt, das NFT zu verkaufen oder zu übertragen, kann er diesen Subgraph im Netz nicht mehr bearbeiten oder aktualisieren. -- Sie können die Kontrolle über einen Subgraph leicht an eine Multisig übertragen. -- Ein Community-Mitglied kann einen Subgraph im Namen einer DAO erstellen. +- Whoever owns the NFT controls the Subgraph. +- Wenn der Eigentümer beschließt, das NFT zu verkaufen oder zu übertragen, kann er diesen Subgraphen im Netz nicht mehr bearbeiten oder aktualisieren. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. 
## Betrachten Sie Ihren Subgraph als NFT -Um Ihren Subgraph als NFT zu betrachten, können Sie einen NFT-Marktplatz wie **OpenSea** besuchen: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,15 +27,15 @@ https://rainbow.me/your-wallet-addres ## Schritt für Schritt -Um das Eigentum an einem Subgraph zu übertragen, gehen Sie wie folgt vor: +To transfer ownership of a Subgraph, do the following: 1. Verwenden Sie die in Subgraph Studio integrierte Benutzeroberfläche: ![Subgraph-Besitzübertragung](/img/subgraph-ownership-transfer-1.png) -2. Wählen Sie die Adresse, an die Sie den Subgraph übertragen möchten: +2. Choose the address that you would like to transfer the Subgraph to: - ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) + ![Subgraph-Eigentumsübertragung](/img/subgraph-ownership-transfer-2.png) Optional können Sie auch die integrierte Benutzeroberfläche von NFT-Marktplätzen wie OpenSea verwenden: diff --git a/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 129d063a2e95..2fa5e3654038 100644 --- a/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Veröffentlichung eines Subgraphen im dezentralen Netzwerk +sidebarTitle: Veröffentlichung im dezentralen Netzwerk --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. 
-Wenn Sie einen Subgraphen im dezentralen Netzwerk veröffentlichen, stellen Sie ihn für andere zur Verfügung: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,23 +18,23 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -Alle veröffentlichten Versionen eines bestehenden Subgraphen können: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Aktualisierung der Metadaten für einen veröffentlichten Subgraphen +### Updating metadata for a published Subgraph -- Nachdem Sie Ihren Subgraphen im dezentralen Netzwerk veröffentlicht haben, können Sie die Metadaten jederzeit in Subgraph Studio aktualisieren. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Sobald Sie Ihre Änderungen gespeichert und die Aktualisierungen veröffentlicht haben, werden sie im Graph Explorer angezeigt. - Es ist wichtig zu beachten, dass bei diesem Vorgang keine neue Version erstellt wird, da sich Ihre Bereitstellung nicht geändert hat. 
## Veröffentlichen über die CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Öffnen Sie den `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. @@ -43,7 +44,7 @@ As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`]( ### Anpassen Ihrer Bereitstellung -Sie können Ihre Subgraph-Erstellung auf einen bestimmten IPFS-Knoten hochladen und Ihre Bereitstellung mit den folgenden Flags weiter anpassen: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -63,31 +64,31 @@ FLAGS ## Hinzufügen von Signalen zu Ihrem Subgraphen -Entwickler können ihren Subgraphen ein GRT-Signal hinzufügen, um Indexer zur Abfrage des Subgraphen zu veranlassen. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- Wenn ein Subgraph für Indexing Rewards in Frage kommt, erhalten Indexer, die einen „Beweis für die Indizierung“ erbringen, einen GRT Reward, der sich nach der Menge der signalisierten GRT richtet. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). 
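For reference, the CLI publish flow described above can be sketched as a short shell session. Only `graph codegen`, `graph build`, and `graph publish` come from the text itself; the global install command is an assumption based on the standard `@graphprotocol/graph-cli` npm distribution.

```bash
# Assumes graph-cli >= 0.73.0, the version the docs cite for `graph publish`
npm install -g @graphprotocol/graph-cli

# Generate AssemblyScript types from the schema/ABIs, then compile the Subgraph
graph codegen && graph build

# Publish to the decentralized network (prompts for a wallet confirmation)
graph publish
```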
-> Das Hinzufügen von Signalen zu einem Subgraphen, der nicht für Rewards in Frage kommt, zieht keine weiteren Indexer an. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > > Wenn Ihr Subgraph für Rewards in Frage kommt, wird empfohlen, dass Sie Ihren eigenen Subgraphen mit mindestens 3.000 GRT kuratieren, um zusätzliche Indexer für die Indizierung Ihres Subgraphen zu gewinnen. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. Bei der Signalisierung können Kuratoren entscheiden, ob sie für eine bestimmte Version des Subgraphen signalisieren wollen oder ob sie die automatische Migration verwenden wollen. Bei der automatischen Migration werden die Freigaben eines Kurators immer auf die neueste vom Entwickler veröffentlichte Version aktualisiert. Wenn sie sich stattdessen für eine bestimmte Version entscheiden, bleiben die Freigaben immer auf dieser spezifischen Version. -Indexer können Subgraphen für die Indizierung auf der Grundlage von Kurationssignalen finden, die sie im Graph Explorer sehen. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. 
![Explorer-Subgrafen](/img/explorer-subgraphs.png) -Mit Subgraph Studio können Sie Ihrem Subgraphen ein Signal hinzufügen, indem Sie GRT in der gleichen Transaktion, in der es veröffentlicht wird, zum Kurationspool Ihres Subgraphen hinzufügen. +Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternativ können Sie ein GRT-Signal zu einem veröffentlichten Subgraphen aus dem Graph Explorer hinzufügen. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal provenant de l'Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/de/subgraphs/developing/subgraphs.mdx b/website/src/pages/de/subgraphs/developing/subgraphs.mdx index 9e5dc5f613a6..1ac536f54378 100644 --- a/website/src/pages/de/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/de/subgraphs/developing/subgraphs.mdx @@ -4,13 +4,13 @@ title: Subgraphs ## Was ist ein Subgraph? -Ein Subgraph ist eine benutzerdefinierte, offene API, die Daten aus einer Blockchain extrahiert, verarbeitet und so speichert, dass sie einfach über GraphQL abgefragt werden können. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph-Fähigkeiten - **Zugangsdaten:** Subgraphs ermöglichen die Abfrage und Indizierung von Blockchain-Daten für web3. -- **Build:** Entwickler können Subgraphs für The Graph Network erstellen, bereitstellen und veröffentlichen. Um loszulegen, schauen Sie sich den Subgraph Entwickler [Quick Start](quick-start/) an. -- **Index & Abfrage:** Sobald ein Subgraph indiziert ist, kann jeder ihn abfragen. Alle im Netzwerk veröffentlichten Subgraphen können im [Graph Explorer] (https://thegraph.com/explorer) untersucht und abgefragt werden. 
+- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). ## Innerhalb eines Subgraph @@ -24,63 +24,63 @@ Die **Subgraph-Definition** besteht aus den folgenden Dateien: - mapping.ts\`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) Code, der die Ereignisdaten in die in Ihrem Schema definierten Entitäten übersetzt -Um mehr über die einzelnen Komponenten eines Subgraphs zu erfahren, lesen Sie bitte [Erstellen eines Subgraphs](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lebenszyklus -Hier ist ein allgemeiner Überblick über den Lebenszyklus eines Subgraphs: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Entwicklung -1. [Einen Subgraph erstellen](/entwickeln/einen-subgraph-erstellen/) -2. [Einen Subgraph bereitstellen](/deploying/deploying-a-subgraph-to-studio/) -3. [Testen eines Subgraphen](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. 
[Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. +- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. 
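The local build-and-test loop described above (Graph CLI plus Matchstick unit tests) might look like the following sketch; the interactive prompts and test layout depend on your project setup, and `graph init` scaffolding details are an assumption rather than something this page specifies.

```bash
# Scaffold a new Subgraph project (interactive prompts for network, contract, etc.)
graph init

# Regenerate types and compile the mappings to WASM
graph codegen && graph build

# Run Matchstick unit tests (typically under tests/)
graph test
```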
+When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. -- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. 
It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. ### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. 
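As a hedged illustration of the querying step above, a published Subgraph can be queried over HTTP through the network gateway. The API key, Subgraph ID, and the `tokens` entity below are placeholders for illustration, not values taken from this document.

```bash
# Query a published Subgraph via The Graph's gateway (placeholders in <...>)
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ tokens(first: 5) { id symbol } }"}' \
  "https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>"
```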
Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/de/subgraphs/explorer.mdx b/website/src/pages/de/subgraphs/explorer.mdx index 3cc0e39ef659..3a386698a7d4 100644 --- a/website/src/pages/de/subgraphs/explorer.mdx +++ b/website/src/pages/de/subgraphs/explorer.mdx @@ -2,255 +2,255 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Erschließen Sie die Welt der Subgraphen und Netzwerkdaten mit [Graph Explorer](https://thegraph.com/explorer). ## Überblick -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer besteht aus mehreren Teilen, in denen Sie mit [Subgraphen](https://thegraph.com/explorer?chain=arbitrum-one) interagieren, [delegieren](https://thegraph.com/explorer/delegate?chain=arbitrum-one), [Teilnehmer](https://thegraph.com/explorer/participants?chain=arbitrum-one) einbeziehen, [Netzwerkinformationen](https://thegraph.com/explorer/network?chain=arbitrum-one) anzeigen und auf Ihr Benutzerprofil zugreifen können. ## Inside Explorer -The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide). 
+Nachfolgend finden Sie eine Übersicht über die wichtigsten Funktionen von Graph Explorer. Für zusätzliche Unterstützung können Sie sich den [Graph Explorer Video Guide](/subgraphs/explorer/#video-guide) ansehen. -### Subgraphs Page +### Subgraphen-Seite -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +Nachdem Sie Ihren Subgraph in Subgraph Studio bereitgestellt und veröffentlicht haben, gehen Sie zu [Graph Explorer](https://thegraph.com/explorer) und klicken Sie auf den Link „[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)“ in der Navigationsleiste, um auf Folgendes zuzugreifen: -- Your own finished subgraphs -- Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- Ihre eigenen fertigen Subgraphen +- Von anderen veröffentlichte Subgraphen +- Den genauen Subgraphen, den Sie wünschen (basierend auf dem Erstellungsdatum, der Signalmenge oder dem Namen). -![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Explorer Bild 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +Wenn Sie in einen Subgraphen klicken, können Sie Folgendes tun: -- Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Testen Sie Abfragen auf dem Playground und nutzen Sie Netzwerkdetails, um fundierte Entscheidungen zu treffen. +- Signalisieren Sie GRT auf Ihrem eigenen Subgraphen oder den Subgraphen anderer, um die Indexierer auf seine Bedeutung und Qualität aufmerksam zu machen. 
- - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - Dies ist von entscheidender Bedeutung, da die Signalisierung eines Subgraphen einen Anreiz darstellt, ihn zu indizieren, was bedeutet, dass er schließlich im Netzwerk auftaucht, um Abfragen zu bedienen. -![Explorer Image 2](/img/Subgraph-Details.png) +![Explorer Bild 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +Auf der speziellen Seite jedes Subgraphen können Sie Folgendes tun: -- Signal/Un-signal on subgraphs -- View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph -- Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- Signal/Un-Signal auf Subgraphen +- Weitere Details wie Diagramme, aktuelle Bereitstellungs-ID und andere Metadaten anzeigen +- Versionen wechseln, um frühere Iterationen des Subgraphen zu erkunden +- Abfrage von Subgraphen über GraphQL +- Subgraphen auf dem Playground testen +- Anzeigen der Indexierer, die auf einem bestimmten Subgraphen indexieren +- Subgraphen-Statistiken (Zuweisungen, Kuratoren, etc.) +- Anzeigen der Entität, die den Subgraphen veröffentlicht hat -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![Explorer Bild 3](/img/Explorer-Signal-Unsignal.png) -### Delegate Page +### Delegierten-Seite -On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer. +Auf der [Delegierten-Seite](https://thegraph.com/explorer/delegate?chain=arbitrum-one) finden Sie Informationen zum Delegieren, zum Erwerb von GRT und zur Auswahl eines Indexierers. 
-On this page, you can see the following: +Auf dieser Seite können Sie Folgendes sehen: -- Indexers who collected the most query fees -- Indexers with the highest estimated APR +- Indexierer, die die meisten Abfragegebühren erhoben haben +- Indexierer mit dem höchsten geschätzten effektiven Jahreszins -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Darüber hinaus können Sie Ihren ROI berechnen und die besten Indexierer nach Name, Adresse oder Subgraph suchen. -### Participants Page +### Teilnehmer-Seite -This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. +Diese Seite bietet einen Überblick über alle „Teilnehmer“, d. h. alle am Netzwerk beteiligten Personen wie Indexierer, Delegatoren und Kuratoren. -#### 1. Indexers +#### 1. Indexierer -![Explorer Image 4](/img/Indexer-Pane.png) +![Explorer Bild 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexierer sind das Rückgrat des Protokolls. Sie setzen auf Subgraphen, indizieren sie und stellen allen, die Subgraphen konsumieren, Abfragen zur Verfügung. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. +In der Indexierer-Tabelle können Sie die Delegationsparameter eines Indexierers, seinen Einsatz, die Höhe seines Einsatzes für jeden Subgraphen und die Höhe seiner Einnahmen aus Abfragegebühren und Indizierungsprämien sehen. -**Specifics** +**Besonderheiten** -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. 
If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. -- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. -- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. +- Abfragegebührenkürzung – der Prozentsatz der Abfragegebührenrabatte, den der Indexierer bei der Aufteilung mit Delegatoren behält. +- Effektiver Reward Cut - der auf den Delegationspool angewandte Indexierungs-Reward Cut. Ist er negativ, bedeutet dies, dass der Indexierer einen Teil seiner Rewards abgibt. Ist er positiv, bedeutet dies, dass der Indexierer einen Teil seiner Rewards behält. +- Verbleibende Abklingzeit - die verbleibende Zeit, bis der Indexierer die oben genannten Delegationsparameter ändern kann. Abklingzeiten werden von Indexierern festgelegt, wenn sie ihre Delegationsparameter aktualisieren. +- Eigenkapital - Dies ist der hinterlegte Einsatz des Indexierers, der bei bösartigem oder falschem Verhalten gekürzt werden kann. 
+- Delegiert - Einsätze von Delegatoren, die vom Indexierer zugewiesen werden können, aber nicht gekürzt werden können. +- Zugewiesen - Einsatz, den Indexierer aktiv den Subgraphen zuweisen, die sie indizieren. +- Verfügbare Delegationskapazität - die Menge der delegierten Anteile, die die Indexierer noch erhalten können, bevor sie überdelegiert werden. +- Maximale Delegationskapazität - der maximale Betrag an delegiertem Einsatz, den der Indexierer produktiv akzeptieren kann. Ein überschüssiger delegierter Einsatz kann nicht für Zuteilungen oder Belohnungsberechnungen verwendet werden. +- Abfragegebühren - dies ist die Gesamtsumme der Gebühren, die Endnutzer über die gesamte Zeit für Abfragen von einem Indexierer bezahlt haben. +- Indexierer Rewards - dies ist die Gesamtsumme der Indexierer Rewards, die der Indexierer und seine Delegatoren über die gesamte Zeit verdient haben. Indexierer Rewards werden durch die Ausgabe von GRTs ausgezahlt. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. +Indexierer können sowohl Abfragegebühren als auch Indexierungsprämien verdienen. Funktionell geschieht dies, wenn Netzwerkteilnehmer GRT an einen Indexierer delegieren. Dadurch können Indexierer je nach ihren Indexierer-Parametern Abfragegebühren und Belohnungen erhalten. -- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. +- Indizierungsparameter können durch Klicken auf die rechte Seite der Tabelle oder durch Aufrufen des Profils eines Indexierers und Klicken auf die Schaltfläche „Delegieren“ festgelegt werden. 
-To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +Um mehr darüber zu erfahren, wie man ein Indexierer wird, können Sie einen Blick auf die [offizielle Dokumentation](/indexing/overview/) oder [The Graph Academy Indexer guides](https://thegraph.academy/delegators/choosing-indexers/) werfen. -![Indexing details pane](/img/Indexing-Details-Pane.png) +![Indizierungs-Detailfenster](/img/Indexing-Details-Pane.png) -#### 2. Curators +#### 2. Kuratoren -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Kuratoren analysieren Subgraphen, um festzustellen, welche Subgraphen von höchster Qualität sind. Sobald ein Kurator einen potenziell hochwertigen Subgraphen gefunden hat, kann er ihn kuratieren, indem er seine Bindungskurve signalisiert. Auf diese Weise teilen die Kuratoren den Indexierern mit, welche Subgraphen von hoher Qualität sind und indiziert werden sollten. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. - - The bonding curve incentivizes Curators to curate the highest quality data sources. +- Kuratoren können Community-Mitglieder, Datenkonsumenten oder sogar Subgraph-Entwickler sein, die ihre eigenen Subgraphen durch Einzahlung von GRT-Token in eine Bindungskurve signalisieren. + - Durch die Hinterlegung von GRT prägen Kuratoren Kurationsanteile an einem Subgraphen. 
Dadurch können sie einen Teil der Abfragegebühren verdienen, die von dem Subgraphen generiert werden, auf den sie sich gemeldet haben. + - Die Bindungskurve bietet den Kuratoren einen Anreiz, die hochwertigsten Datenquellen zu kuratieren. -In the The Curator table listed below you can see: +In der unten aufgeführten Tabelle von The Curator können Sie sehen: -- The date the Curator started curating -- The number of GRT that was deposited -- The number of shares a Curator owns +- Das Datum, an dem der Kurator mit der Kuratierung begonnen hat +- Die Anzahl der hinterlegten GRT +- Die Anzahl der Anteile, die ein Kurator besitzt -![Explorer Image 6](/img/Curation-Overview.png) +![Explorer Bild 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/). +Wenn Sie mehr über die Rolle des Kurators erfahren möchten, besuchen Sie [offizielle Dokumentation](/resources/roles/curating/) oder [The Graph Academy](https://thegraph.academy/curators/). -#### 3. Delegators +#### 3. Delegatoren -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. +Delegatoren spielen eine Schlüsselrolle bei der Aufrechterhaltung der Sicherheit und Dezentralisierung des Graph Network. Sie beteiligen sich am Netzwerk, indem sie GRT-Token an einen oder mehrere Indexierer delegieren (d.h. „staken“). -- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. -- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. 
-- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/). +- Ohne Delegatoren ist es für die Indexierer unwahrscheinlicher, signifikante Prämien und Gebühren zu verdienen. Daher locken Indexierer Delegatoren an, indem sie ihnen einen Teil ihrer Indexierungsprämien und Abfragegebühren anbieten. +- Die Delegatoren wählen die Indexierer auf der Grundlage einer Reihe von Variablen aus, wie z. B. frühere Leistungen, Indexierungsvergütungssätze und Anteile an den Abfragegebühren. +- Die Reputation innerhalb der Community kann bei der Auswahl ebenfalls eine Rolle spielen. Es wird empfohlen, mit den ausgewählten Indexierern über [The Graph's Discord](https://discord.gg/graphprotocol) oder [The Graph Forum](https://forum.thegraph.com/) in Kontakt zu treten. -![Explorer Image 7](/img/Delegation-Overview.png) +![Explorer Bild 7](/img/Delegation-Overview.png) -In the Delegators table you can see the active Delegators in the community and important metrics: +In der Tabelle „Delegatoren“ können Sie die aktiven Delegatoren in der Community und wichtige Metriken einsehen: -- The number of Indexers a Delegator is delegating towards -- A Delegator's original delegation -- The rewards they have accumulated but have not withdrawn from the protocol -- The realized rewards they withdrew from the protocol -- Total amount of GRT they have currently in the protocol -- The date they last delegated +- Die Anzahl der Indexierer, an die ein Delegator delegiert +- Die ursprüngliche Delegation eines Delegators +- Die Belohnungen, die sie angesammelt, aber nicht aus dem Protokoll entnommen haben +- Die realisierten Belohnungen, die sie aus dem Protokoll abgehoben haben +- Gesamtmenge an GRT, die sie derzeit im Protokoll haben +- Das Datum der letzten Delegation If you want to learn more about how to become a
Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). -### Network Page +### Netzwerk-Seite -On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +Auf dieser Seite können Sie globale KPIs sehen und haben die Möglichkeit, auf eine Epochenbasis zu wechseln und die Netzwerkmetriken detaillierter zu analysieren. Diese Details geben Ihnen ein Gefühl dafür, wie sich das Netzwerk im Laufe der Zeit entwickelt. #### Überblick -The overview section has both all the current network metrics and some cumulative metrics over time: +Der Übersichtsabschnitt enthält sowohl alle aktuellen Netzwerkmetriken als auch einige kumulative Metriken im Zeitverlauf: -- The current total network stake -- The stake split between the Indexers and their Delegators -- Total supply, minted, and burned GRT since the network inception -- Total Indexing rewards since the inception of the protocol -- Protocol parameters such as curation reward, inflation rate, and more -- Current epoch rewards and fees +- Die aktuelle Gesamtbeteiligung am Netzwerk +- Die Aufteilung der Anteile zwischen den Indexierern und ihren Delegatoren +- Gesamtangebot, geprägte und verbrannte GRT seit Gründung des Netzwerks +- Gesamte Indexierungs-Belohnungen seit Einführung des Protokolls +- Protokollparameter wie Kurationsbelohnung, Inflationsrate und mehr +- Aktuelle Epochenprämien und Gebühren -A few key details to note: +Ein paar wichtige Details sind zu beachten: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Die Abfragegebühren stellen die von den Verbrauchern generierten Gebühren dar**. Sie können von den Indexierern nach einem Zeitraum von mindestens 7 Epochen (siehe unten) eingefordert werden (oder auch nicht), nachdem ihre Zuweisungen zu den Subgraphen abgeschlossen wurden und die von ihnen gelieferten Daten von den Verbrauchern validiert wurden. +- **Die Indizierungs-Belohnungen stellen die Höhe der Belohnungen dar, die die Indexierer während der Epoche von der Netzwerkausgabe beansprucht haben.** Obwohl die Protokollausgabe festgelegt ist, werden die Belohnungen erst geprägt, wenn die Indexierer ihre Zuweisungen zu den Subgraphen schließen, die sie indiziert haben. Daher variiert die Höhe der Belohnungen pro Epoche (d. h. während einiger Epochen könnten Indexierer kollektiv Zuweisungen geschlossen haben, die seit vielen Tagen offen waren). -![Explorer Image 8](/img/Network-Stats.png) +![Explorer Bild 8](/img/Network-Stats.png) -#### Epochs +#### Epochen -In the Epochs section, you can analyze on a per-epoch basis, metrics such as: +Im Abschnitt Epochen können Sie auf Epochenbasis Metriken analysieren, wie z. B.: -- Epoch start or end block -- Query fees generated and indexing rewards collected during a specific epoch -- Epoch status, which refers to the query fee collection and distribution and can have different states: - - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - - The settling epochs are the ones in which the state channels are being settled.
This means that the Indexers are subject to slashing if the consumers open disputes against them. - - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. +- Start- oder Endblock der Epoche +- Während einer bestimmten Epoche generierte Abfragegebühren und eingenommene Indexierungsprämien +- Epochenstatus, der sich auf die Erhebung und Verteilung der Abfragegebühren bezieht und verschiedene Zustände annehmen kann: + - Die aktive Epoche ist diejenige, in der die Indexierer gerade Anteile zuweisen und Abfragegebühren erheben + - Die Abrechnungsepochen sind diejenigen, in denen die Zustandskanäle abgewickelt werden. Das bedeutet, dass die Indexierer der Kürzung unterliegen, wenn die Verbraucher Streitigkeiten gegen sie eröffnen. + - Die verteilenden Epochen sind die Epochen, in denen die Zustandskanäle für die Epochen abgerechnet werden und die Indexierer ihre Rückerstattung der Abfragegebühren beantragen können. + - Die abgeschlossenen Epochen sind die Epochen, für die die Indexierer keine Abfragegebühren-Rabatte mehr beanspruchen können. -![Explorer Image 9](/img/Epoch-Stats.png) +![Explorer Bild 9](/img/Epoch-Stats.png) -## Your User Profile +## Ihr Benutzerprofil -Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: +Ihr persönliches Profil ist der Ort, an dem Sie Ihre Netzwerkaktivitäten sehen können, unabhängig von Ihrer Rolle im Netzwerk.
Ihre Krypto-Wallet dient als Ihr Benutzerprofil, und im Benutzer-Dashboard können Sie die folgenden Registerkarten sehen: -### Profile Overview +### Profil-Übersicht -In this section, you can view the following: +In diesem Abschnitt können Sie Folgendes sehen: -- Any of your current actions you've done. -- Your profile information, description, and website (if you added one). +- Jede Ihrer aktuellen Aktionen, die Sie durchgeführt haben. +- Ihre Profilinformationen, Beschreibung und Website (falls Sie eine hinzugefügt haben). -![Explorer Image 10](/img/Profile-Overview.png) +![Explorer Bild 10](/img/Profile-Overview.png) -### Subgraphs Tab +### Registerkarte "Subgraphen" -In the Subgraphs tab, you’ll see your published subgraphs. +Auf der Registerkarte "Subgraphen" sehen Sie Ihre veröffentlichten Subgraphen. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> Dies schließt keine Subgraphen ein, die mit dem CLI zu Testzwecken bereitgestellt wurden. Subgraphen werden erst angezeigt, wenn sie im dezentralen Netzwerk veröffentlicht werden. -![Explorer Image 11](/img/Subgraphs-Overview.png) +![Explorer Bild 11](/img/Subgraphs-Overview.png) -### Indexing Tab +### Registerkarte "Indizierung" -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +Auf der Registerkarte "Indizierung" finden Sie eine Tabelle mit allen aktiven und historischen Zuweisungen zu Subgraphen. Hier finden Sie auch Diagramme, in denen Sie Ihre bisherige Leistung als Indexierer sehen und analysieren können. -This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: +Dieser Abschnitt enthält auch Angaben zu Ihren Netto-Indexierer-Belohnungen und Netto-Abfragegebühren.
Sie sehen die folgenden Metriken: -- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed -- Total Query Fees - the total fees that users have paid for queries served by you over time -- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT -- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators -- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators -- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior +- Delegated Stake - der Einsatz von Delegatoren, der von Ihnen zugewiesen werden kann, aber nicht gekürzt werden kann +- Gesamte Abfragegebühren - die gesamten Gebühren, die Nutzer im Laufe der Zeit für von Ihnen durchgeführte Abfragen bezahlt haben +- Indexierer-Rewards - der Gesamtbetrag der Indexierer-Rewards, die Sie erhalten haben, in GRT +- Gebührensenkung - der Prozentsatz der Rückerstattungen von Abfragegebühren, den Sie behalten, wenn Sie mit Delegatoren teilen +- Rewardkürzung - der Prozentsatz der Indexierer-Rewards, den Sie behalten, wenn Sie mit Delegatoren teilen +- Eigenkapital - Ihr hinterlegter Einsatz, der bei böswilligem oder falschem Verhalten gekürzt werden kann -![Explorer Image 12](/img/Indexer-Stats.png) +![Explorer Bild 12](/img/Indexer-Stats.png) -### Delegating Tab +### Registerkarte "Delegieren" -Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. +Die Delegatoren sind wichtig für The Graph Network. Sie müssen ihr Wissen nutzen, um einen Indexierer auszuwählen, der eine gesunde Rendite abwirft. -In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards.
+Auf der Registerkarte "Delegatoren" finden Sie die Details Ihrer aktiven und historischen Delegationen sowie die Metriken der Indexierer, an die Sie delegiert haben. -In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. +In der ersten Hälfte der Seite sehen Sie Ihr Delegationsdiagramm sowie das Diagramm „Nur Belohnungen“. Auf der linken Seite sehen Sie die KPIs, die Ihre aktuellen Delegationskennzahlen widerspiegeln. -The Delegator metrics you’ll see here in this tab include: +Auf dieser Registerkarte sehen Sie unter anderem die Delegator-Metriken: -- Total delegation rewards -- Total unrealized rewards -- Total realized rewards +- Delegationsprämien insgesamt +- Unrealisierte Rewards insgesamt +- Gesamte realisierte Rewards -In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). +In der zweiten Hälfte der Seite finden Sie die Tabelle der Delegationen. Hier sehen Sie die Indexierer, an die Sie delegiert haben, sowie deren Details (wie z. B. Belohnungskürzungen, Abklingzeit, usw.). -With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. +Mit den Schaltflächen auf der rechten Seite der Tabelle können Sie Ihre Delegierung verwalten - mehr delegieren, die Delegierung aufheben oder Ihre Delegierung nach der Auftauzeit zurückziehen. -Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). +Beachten Sie, dass dieses Diagramm horizontal gescrollt werden kann. 
Wenn Sie also ganz nach rechts scrollen, können Sie auch den Status Ihrer Delegation sehen (delegierend, Delegation wird aufgehoben, zurückziehbar). -![Explorer Image 13](/img/Delegation-Stats.png) +![Explorer Bild 13](/img/Delegation-Stats.png) -### Curating Tab +### Registerkarte "Kuratieren" -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. +Auf der Registerkarte „Kuratierung“ finden Sie alle Subgraphen, für die Sie ein Signal geben (damit Sie Abfragegebühren erhalten). Mit der Signalisierung können Kuratoren den Indexierern zeigen, welche Subgraphen wertvoll und vertrauenswürdig sind und somit signalisieren, dass sie indiziert werden müssen. -Within this tab, you’ll find an overview of: +Auf dieser Registerkarte finden Sie eine Übersicht über: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph -- Updated at date details +- Alle Subgraphen, die Sie kuratieren, mit Signaldetails +- Anteilssummen pro Subgraph +- Abfragebelohnungen pro Subgraph +- Details zum Aktualisierungsdatum -![Explorer Image 14](/img/Curation-Stats.png) +![Explorer Bild 14](/img/Curation-Stats.png) -### Your Profile Settings +### Ihre Profileinstellungen -Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators. +In Ihrem Benutzerprofil können Sie Ihre persönlichen Profildaten verwalten (z. B. einen ENS-Namen einrichten). Wenn Sie ein Indexierer sind, stehen Ihnen sogar noch mehr Einstellungen direkt zur Verfügung.
In Ihrem Benutzerprofil können Sie Ihre Delegationsparameter und Operatoren einrichten. -- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set -- Delegation parameters allow you to control the distribution of GRT between you and your Delegators. +- Operatoren führen im Namen des Indexierers begrenzte Aktionen im Protokoll durch, wie z. B. das Öffnen und Schließen von Allokationen. Operatoren sind in der Regel andere Ethereum-Adressen, die von ihrer Staking-Wallet getrennt sind und einen beschränkten Zugang zum Netzwerk haben, den Indexierer persönlich festlegen können +- Mit den Delegationsparametern können Sie die Verteilung der GRT zwischen Ihnen und Ihren Delegatoren steuern. -![Explorer Image 15](/img/Profile-Settings.png) +![Explorer Bild 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +Als Ihr offizielles Portal in die Welt der dezentralen Daten ermöglicht Ihnen der Graph Explorer eine Vielzahl von Aktionen, unabhängig von Ihrer Rolle im Netzwerk. Sie können zu Ihren Profileinstellungen gelangen, indem Sie das Dropdown-Menü neben Ihrer Adresse öffnen und dann auf die Schaltfläche Einstellungen klicken.
![Wallet details](/img/Wallet-Details.png) ## Zusätzliche Ressourcen -### Video Guide +### Video-Leitfaden -For a general overview of Graph Explorer, check out the video below: +Einen allgemeinen Überblick über Graph Explorer finden Sie in dem folgenden Video: diff --git a/website/src/pages/de/subgraphs/guides/_meta.js b/website/src/pages/de/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/de/subgraphs/guides/_meta.js +++ b/website/src/pages/de/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/de/subgraphs/guides/arweave.mdx b/website/src/pages/de/subgraphs/guides/arweave.mdx index 08e6c4257268..2e547c7b6813 100644 --- a/website/src/pages/de/subgraphs/guides/arweave.mdx +++ b/website/src/pages/de/subgraphs/guides/arweave.mdx @@ -1,111 +1,110 @@ --- -title: Building Subgraphs on Arweave +title: Erstellen von Subgraphen auf Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! +> Die Unterstützung von Arweave in Graph Node und Subgraph Studio befindet sich in der Beta-Phase: Bitte kontaktieren Sie uns auf [Discord](https://discord.gg/graphprotocol), wenn Sie Fragen zur Erstellung von Arweave-Subgraphen haben! -In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +In dieser Anleitung erfahren Sie, wie Sie Subgraphen erstellen und einsetzen, um die Arweave-Blockchain zu indizieren. -## What is Arweave? +## Was ist Arweave? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted.
+Das Arweave-Protokoll ermöglicht es Entwicklern, Daten dauerhaft zu speichern, und das ist der Hauptunterschied zwischen Arweave und IPFS, wobei IPFS die Eigenschaft der Dauerhaftigkeit fehlt und auf Arweave gespeicherte Dateien nicht geändert oder gelöscht werden können. -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check: +Arweave hat bereits zahlreiche Bibliotheken für die Integration des Protokolls in eine Reihe verschiedener Programmiersprachen erstellt. Für weitere Informationen können Sie nachsehen: - [Arwiki](https://arwiki.wiki/#/en/main) - [Arweave Resources](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## Was sind Subgraphen von Arweave? -The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). +The Graph ermöglicht es Ihnen, benutzerdefinierte offene APIs, sogenannte „Subgraphen“, zu erstellen. Subgraphen werden verwendet, um Indexierern (Serverbetreibern) mitzuteilen, welche Daten auf einer Blockchain indexiert und auf ihren Servern gespeichert werden sollen, damit Sie sie jederzeit mit [GraphQL](https://graphql.org/) abfragen können. -[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. +Der [Graph Node](https://github.com/graphprotocol/graph-node) ist nun in der Lage, Daten auf dem Arweave-Protokoll zu indizieren. Die aktuelle Integration indiziert nur Arweave als Blockchain (Blöcke und Transaktionen), sie indiziert noch nicht die gespeicherten Dateien.
-## Building an Arweave Subgraph +## Aufbau eines Arweave-Subgraphen -To be able to build and deploy Arweave Subgraphs, you need two packages: +Um Arweave-Subgraphen erstellen und bereitstellen zu können, benötigen Sie zwei Pakete: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` ab Version 0.30.2 - Dies ist ein Kommandozeilen-Tool zum Erstellen und Bereitstellen von Subgraphen. [Klicken Sie hier](https://www.npmjs.com/package/@graphprotocol/graph-cli), um es mit `npm` herunterzuladen. +2. `@graphprotocol/graph-ts` ab Version 0.27.0 - Dies ist eine Bibliothek von Subgraphen-spezifischen Typen. [Klicken Sie hier](https://www.npmjs.com/package/@graphprotocol/graph-ts) zum Herunterladen mit `npm`. -## Subgraph's components +## Komponenten des Subgraphen -There are three components of a Subgraph: +Ein Subgraph besteht aus drei Komponenten: ### 1. Manifest - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +Definiert die Datenquellen, die von Interesse sind, und wie sie verarbeitet werden sollen. Arweave ist eine neue Art von Datenquelle. ### 2. Schema - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +Hier legen Sie fest, welche Daten Sie nach der Indizierung Ihres Subgraphen mit GraphQL abfragen können möchten. Dies ist eigentlich ähnlich wie ein Modell für eine API, wobei das Modell die Struktur eines Request-Bodys definiert.
-The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +Die Anforderungen für Arweave-Subgraphen werden in der [bestehenden Dokumentation](/developing/creating-a-subgraph/#the-graphql-schema) behandelt. -### 3. AssemblyScript Mappings - `mapping.ts` +### 3. AssemblyScript-Mappings - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +Dies ist die Logik, die bestimmt, wie Daten abgerufen und gespeichert werden sollen, wenn jemand mit den Datenquellen interagiert, die Sie abhören. Die Daten werden übersetzt und auf der Grundlage des von Ihnen angegebenen Schemas gespeichert. -During Subgraph development there are two key commands: +Bei der Entwicklung von Subgraphen gibt es zwei wichtige Befehle: ``` -$ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +$ graph codegen # erzeugt Typen aus der im Manifest angegebenen Schemadatei +$ graph build # generiert Web Assembly aus den AssemblyScript-Dateien und bereitet alle Subgraph-Dateien in einem /build-Ordner vor ``` -## Subgraph Manifest Definition +## Subgraph-Manifest-Definition -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: +Das Subgraph-Manifest `subgraph.yaml` identifiziert die Datenquellen für den Subgraphen, die Auslöser von Interesse und die Funktionen, die als Reaktion auf diese Auslöser ausgeführt werden sollen. 
Im Folgenden finden Sie ein Beispiel für ein Subgraph-Manifest für einen Arweave-Subgraphen: ```yaml specVersion: 1.3.0 description: Arweave Blocks Indexing schema: - file: ./schema.graphql # link to the schema file + file: ./schema.graphql # Link zur Schemadatei dataSources: - kind: arweave name: arweave-blocks - network: arweave-mainnet # The Graph only supports Arweave Mainnet - source: - owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet - startBlock: 0 # set this to 0 to start indexing from chain genesis - mapping: + network: arweave-mainnet # The Graph unterstützt nur das Arweave-Mainnet + source: + owner: 'ID-OF-AN-OWNER' # Der öffentliche Schlüssel eines Arweave-Wallets + startBlock: 0 # Setzen Sie dies auf 0, um die Indizierung von der Kettenentstehung zu starten + mapping: apiVersion: 0.0.9 language: wasm/assemblyscript - file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + file: ./src/blocks.ts # Verweis auf die Datei mit den AssemblyScript-Mappings entities: - Block - Transaction blockHandlers: - - handler: handleBlock # the function name in the mapping file + - handler: handleBlock # der Funktionsname in der Mapping-Datei transactionHandlers: - - handler: handleTx # the function name in the mapping file + - handler: handleTx # der Funktionsname in der Mapping-Datei ``` -- Arweave Subgraphs introduce a new kind of data source (`arweave`) -- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Mit Arweave-Subgraphen wird eine neue Art von Datenquelle eingeführt (`arweave`) +- Das Netzwerk sollte einem Netzwerk auf dem hostenden Graph Node entsprechen.
In Subgraph Studio wird das Arweave-Mainnet als `arweave-mainnet` bezeichnet +- Arweave-Datenquellen führen ein optionales Feld source.owner ein, das den öffentlichen Schlüssel eines Arweave-Wallets darstellt -Arweave data sources support two types of handlers: +Arweave-Datenquellen unterstützen zwei Arten von Handlern: -- `blockHandlers` - Run on every new Arweave block. No source.owner is required. -- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` +- `blockHandlers` - Wird bei jedem neuen Arweave-Block ausgeführt. Es wird kein source.owner benötigt. +- `transactionHandlers` - Wird bei jeder Transaktion ausgeführt, bei der der `source.owner` der Datenquelle der Eigentümer ist. Derzeit ist ein Besitzer für `transactionHandlers` erforderlich. Wenn Benutzer alle Transaktionen verarbeiten wollen, sollten sie "" als `source.owner` angeben -> The source.owner can be the owner's address, or their Public Key. +> Als source.owner kann die Adresse des Eigentümers oder sein öffentlicher Schlüssel angegeben werden. +> +> Transaktionen sind die Bausteine des Arweave-Permaweb und sie sind Objekte, die von den Endbenutzern erstellt werden. +> +> Hinweis: [Irys (früher Bundlr)](https://irys.xyz/) Transaktionen werden noch nicht unterstützt. -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. +## Schema-Definition -> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. +Die Schemadefinition beschreibt die Struktur der entstehenden Subgraph-Datenbank und die Beziehungen zwischen den Entitäten. Dies ist unabhängig von der ursprünglichen Datenquelle. Weitere Details zur Subgraph-Schemadefinition finden Sie [hier](/developing/creating-a-subgraph/#the-graphql-schema).
-## Schema Definition +## AssemblyScript-Mappings -Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Die Handler für die Ereignisverarbeitung sind in [AssemblyScript](https://www.assemblyscript.org/) geschrieben. -## AssemblyScript Mappings - -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). - -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +Die Arweave-Indizierung führt Arweave-spezifische Datentypen in die [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) ein. ```tsx class Block { @@ -146,51 +145,51 @@ class Transaction { } ``` -Block handlers receive a `Block`, while transactions receive a `Transaction`. +Block-Handler erhalten einen `Block`, während Transaktionen einen `Transaction` erhalten. -Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). +Das Schreiben der Mappings eines Arweave-Subgraphen ist dem Schreiben der Mappings eines Ethereum-Subgraphen sehr ähnlich. Für weitere Informationen, klicken Sie [hier](/developing/creating-a-subgraph/#writing-mappings). -## Deploying an Arweave Subgraph in Subgraph Studio +## Einsatz von Subgraphen aus Arweave in Subgraph Studio -Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +Sobald Ihr Subgraph auf Ihrem Subgraph Studio Dashboard erstellt wurde, können Sie ihn mit dem CLI-Befehl `graph deploy` bereitstellen. 
```bash -graph deploy --access-token +graph deploy --access-token ``` -## Querying an Arweave Subgraph +## Abfrage eines Arweave-Subgraphen -The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +Der GraphQL-Endpunkt für Arweave Subgraphen wird durch die Schemadefinition bestimmt, mit der vorhandenen API-Schnittstelle. Bitte besuchen Sie die [GraphQL API Dokumentation](/subgraphs/querying/graphql-api/) für weitere Informationen. -## Example Subgraphs +## Beispiele von Subgraphen -Here is an example Subgraph for reference: +Hier ist ein Beispiel für einen Subgraphen als Referenz: -- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Beispiel-Subgraph für Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a Subgraph index Arweave and other chains? +### Kann ein Subgraph Arweave und andere Ketten indizieren? -No, a Subgraph can only support data sources from one chain/network. +Nein, ein Subgraph kann nur Datenquellen von einer Kette oder einem Netzwerk unterstützen. -### Can I index the stored files on Arweave? +### Kann ich die gespeicherten Dateien auf Arweave indizieren? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +Derzeit indiziert The Graph Arweave nur als Blockchain (seine Blöcke und Transaktionen). -### Can I identify Bundlr bundles in my Subgraph? +### Kann ich Bundlr-„Bundles“ in meinem Subgraph identifizieren? -This is not currently supported. +Dies wird derzeit nicht unterstützt. -### How can I filter transactions to a specific account? +### Wie kann ich Transaktionen nach einem bestimmten Konto filtern? -The source.owner can be the user's public key or account address. 
+Der source.owner kann der öffentliche Schlüssel oder die Kontoadresse des Benutzers sein. -### What is the current encryption format? +### Was ist das aktuelle Verschlüsselungsformat? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Daten werden im Allgemeinen als Bytes an die Mappings übergeben, die, wenn sie direkt gespeichert werden, im Subgraphen in einem `hex`-Format zurückgegeben werden (z.B. Block- und Transaktions-Hashes). Möglicherweise möchten Sie in Ihren Mappings in ein `base64`- oder `base64 URL`-sicheres Format konvertieren, um dem zu entsprechen, was in Block-Explorern wie [Arweave Explorer](https://viewblock.io/arweave/) angezeigt wird. -The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: +Die folgende Hilfsfunktion `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` kann verwendet werden und wird zu `graph-ts` hinzugefügt: ``` const base64Alphabet = [ diff --git a/website/src/pages/de/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/de/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..90d94eed5242 100644 --- a/website/src/pages/de/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/de/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains.
+## Überblick -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Voraussetzungen + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. 
-List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +oder ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? 
Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Schlussfolgerung -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/de/subgraphs/guides/enums.mdx b/website/src/pages/de/subgraphs/guides/enums.mdx index 9f55ae07c54b..c01b20cac51b 100644 --- a/website/src/pages/de/subgraphs/guides/enums.mdx +++ b/website/src/pages/de/subgraphs/guides/enums.mdx @@ -1,20 +1,20 @@ --- -title: Categorize NFT Marketplaces Using Enums +title: NFT-Marktplätze mit Enums kategorisieren --- -Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. +Verwenden Sie Enums, um Ihren Code sauberer und weniger fehleranfällig zu machen. Hier finden Sie ein vollständiges Beispiel für die Verwendung von Enums auf NFT-Marktplätzen. -## What are Enums? +## Was sind Enums? -Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. +Enums oder Aufzählungstypen sind ein spezieller Datentyp, mit dem Sie eine Reihe von bestimmten, zulässigen Werten definieren können. -### Example of Enums in Your Schema +### Beispiel für Enums in Ihrem Schema -If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +Wenn Sie einen Subgraphen erstellen, um den Besitzverlauf von Token auf einem Marktplatz zu verfolgen, kann jeder Token verschiedene Besitzverhältnisse durchlaufen, z. B. `OriginalOwner`, `SecondOwner` und `ThirdOwner`. 
Durch die Verwendung von Enums können Sie diese spezifischen Besitzverhältnisse definieren und sicherstellen, dass nur vordefinierte Werte zugewiesen werden. -You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. +Sie können Enums in Ihrem Schema definieren, und sobald sie definiert sind, können Sie die String-Darstellung der Enum-Werte verwenden, um ein Enum-Feld auf einer Entität zu setzen. -Here's what an enum definition might look like in your schema, based on the example above: +So könnte eine Enum-Definition in Ihrem Schema aussehen, basierend auf dem obigen Beispiel: ```graphql enum TokenStatus { @@ -24,109 +24,109 @@ enum TokenStatus { } ``` -This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. +Das heißt, wenn Sie den Typ `TokenStatus` in Ihrem Schema verwenden, erwarten Sie, dass er genau einen der vordefinierten Werte annimmt: `OriginalOwner`, `SecondOwner` oder `ThirdOwner`, um Konsistenz und Gültigkeit zu gewährleisten. -To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). +Um mehr über Enums zu erfahren, lesen Sie [Erstellen eines Subgraphen](/developing/creating-a-subgraph/#enums) und [GraphQL-Dokumentation](https://graphql.org/learn/schema/#enumeration-types). -## Benefits of Using Enums +## Vorteile der Verwendung von Enums -- **Clarity:** Enums provide meaningful names for values, making data easier to understand. -- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. -- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. 
+- **Klarheit:** Enums bieten aussagekräftige Namen für Werte, wodurch die Daten leichter zu verstehen sind. +- **Validierung:** Enums erzwingen strenge Wertedefinitionen, die ungültige Dateneinträge verhindern. +- **Pflegeleichtigkeit:** Wenn Sie Kategorien ändern oder neue hinzufügen müssen, können Sie dies mit Hilfe von Enums gezielt tun. -### Without Enums +### Ohne Enums -If you choose to define the type as a string instead of using an Enum, your code might look like this: +Wenn Sie sich dafür entscheiden, den Typ als String zu definieren, anstatt eine Enum zu verwenden, könnte Ihr Code wie folgt aussehen: ```graphql type Token @entity { id: ID! tokenId: BigInt! - owner: Bytes! # Owner of the token - tokenStatus: String! # String field to track token status + owner: Bytes! # Eigentümer des Tokens + tokenStatus: String! # String-Feld zur Verfolgung des Token-Status timestamp: BigInt! } ``` -In this schema, `TokenStatus` is a simple string with no specific, allowed values. +In diesem Schema ist `TokenStatus` eine einfache Zeichenfolge ohne spezifische, zulässige Werte. -#### Why is this a problem? +#### Warum ist das ein Problem? -- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. -- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. +- Es gibt keine Beschränkung der `TokenStatus`-Werte, so dass jede beliebige Zeichenfolge versehentlich zugewiesen werden kann. Das macht es schwer sicherzustellen, dass nur gültige Status wie `OriginalOwner`, `SecondOwner` oder `ThirdOwner` gesetzt werden. +- Es ist leicht, Tippfehler zu machen, wie z. B. `Orgnalowner` anstelle von `OriginalOwner`, was die Daten und mögliche Abfragen unzuverlässig macht. 
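The typo risk described above can also be shown outside the schema. A minimal TypeScript sketch (hypothetical names — this is not code generated by graph-ts) of validating a status against a fixed set of allowed values:

```typescript
// Sketch: contrasting a free-form string field with an enum-like set of
// allowed token statuses. With a plain string field, a typo such as
// "Orgnalowner" would be stored silently; a validated set rejects it.
const TOKEN_STATUSES = ["OriginalOwner", "SecondOwner", "ThirdOwner"] as const;
type TokenStatus = (typeof TOKEN_STATUSES)[number];

function parseTokenStatus(value: string): TokenStatus {
  if ((TOKEN_STATUSES as readonly string[]).includes(value)) {
    return value as TokenStatus;
  }
  throw new Error(`invalid TokenStatus: ${value}`);
}

console.log(parseTokenStatus("OriginalOwner")); // prints "OriginalOwner"
```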
-### With Enums +### Mit Enums -Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used. +Anstelle der Zuweisung von Freiform-Strings können Sie ein Enum für `TokenStatus` mit spezifischen Werten definieren: `OriginalOwner`, `SecondOwner`, oder `ThirdOwner`. Die Verwendung einer Aufzählung stellt sicher, dass nur erlaubte Werte verwendet werden. -Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. +Enums bieten Typsicherheit, minimieren das Risiko von Tippfehlern und gewährleisten konsistente und zuverlässige Ergebnisse. -## Defining Enums for NFT Marketplaces +## Definieren von Enums für NFT-Marktplätze -> Note: The following guide uses the CryptoCoven NFT smart contract. +> Hinweis: Die folgende Anleitung verwendet den CryptoCoven NFT Smart Contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: +Um Enums für die verschiedenen Marktplätze, auf denen NFTs gehandelt werden, zu definieren, verwenden Sie Folgendes in Ihrem Subgraph-Schema: ```gql -# Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) +# Enum für Marktplätze, mit denen der CryptoCoven-Vertrag interagiert (wahrscheinlich ein Trade/Mint) enum Marketplace { - OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the marketplace - OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace - SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace - LooksRare # Represents when a CryptoCoven NFT is traded on the LookRare marketplace - # ...and other marketplaces + OpenSeaV1 # Repräsentiert, wenn ein CryptoCoven NFT auf dem Marktplatz gehandelt wird + OpenSeaV2 # Stellt dar, wenn ein CryptoCoven NFT auf dem OpenSeaV2-Marktplatz gehandelt wird + SeaPort # Stellt dar, 
wenn ein CryptoCoven NFT auf dem SeaPort-Marktplatz gehandelt wird + LooksRare # Stellt dar, wenn ein CryptoCoven NFT auf dem LookRare-Marktplatz gehandelt wird. + # ...und andere Marktplätze } ``` -## Using Enums for NFT Marketplaces +## Verwendung von Enums für NFT-Marktplätze -Once defined, enums can be used throughout your Subgraph to categorize transactions or events. +Einmal definiert, können Enums in Ihrem gesamten Subgraphen verwendet werden, um Transaktionen oder Ereignisse zu kategorisieren. -For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. +Bei der Protokollierung von NFT-Verkäufen können Sie beispielsweise mit Hilfe des Enums den Marktplatz angeben, der an dem Geschäft beteiligt ist. -### Implementing a Function for NFT Marketplaces +### Implementieren einer Funktion für NFT-Marktplätze -Here's how you can implement a function to retrieve the marketplace name from the enum as a string: +So können Sie eine Funktion implementieren, die den Namen des Marktplatzes als String aus der Aufzählung abruft: ```ts export function getMarketplaceName(marketplace: Marketplace): string { - // Using if-else statements to map the enum value to a string + // Verwendung von if-else-Anweisungen, um den Enum-Wert auf eine Zeichenkette abzubilden if (marketplace === Marketplace.OpenSeaV1) { - return 'OpenSeaV1' // If the marketplace is OpenSea, return its string representation + return 'OpenSeaV1' // Wenn der Marktplatz OpenSea ist, wird seine String-Repräsentation zurückgegeben } else if (marketplace === Marketplace.OpenSeaV2) { return 'OpenSeaV2' } else if (marketplace === Marketplace.SeaPort) { - return 'SeaPort' // If the marketplace is SeaPort, return its string representation + return 'SeaPort' // Wenn der Marktplatz SeaPort ist, wird seine String-Repräsentation zurückgegeben } else if (marketplace === Marketplace.LooksRare) { - return 'LooksRare' // If the marketplace is LooksRare, return its string 
representation - // ... and other market places + return 'LooksRare' // Wenn der Marktplatz LooksRare ist, wird seine String-Repräsentation zurückgegeben + // ... und andere Marktplätze } } ``` -## Best Practices for Using Enums +## Best Practices für die Verwendung von Enums -- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability. -- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth. -- **Documentation:** Add comments to enum to clarify their purpose and usage. +- **Konsistente Benennung:** Verwenden Sie klare, beschreibende Namen für Enum-Werte, um die Lesbarkeit zu verbessern. +- **Zentrale Verwaltung:** Halten Sie Enums der Konsistenz halber in einer einzigen Datei. Dies erleichtert die Aktualisierung von Enums und stellt sicher, dass sie die einzige Quelle der Wahrheit sind. +- **Dokumentation:** Fügen Sie Enums Kommentare hinzu, um deren Zweck und Verwendung zu verdeutlichen. -## Using Enums in Queries +## Verwendung von Enums in Abfragen -Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values. +Enums in Abfragen helfen Ihnen, die Datenqualität zu verbessern und Ihre Ergebnisse leichter zu interpretieren. Sie fungieren als Filter und Antwortelemente, sorgen für Konsistenz und reduzieren Fehler bei Marktplatzwerten. -**Specifics** +**Besonderheiten** -- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces. +- **Filtern mit Enums:** Enums bieten klare Filter, mit denen Sie bestimmte Marktplätze ein- oder ausschließen können.
+- **Enums in Antworten:** Enums garantieren, dass nur anerkannte Marktplatznamen zurückgegeben werden, wodurch die Ergebnisse standardisiert und genau sind. -### Sample Queries +### Beispiele für Abfragen -#### Query 1: Account With The Highest NFT Marketplace Interactions +#### Abfrage 1: Konto mit den höchsten NFT-Marktplatzinteraktionen -This query does the following: +Diese Abfrage führt Folgendes aus: -- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity. -- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. +- Es findet das Konto mit den meisten eindeutigen NFT-Marktplatzinteraktionen, was sich hervorragend für die Analyse von marktplatzübergreifenden Aktivitäten eignet. +- Das Feld marketplaces verwendet das Marktplatz-Enum, um konsistente und validierte Marktplatzwerte in der Antwort zu gewährleisten. ```gql { @@ -137,15 +137,15 @@ This query does the following: totalSpent uniqueMarketplacesCount marketplaces { - marketplace # This field returns the enum value representing the marketplace + marketplace # Dieses Feld gibt den Enum-Wert für den Marktplatz zurück } } } ``` -#### Returns +#### Rückgabe -This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: +Diese Antwort enthält Kontodetails und eine Liste eindeutiger Marktplatz-Interaktionen mit Enum-Werten für standardisierte Klarheit: ```gql { @@ -186,12 +186,12 @@ This response provides account details and a list of unique marketplace interact } ``` -#### Query 2: Most Active Marketplace for CryptoCoven transactions +#### Abfrage 2: Aktivste Marktplätze für CryptoCoven-Transaktionen -This query does the following: +Diese Abfrage führt Folgendes aus: -- It identifies the marketplace with the highest volume of CryptoCoven transactions. 
-- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. +- Sie identifiziert den Marktplatz mit dem höchsten Transaktionsvolumen von CryptoCoven. +- Sie verwendet das Marktplatz-Enum, um sicherzustellen, dass nur gültige Marktplatztypen in der Antwort erscheinen, was die Zuverlässigkeit und Konsistenz Ihrer Daten erhöht. ```gql { @@ -202,9 +202,9 @@ This query does the following: } ``` -#### Result 2 +#### Ergebnis 2 -The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: +Die erwartete Antwort enthält den Marktplatz und die entsprechende Anzahl der Transaktionen, wobei das Enum zur Angabe des Marktplatztyps verwendet wird: ```gql { @@ -219,12 +219,12 @@ The expected response includes the marketplace and the corresponding transaction } ``` -#### Query 3: Marketplace Interactions with High Transaction Counts +#### Abfrage 3: Marktplatz-Interaktionen mit hohen Transaktionszahlen -This query does the following: +Diese Abfrage führt Folgendes aus: -- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. -- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. +- Sie ermittelt die vier wichtigsten Marktplätze mit mehr als 100 Transaktionen, wobei „unbekannte“ Marktplätze ausgeschlossen sind. +- Sie verwendet Enums als Filter, um sicherzustellen, dass nur gültige Marktplatztypen einbezogen werden, was die Genauigkeit erhöht. 
```gql { @@ -240,9 +240,9 @@ This query does the following: } ``` -#### Result 3 +#### Ergebnis 3 -Expected output includes the marketplaces that meet the criteria, each represented by an enum value: +Die erwartete Ausgabe umfasst die Marktplätze, die die Kriterien erfüllen und jeweils durch einen Enum-Wert dargestellt werden: ```gql { @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## Zusätzliche Ressourcen -For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). +Weitere Informationen finden Sie im [Repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums) dieses Leitfadens. diff --git a/website/src/pages/de/subgraphs/guides/grafting.mdx b/website/src/pages/de/subgraphs/guides/grafting.mdx index d9abe0e70d2a..a9ca6f6eda54 100644 --- a/website/src/pages/de/subgraphs/guides/grafting.mdx +++ b/website/src/pages/de/subgraphs/guides/grafting.mdx @@ -1,56 +1,56 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: Ersetzen Sie einen Vertrag und bewahren Sie seine Historie mit Grafting --- -In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. +In dieser Anleitung erfahren Sie, wie Sie neue Subgraphen durch Aufpfropfen bestehender Subgraphen erstellen und einsetzen können. -## What is Grafting? +## Was ist Grafting? -Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. +Beim Grafting werden die Daten eines bestehenden Subgraphen wiederverwendet und erst ab einem späteren Block indiziert.
Dies ist während der Entwicklung nützlich, um einfache Fehler in den Mappings schnell zu beheben oder um einen bestehenden Subgraphen vorübergehend wieder zum Laufen zu bringen, nachdem er ausgefallen ist. Es kann auch verwendet werden, wenn ein Feature zu einem Subgraphen hinzugefügt wird, dessen Indizierung von Grund auf lange dauert. -The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: +Der aufgepfropfte Subgraf kann ein GraphQL-Schema verwenden, das nicht identisch mit dem des Basis-Subgrafen ist, sondern lediglich mit diesem kompatibel ist. Es muss ein eigenständig gültiges Subgrafen-Schema sein, darf aber auf folgende Weise vom Schema des Basis-Subgrafen abweichen: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Es fügt Entitätstypen hinzu oder entfernt sie +- Es entfernt Attribute von Entitätstypen +- Es fügt Entitätstypen nullfähige Attribute hinzu +- Es wandelt Nicht-Nullable-Attribute in Nullable-Attribute um +- Es fügt Aufzählungen Werte hinzu +- Es fügt Interfaces hinzu oder entfernt sie +- Es ändert, für welche Entitätstypen ein Interface implementiert wird -For more information, you can check: +Weitere Informationen finden Sie unter: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
+In diesem Tutorial werden wir einen grundlegenden Anwendungsfall behandeln. Wir werden einen bestehenden Vertrag durch einen identischen Vertrag (mit einer neuen Adresse, aber demselben Code) ersetzen. Anschließend wird der bestehende Subgraph auf den „Basis“-Subgraphen verpflanzt, der den neuen Vertrag verfolgt. -## Important Note on Grafting When Upgrading to the Network +## Wichtiger Hinweis zum Grafting beim Upgrade auf das Netzwerk -> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network +> **Vorsicht**: Es wird empfohlen, das Grafting nicht für Subgraphen zu verwenden, die in The Graph Network veröffentlicht wurden. -### Why Is This Important? +### Warum ist das wichtig? -Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. +Das Grafting ist eine leistungsstarke Funktion, mit der Sie einen Subgraphen auf einen anderen „graften“ können, wodurch historische Daten aus dem bestehenden Subgraphen in eine neue Version übertragen werden. Es ist nicht möglich, einen Subgraphen aus The Graph Network zurück in Subgraph Studio zu übertragen. -### Best Practices +### Bewährte Praktiken -**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. +**Erstmalige Migration**: Wenn Sie Ihren Subgraphen zum ersten Mal im dezentralen Netzwerk einsetzen, tun Sie dies ohne Grafting. Stellen Sie sicher, dass der Subgraph stabil ist und wie erwartet funktioniert. -**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
+**Nachfolgende Updates**: Sobald Ihr Subgraph live und stabil im dezentralen Netzwerk ist, können Sie Grafting für zukünftige Versionen verwenden, um den Übergang reibungsloser zu gestalten und historische Daten zu erhalten. -By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +Wenn Sie sich an diese Richtlinien halten, minimieren Sie die Risiken und sorgen für einen reibungsloseren Migrationsprozess. -## Building an Existing Subgraph +## Erstellen eines vorhandenen Subgrafen -Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: +Die Erstellung von Subgraphen ist ein wesentlicher Bestandteil von The Graph, der [hier](/subgraphs/quick-start/) näher beschrieben wird. Um den bestehenden Subgraphen, der in diesem Tutorial verwendet wird, zu bauen und einzusetzen, wird das folgende Repo zur Verfügung gestellt: -- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) +- [Subgraph-Beispiel-Repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Hinweis: Der im Subgraphen verwendete Vertrag wurde dem folgenden [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit) entnommen. -## Subgraph Manifest Definition +## Subgraph-Manifest-Definition -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: +Das Subgraph-Manifest `subgraph.yaml` identifiziert die Datenquellen für den Subgraphen, die Auslöser von Interesse und die Funktionen, die als Reaktion auf diese Auslöser ausgeführt werden sollen.
Im Folgenden finden Sie ein Beispiel für ein Subgraph-Manifest, das Sie verwenden werden: ```yaml specVersion: 1.3.0 @@ -79,33 +79,33 @@ dataSources: file: ./src/lock.ts ``` -- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` -- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. +- Die Datenquelle `Lock` ist die Abi- und Vertragsadresse, die wir erhalten, wenn wir den Vertrag kompilieren und einsetzen +- Das Netzwerk sollte einem indizierten Netzwerk entsprechen, das abgefragt wird. Da wir mit dem Sepolia-Testnetz arbeiten, lautet das Netzwerk `sepolia`. +- Der Abschnitt `mapping` definiert die Auslöser von Interesse und die Funktionen, die als Reaktion auf diese Auslöser ausgeführt werden sollen. In diesem Fall warten wir auf das Ereignis `Withdrawal` und rufen die Funktion `handleWithdrawal` auf, wenn es ausgelöst wird. -## Grafting Manifest Definition +## Grafting-Manifest-Definition -Grafting requires adding two new items to the original Subgraph manifest: +Beim Grafting müssen dem ursprünglichen Subgraph-Manifest zwei neue Elemente hinzugefügt werden: ```yaml --- features: - grafting # feature name graft: - base: Qm... # Subgraph ID of base Subgraph - block: 5956000 # block number + base: Qm... # Subgraph ID des Basis-Subgraphen + block: 5956000 # Blocknummer ``` -- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. 
The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. +- `features:` ist eine Liste aller verwendeten [Funktionsnamen](/developing/creating-a-subgraph/#experimental-features). +- `graft:` ist eine Abbildung des `base`-Subgraphen und des Blocks, auf den veredelt werden soll. Der `block` ist die Blocknummer, ab der die Indizierung beginnen soll. The Graph kopiert die Daten des Basis-Subgraphen bis einschließlich des angegebenen Blocks und fährt dann mit der Indizierung des neuen Subgraphen von diesem Block an fort. -The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting +Die `base`- und `block`-Werte können durch das Bereitstellen von zwei Subgraphen ermittelt werden: einem für die Basisindizierung und einem mit Grafting -## Deploying the Base Subgraph +## Bereitstellen des Basis-Subgrafen -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground +1. Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und erstellen Sie einen Subgraphen im Sepolia-Testnetz mit dem Namen `graft-example` +2. Befolgen Sie die Anweisungen im Abschnitt `AUTH & DEPLOY` auf Ihrer Subgraph-Seite im Ordner `graft-example` aus dem Repo +3. Wenn Sie fertig sind, überprüfen Sie, ob der Subgraf richtig indiziert wird.
Wenn Sie den folgenden Befehl in The Graph Playground ausführen ```graphql { @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +Es wird etwa Folgendes zurückgegeben: ``` { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. +Sobald Sie sich vergewissert haben, dass die Indizierung des Subgraphen ordnungsgemäß funktioniert, können Sie den Subgraphen mit Grafting schnell aktualisieren. -## Deploying the Grafting Subgraph +## Bereitstellen des Grafting-Subgraphen -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +Die subgraph.yaml des Graft-Ersatzes wird eine neue Vertragsadresse haben. Dies könnte passieren, wenn Sie Ihre DApp aktualisieren, einen Vertrag erneut bereitstellen usw. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground +1.
Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und erstellen Sie einen Subgraphen im Sepolia-Testnetz mit dem Namen `graft-replacement` +2. Erstellen Sie ein neues Manifest. Die `subgraph.yaml` für `graph-replacement` enthält eine andere Vertragsadresse und neue Informationen darüber, wie sie gegraft werden soll. Dies sind der `block` des [letzten emittierten Ereignisses](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) des alten Vertrags, das für Sie relevant ist, und die `base` des alten Subgraphen. Die `base`-Subgraph-ID ist die `Deployment ID` Ihres ursprünglichen `graph-example`-Subgraphen. Sie können diese in Subgraph Studio finden. +3. Folgen Sie den Anweisungen im Abschnitt `AUTH & DEPLOY` auf Ihrer Subgraph-Seite im Ordner `graft-replacement` aus dem Repo +4. Wenn Sie fertig sind, überprüfen Sie, ob der Subgraph richtig indiziert wird. Wenn Sie den folgenden Befehl in The Graph Playground ausführen ```graphql { @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +Es sollte Folgendes zurückgeben: ``` { @@ -185,18 +185,18 @@ It should return the following: } ``` -You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.
+Sie können sehen, dass der Subgraph `graft-replacement` ältere Daten von `graph-example` und neuere Daten von der neuen Vertragsadresse indiziert. Der ursprüngliche Vertrag hat zwei `Withdrawal`-Ereignisse ausgelöst, [Ereignis 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) und [Ereignis 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). Der neue Vertrag hat ein `Withdrawal`-Ereignis ausgelöst, nämlich [Ereignis 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). Die beiden zuvor indizierten Transaktionen (Ereignis 1 und 2) und die neue Transaktion (Ereignis 3) wurden im Subgraphen `graft-replacement` zusammengefasst. -Congrats! You have successfully grafted a Subgraph onto another Subgraph. +Herzlichen Glückwunsch! Sie haben erfolgreich einen Subgraphen auf einen anderen Subgraphen gegraft. -## Additional Resources +## Zusätzliche Ressourcen -If you want more experience with grafting, here are a few examples for popular contracts: +Wenn Sie mehr Erfahrung mit dem Grafting sammeln möchten, finden Sie hier einige Beispiele für beliebte Verträge: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) - [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), -To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources.
Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results +Um ein noch besserer Graph-Experte zu werden, sollten Sie sich mit anderen Methoden zur Handhabung von Änderungen in den zugrunde liegenden Datenquellen vertraut machen. Alternativen wie [Datenquellenvorlagen](/developing/creating-a-subgraph/#data-source-templates) können ähnliche Ergebnisse erzielen -> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) +> Hinweis: Vieles in diesem Artikel wurde aus dem zuvor veröffentlichten [Arweave-Artikel](/subgraphs/cookbook/arweave/) übernommen. diff --git a/website/src/pages/de/subgraphs/guides/near.mdx b/website/src/pages/de/subgraphs/guides/near.mdx index e78a69eb7fa2..3bb7e5af4796 100644 --- a/website/src/pages/de/subgraphs/guides/near.mdx +++ b/website/src/pages/de/subgraphs/guides/near.mdx @@ -1,79 +1,79 @@ --- -title: Building Subgraphs on NEAR +title: Aufbau von Subgraphen auf NEAR --- -This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +Diese Anleitung ist eine Einführung in die Erstellung von Subgraphen, die Smart Contracts auf der [NEAR-Blockchain](https://docs.near.org/) indizieren. -## What is NEAR? +## Was ist NEAR? -[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. +[NEAR](https://near.org/) ist eine Smart-Contract-Plattform zur Erstellung dezentraler Anwendungen. Besuchen Sie die [offizielle Dokumentation](https://docs.near.org/concepts/basics/protocol) für weitere Informationen. -## What are NEAR Subgraphs? +## Was sind NEAR-Subgraphen?
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. +The Graph gibt Entwicklern Werkzeuge an die Hand, um Blockchain-Ereignisse zu verarbeiten und die daraus resultierenden Daten über eine GraphQL-API, die individuell als Subgraph bezeichnet wird, leicht verfügbar zu machen. Der [Graph Node](https://github.com/graphprotocol/graph-node) ist nun in der Lage, NEAR-Ereignisse zu verarbeiten, was bedeutet, dass NEAR-Entwickler nun Subgraphen erstellen können, um ihre Smart Contracts zu indizieren. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: +Subgraphen sind ereignisbasiert, was bedeutet, dass sie auf Onchain-Ereignisse warten und diese dann verarbeiten. Derzeit werden zwei Arten von Handlern für NEAR-Subgraphen unterstützt: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- Blockhandler: diese werden bei jedem neuen Block ausgeführt +- Empfangshandler: werden jedes Mal ausgeführt, wenn eine Nachricht auf einem bestimmten Konto ausgeführt wird -[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): +[Aus der NEAR-Dokumentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> Eine Quittung ist das einzige handlungsfähige Objekt im System.
Wenn wir auf der NEAR-Plattform von der „Verarbeitung einer Transaktion“ sprechen, bedeutet dies letztendlich, dass an einem bestimmten Punkt „Quittungen angewendet werden“. -## Building a NEAR Subgraph +## Aufbau eines NEAR-Subgraphen -`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. +`@graphprotocol/graph-cli` ist ein Kommandozeilen-Werkzeug zum Erstellen und Bereitstellen von Subgraphen. -`@graphprotocol/graph-ts` is a library of Subgraph-specific types. +`@graphprotocol/graph-ts` ist eine Bibliothek mit subgraphspezifischen Typen. -NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +Die NEAR-Subgraph-Entwicklung erfordert `graph-cli` ab Version `0.23.0` und `graph-ts` ab Version `0.23.0`. -> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. +> Der Aufbau eines NEAR-Subgraphen ist dem Aufbau eines Subgraphen, der Ethereum indiziert, sehr ähnlich. -There are three aspects of Subgraph definition: +Bei der Definition von Subgraphen gibt es drei Aspekte: -**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** das Subgraph-Manifest, das die interessierenden Datenquellen und deren Verarbeitung definiert. NEAR ist eine neue `kind` (Art) von Datenquelle. -**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** eine Schemadatei, die definiert, welche Daten für Ihren Subgraphen gespeichert werden und wie sie über GraphQL abgefragt werden können. Die Anforderungen für NEAR-Subgraphen werden in [der bestehenden Dokumentation](/developing/creating-a-subgraph/#the-graphql-schema) behandelt.
-**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript-Mappings:** [AssemblyScript-Code](/subgraphs/developing/creating/graph-ts/api/), der die Ereignisdaten in die in Ihrem Schema definierten Entitäten übersetzt. Die NEAR-Unterstützung führt NEAR-spezifische Datentypen und neue JSON-Parsing-Funktionen ein. -During Subgraph development there are two key commands: +Bei der Entwicklung von Subgraphen gibt es zwei wichtige Befehle: ```bash -$ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +$ graph codegen # erzeugt Typen aus der im Manifest angegebenen Schemadatei +$ graph build # generiert Web Assembly aus den AssemblyScript-Dateien und bereitet alle Subgraph-Dateien in einem /build-Ordner vor ``` -### Subgraph Manifest Definition +### Subgraph-Manifest-Definition -The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: +Das Subgraph-Manifest (`subgraph.yaml`) identifiziert die Datenquellen für den Subgraphen, die Auslöser von Interesse und die Funktionen, die als Reaktion auf diese Auslöser ausgeführt werden sollen. 
Im Folgenden finden Sie ein Beispiel für ein Subgraph-Manifest für einen NEAR-Subgraphen: ```yaml specVersion: 1.3.0 schema: - file: ./src/schema.graphql # link to the schema file + file: ./src/schema.graphql # Verweis auf die Schemadatei dataSources: - kind: near network: near-mainnet source: - account: app.good-morning.near # This data source will monitor this account - startBlock: 10662188 # Required for NEAR + account: app.good-morning.near # Diese Datenquelle wird dieses Konto überwachen + startBlock: 10662188 # Erforderlich für NEAR mapping: apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - - handler: handleNewBlock # the function name in the mapping file + - handler: handleNewBlock # der Funktionsname in der Mapping-Datei receiptHandlers: - - handler: handleReceipt # the function name in the mapping file - file: ./src/mapping.ts # link to the file with the Assemblyscript mappings + - handler: handleReceipt # der Funktionsname in der Mapping-Datei + file: ./src/mapping.ts # Verweis auf die Datei mit den Assemblyscript-Mappings ``` -- NEAR Subgraphs introduce a new `kind` of data source (`near`) -- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` -- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. -- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. 
+- NEAR-Subgraphen führen eine neue `kind` (Art) von Datenquelle ein (`near`) +- Das `network` sollte einem Netz auf dem hostenden Graph Node entsprechen. In Subgraph Studio ist das Mainnet von NEAR `near-mainnet` und das Testnetz von NEAR `near-testnet` +- NEAR-Datenquellen führen ein optionales Feld `source.account` ein, das eine von Menschen lesbare ID ist, die einem [NEAR-Konto](https://docs.near.org/concepts/protocol/account-model) entspricht. Dies kann ein Konto oder ein Unterkonto sein. +- NEAR-Datenquellen führen ein alternatives optionales Feld `source.accounts` ein, das optionale Suffixe und Präfixe enthält. Es muss mindestens ein Präfix oder Suffix angegeben werden; sie stimmen mit jedem Konto überein, das mit einem der aufgelisteten Werte beginnt bzw. endet. Das folgende Beispiel würde passen: `[app|good].*[morning.near|morning.testnet]`. Wenn nur eine Liste von Präfixen oder Suffixen erforderlich ist, kann das andere Feld weggelassen werden. ```yaml accounts: @@ -85,20 +85,20 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +NEAR-Datenquellen unterstützen zwei Arten von Handlern: -- `blockHandlers`: run on every new NEAR block. No `source.account` is required. -- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). +- `blockHandlers`: werden bei jedem neuen NEAR-Block ausgeführt. Es ist kein `source.account` erforderlich. +- `receiptHandlers`: werden bei jeder Quittung ausgeführt, bei der das `source.account` der Datenquelle der Empfänger ist. Beachten Sie, dass nur exakte Übereinstimmungen verarbeitet werden ([Unterkonten](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) müssen als unabhängige Datenquellen hinzugefügt werden).
-### Schema Definition +### Schema-Definition -Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Die Schemadefinition beschreibt die Struktur der entstehenden Subgraph-Datenbank und die Beziehungen zwischen den Entitäten. Dies ist unabhängig von der ursprünglichen Datenquelle. Weitere Details zur Subgraph-Schemadefinition finden Sie [hier](/developing/creating-a-subgraph/#the-graphql-schema). -### AssemblyScript Mappings +### AssemblyScript-Mappings -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). +Die Handler für die Ereignisverarbeitung sind in [AssemblyScript](https://www.assemblyscript.org/) geschrieben. -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +Die NEAR-Indizierung führt NEAR-spezifische Datentypen in die [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) ein. 
```typescript @@ -109,7 +109,7 @@ class ExecutionOutcome { logs: Array, receiptIds: Array, tokensBurnt: BigInt, executorId: string, } class ActionReceipt { @@ -125,7 +125,7 @@ class ActionReceipt { class BlockHeader { height: u64, - prevHeight: u64,// Always zero when version < V3 + prevHeight: u64,// Immer Null, wenn Version < V3 epochId: Bytes, nextEpochId: Bytes, chunksIncluded: u64, @@ -148,7 +148,7 @@ class ChunkHeader { } class Block { author: string, header: BlockHeader, chunks: Array, } @@ -160,36 +160,36 @@ class ReceiptWithOutcome { } ``` -These types are passed to block & receipt handlers: +Diese Typen werden an Block- und Quittungshandler weitergegeben: -- Block handlers will receive a `Block` -- Receipt handlers will receive a `ReceiptWithOutcome` +- Block-Handler erhalten einen `Block` +- Empfangshandler erhalten einen `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. +Ansonsten ist der Rest der [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) für NEAR-Subgraph-Entwickler während der Mapping-Ausführung verfügbar. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. +Dazu gehört eine neue JSON-Parsing-Funktion - Logs auf NEAR werden häufig als stringifizierte JSONs ausgegeben. Eine neue Funktion `json.fromString(...)` ist als Teil der [JSON-API](/subgraphs/developing/creating/graph-ts/api/#json-api) verfügbar, damit Entwickler diese Protokolle einfach verarbeiten können. -## Deploying a NEAR Subgraph +## Bereitstellen eines NEAR-Subgraphen -Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing.
NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Sobald Sie einen Subgraphen erstellt haben, ist es an der Zeit, ihn zur Indizierung auf einem Graph Node bereitzustellen. NEAR-Subgraphen können auf jedem Graph Node `>=v0.26.x` bereitgestellt werden (diese Version wurde noch nicht getaggt und freigegeben). -Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: +Subgraph Studio und der Upgrade-Indexer auf The Graph Network unterstützen derzeit die Indizierung von NEAR Mainnet und Testnet in der Betaphase, mit den folgenden Netzwerknamen: - `near-mainnet` - `near-testnet` -More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +Weitere Informationen zum Erstellen und Bereitstellen von Subgraphen in Subgraph Studio finden Sie [hier](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". +Als kurze Einführung - der erste Schritt ist das „Erstellen“ Ihres Subgraphen - dies muss nur einmal gemacht werden. In Subgraph Studio können Sie dies über [Ihr Dashboard](https://thegraph.com/studio/) tun: „Einen Subgraphen erstellen“.
-Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: +Sobald Ihr Subgraph erstellt wurde, können Sie ihn mit dem CLI-Befehl `graph deploy` bereitstellen: ```sh -$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # erstellt einen Subgraphen auf einem lokalen Graph-Knoten (bei Subgraph Studio wird dies über die Benutzeroberfläche erledigt) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # lädt die Build-Dateien auf einen angegebenen IPFS-Endpunkt hoch und stellt den Subgraphen dann auf der Grundlage des IPFS-Hashs des Manifests auf einem angegebenen Graph-Knoten bereit ``` -The node configuration will depend on where the Subgraph is being deployed. +Die Knotenkonfiguration hängt davon ab, wo der Subgraph eingesetzt werden soll. ### Subgraph Studio @@ -198,13 +198,13 @@ graph auth graph deploy ``` -### Local Graph Node (based on default configuration) +### Lokaler Graph-Knoten (basierend auf der Standardkonfiguration) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: +Sobald Ihr Subgraph bereitgestellt wurde, wird er von Graph Node indiziert. Sie können den Fortschritt überprüfen, indem Sie den Subgraphen selbst abfragen: ```graphql { @@ -216,45 +216,45 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node.
You can } ``` -### Indexing NEAR with a Local Graph Node +### Indizieren von NEAR mit einem lokalen Graph-Knoten -Running a Graph Node that indexes NEAR has the following operational requirements: +Für den Betrieb eines Graph-Knotens, der NEAR indiziert, gelten die folgenden betrieblichen Anforderungen: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- NEAR Indexer Framework mit Firehose-Instrumentierung +- NEAR-Firehose-Komponente(n) +- Graph-Knoten mit konfiguriertem Firehose-Endpunkt -We will provide more information on running the above components soon. +Wir werden in Kürze weitere Informationen zum Betrieb der oben genannten Komponenten bereitstellen. -## Querying a NEAR Subgraph +## Abfrage eines NEAR-Subgraphen -The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +Der GraphQL-Endpunkt für NEAR-Subgraphen wird durch die Schemadefinition bestimmt, mit der vorhandenen API-Schnittstelle. Bitte besuchen Sie die [GraphQL-API-Dokumentation](/subgraphs/querying/graphql-api/) für weitere Informationen. -## Example Subgraphs +## Beispiele von Subgraphen -Here are some example Subgraphs for reference: +Hier sind einige Beispiel-Subgraphen als Referenz: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) -[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) +[NEAR Quittungen](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) ## FAQ -### How does the beta work? +### Wie funktioniert die Beta-Version? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration.
Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! +Die NEAR-Unterstützung befindet sich in der Beta-Phase, was bedeutet, dass es zu Änderungen an der API kommen kann, während wir weiter an der Verbesserung der Integration arbeiten. Bitte senden Sie eine E-Mail an near@thegraph.com, damit wir Sie bei der Erstellung von NEAR-Subgraphen unterstützen und Sie über die neuesten Entwicklungen auf dem Laufenden halten können! -### Can a Subgraph index both NEAR and EVM chains? +### Kann ein Subgraph sowohl NEAR- als auch EVM-Ketten indizieren? -No, a Subgraph can only support data sources from one chain/network. +Nein, ein Subgraph kann nur Datenquellen von einer Kette oder einem Netzwerk unterstützen. -### Can Subgraphs react to more specific triggers? +### Können Subgraphen auf spezifischere Auslöser reagieren? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +Zurzeit werden nur Block- und Quittungsauslöser unterstützt. Wir untersuchen derzeit Auslöser für Funktionsaufrufe an ein bestimmtes Konto. Wir sind auch an der Unterstützung von Ereignisauslösern interessiert, sobald NEAR über eine native Ereignisunterstützung verfügt. -### Will receipt handlers trigger for accounts and their sub-accounts? +### Werden Empfangshandler für Konten und deren Unterkonten ausgelöst? -If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: +Wenn ein `account` angegeben wird, wird nur der exakte Kontoname abgeglichen.
Es ist möglich, Unterkonten abzugleichen, indem ein Feld `accounts` mit `suffixes` und `prefixes` angegeben wird, um Konten und Unterkonten abzugleichen, z. B. würde das folgende Feld allen Unterkonten von `mintbase1.near` entsprechen: ```yaml accounts: @@ -262,22 +262,22 @@ accounts: - mintbase1.near ``` -### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? +### Können NEAR-Subgraphen bei Mappings View-Aufrufe auf NEAR-Konten machen? -This is not supported. We are evaluating whether this functionality is required for indexing. +Dies wird nicht unterstützt. Wir prüfen derzeit, ob diese Funktion für die Indizierung erforderlich ist. -### Can I use data source templates in my NEAR Subgraph? +### Kann ich Datenquellenvorlagen in meinem NEAR-Subgraphen verwenden? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +Dies wird derzeit nicht unterstützt. Wir prüfen derzeit, ob diese Funktion für die Indizierung erforderlich ist. -### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? +### Ethereum-Subgraphen unterstützen „schwebende“ und „aktuelle“ Versionen. Wie kann ich eine „schwebende“ Version eines NEAR-Subgraphen bereitstellen? -Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. +Die Pending-Funktionalität wird für NEAR-Subgraphen noch nicht unterstützt. In der Zwischenzeit können Sie eine neue Version in einem anderen „benannten“ Subgraphen bereitstellen.
Wenn dieser dann mit dem Kettenkopf synchronisiert ist, können Sie eine erneute Bereitstellung in Ihrem primären „benannten“ Subgraphen vornehmen, der dieselbe zugrunde liegende Bereitstellungs-ID verwendet, sodass der Haupt-Subgraph sofort synchronisiert wird. -### My question hasn't been answered, where can I get more help building NEAR Subgraphs? +### Meine Frage wurde nicht beantwortet. Wo kann ich weitere Hilfe bei der Erstellung von NEAR-Subgraphen erhalten? -If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +Wenn es sich um eine allgemeine Frage zur Entwicklung von Subgraphen handelt, gibt es viele weitere Informationen im Rest der [Entwicklerdokumentation](/subgraphs/quick-start/). Andernfalls treten Sie bitte dem [The Graph Protocol Discord](https://discord.gg/graphprotocol) bei und fragen Sie im Kanal #near oder schreiben Sie eine E-Mail an near@thegraph.com. -## References +## Referenzen -- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) +- [NEAR-Entwicklerdokumentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/de/subgraphs/guides/polymarket.mdx b/website/src/pages/de/subgraphs/guides/polymarket.mdx index 74efe387b0d7..548c823e58a6 100644 --- a/website/src/pages/de/subgraphs/guides/polymarket.mdx +++ b/website/src/pages/de/subgraphs/guides/polymarket.mdx @@ -1,23 +1,23 @@ --- -title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph -sidebarTitle: Query Polymarket Data +title: Abfrage von Blockchain-Daten von Polymarket mit Subgraphen auf The Graph +sidebarTitle: Abfrage von Polymarket-Daten --- -Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network.
Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Abfrage der Onchain-Daten von Polymarket mit GraphQL über Subgraphen im The Graph Network. Subgraphen sind dezentrale APIs, die von The Graph angetrieben werden, einem Protokoll zur Indizierung & Abfrage von Daten aus Blockchains. -## Polymarket Subgraph on Graph Explorer +## Polymarket-Subgraph im Graph Explorer -You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +Auf der [Seite des Polymarket-Subgraphen im Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) können Sie eine interaktive Abfrage-Spielwiese sehen, auf der Sie jede Abfrage testen können. ![Polymarket Playground](/img/Polymarket-playground.png) -## How to use the Visual Query Editor +## Verwendung des visuellen Abfrageeditors -The visual query editor helps you test sample queries from your Subgraph. +Der visuelle Abfrage-Editor hilft Ihnen beim Testen von Beispielabfragen aus Ihrem Subgraphen. -You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. +Mit dem GraphiQL Explorer können Sie Ihre GraphQL-Abfragen zusammenstellen, indem Sie auf die gewünschten Felder klicken.
-### Example Query: Get the top 5 highest payouts from Polymarket
+### Beispielabfrage: Die Top 5 der höchsten Auszahlungen von Polymarket abrufen
```
{
@@ -30,7 +30,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on
}
```
-### Example output
+### Beispielausgabe
```
{
@@ -71,41 +71,41 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on
}
```
-## Polymarket's GraphQL Schema
+## Polymarkets GraphQL-Schema
-The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+Das Schema für diesen Subgraphen ist [hier in Polymarkets GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql) definiert.
-### Polymarket Subgraph Endpoint
+### Polymarket Subgraph-Endpunkt
https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp
-The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer).
+Der Polymarket Subgraph-Endpunkt ist auf [Graph Explorer](https://thegraph.com/explorer) verfügbar.
-![Polymarket Endpoint](/img/Polymarket-endpoint.png)
+![Polymarket Endpunkt](/img/Polymarket-endpoint.png)
-## How to Get your own API Key
+## Wie Sie Ihren eigenen API-Schlüssel erhalten
-1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
-2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+1. Gehen Sie zu [https://thegraph.com/studio](http://thegraph.com/studio) und verbinden Sie Ihre Wallet
+2. Rufen Sie https://thegraph.com/studio/apikeys/ auf, um einen API-Schlüssel zu erstellen
-You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+Sie können diesen API-Schlüssel für jeden Subgraphen im [Graph Explorer](https://thegraph.com/explorer) verwenden, und er ist nicht nur auf Polymarket beschränkt.
-100k queries per month are free which is perfect for your side project!
+100k Abfragen pro Monat sind kostenlos, was perfekt für Ihr Nebenprojekt ist!
-## Additional Polymarket Subgraphs
+## Zusätzliche Polymarket Subgraphen
- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
-- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
-- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
-- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
-## How to Query with the API
+## Abfragen mit der API
-You can pass any GraphQL query to the Polymarket endpoint and receive data in json format.
+Sie können eine beliebige GraphQL-Abfrage an den Polymarket-Endpunkt übergeben und Daten im JSON-Format erhalten.
-This following code example will return the exact same output as above.
+Das folgende Codebeispiel liefert genau die gleiche Ausgabe wie oben.
-### Sample Code from node.js +### Beispielcode aus node.js ``` const axios = require('axios'); @@ -127,22 +127,22 @@ const graphQLRequest = { }, }; -// Send the GraphQL query +// Senden der GraphQL-Abfrage axios(graphQLRequest) .then((response) => { - // Handle the response here + // Behandeln Sie die Antwort hier const data = response.data.data console.log(data) }) .catch((error) => { - // Handle any errors + // Behandeln Sie eventuelle Fehler console.error(error); }); ``` -### Additional resources +### Zusätzliche Ressourcen -For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). +Weitere Informationen zur Abfrage von Daten aus Ihrem Subgraphen finden Sie [hier](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +Um alle Möglichkeiten zu erkunden, wie Sie Ihren Subgraphen optimieren & anpassen können, um eine bessere Leistung zu erzielen, lesen Sie mehr über [Erstellen eines Subgraphen hier](/developing/creating-a-subgraph/). diff --git a/website/src/pages/de/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/de/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..dc8bea1a3a0c 100644 --- a/website/src/pages/de/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/de/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -1,48 +1,48 @@ --- -title: How to Secure API Keys Using Next.js Server Components +title: Wie man API-Schlüssel mit Next.js Server-Komponenten sichert --- -## Overview +## Überblick -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. 
To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +Wir können [Next.js Server-Komponenten](https://nextjs.org/docs/app/building-your-application/rendering/server-components) verwenden, um unseren API-Schlüssel vor der Offenlegung im Frontend unserer App zu schützen. Um die Sicherheit unseres API-Schlüssels weiter zu erhöhen, können wir auch [unseren API-Schlüssel auf bestimmte Subgraphen oder Domänen in Subgraph Studio beschränken](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. +In diesem Kochbuch (Schritt-für-Schritt Anleitung) wird gezeigt, wie man eine Next.js-Serverkomponente erstellt, die einen Subgraphen abfragt und gleichzeitig den API-Schlüssel vor dem Frontend verbirgt. -### Caveats +### Vorbehalte -- Next.js server components do not protect API keys from being drained using denial of service attacks. -- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. -- Next.js server components introduce centralization risks as the server can go down. +- Next.js-Serverkomponenten schützen API-Schlüssel nicht vor Denial-of-Service-Angriffen. +- The Graph Network Gateways verfügen über Strategien zur Erkennung und Eindämmung von Denial-of-Service-Attacken, doch die Verwendung von Serverkomponenten kann diese Schutzmaßnahmen schwächen. +- Next.js-Serverkomponenten bergen Zentralisierungsrisiken, da der Server ausfallen kann. -### Why It's Needed +### Warum es gebraucht wird -In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. 
While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
+In einer Standard-React-Anwendung können API-Schlüssel, die im Frontend-Code enthalten sind, auf der Client-Seite offengelegt werden, was ein Sicherheitsrisiko darstellt. Obwohl `.env`-Dateien häufig verwendet werden, schützen sie die Schlüssel nicht vollständig, da der Code von React auf der Client-Seite ausgeführt wird und die API-Schlüssel in den Headern offengelegt werden. Next.js Server Components lösen dieses Problem, indem sie sensible Operationen serverseitig verarbeiten.
-### Using client-side rendering to query a Subgraph
+### Client-seitiges Rendering zur Abfrage eines Subgraphen verwenden
-![Client-side rendering](/img/api-key-client-side-rendering.png)
+![Client-seitiges Rendering](/img/api-key-client-side-rendering.png)
-### Prerequisites
+### Voraussetzungen
-- An API key from [Subgraph Studio](https://thegraph.com/studio)
-- Basic knowledge of Next.js and React.
-- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+- Ein API-Schlüssel von [Subgraph Studio](https://thegraph.com/studio)
+- Grundkenntnisse in Next.js und React.
+- Ein bestehendes Next.js-Projekt, das den [App Router](https://nextjs.org/docs/app) verwendet.
-## Step-by-Step Cookbook
+## Schritt-für-Schritt Cookbook
-### Step 1: Set Up Environment Variables
+### Schritt 1: Einrichten der Umgebungsvariablen
-1. In our Next.js project root, create a `.env.local` file.
-2. Add our API key: `API_KEY=`.
+1. Erstellen Sie im Stammverzeichnis unseres Next.js-Projekts eine Datei `.env.local`.
+2. Fügen Sie unseren API-Schlüssel hinzu: `API_KEY=`.
-### Step 2: Create a Server Component
+### Schritt 2: Erstellen einer Server-Komponente
-1. In our `components` directory, create a new file, `ServerComponent.js`.
-2.
Use the provided example code to set up the server component.
+1. Erstellen Sie in unserem Verzeichnis `components` eine neue Datei `ServerComponent.js`.
+2. Verwenden Sie den mitgelieferten Beispielcode, um die Serverkomponente einzurichten.
-### Step 3: Implement Server-Side API Request
+### Schritt 3: Implementierung der serverseitigen API-Anfrage
-In `ServerComponent.js`, add the following code:
+Fügen Sie in `ServerComponent.js` den folgenden Code ein:
```javascript
const API_KEY = process.env.API_KEY
@@ -95,10 +95,10 @@ export default async function ServerComponent() {
}
```
-### Step 4: Use the Server Component
+### Schritt 4: Verwenden Sie die Server-Komponente
-1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
-2. Render the component:
+1. In unserer Seitendatei (z. B. `pages/index.js`) importieren Sie `ServerComponent`.
+2. Rendern Sie die Komponente:
```javascript
import ServerComponent from './components/ServerComponent'
@@ -112,12 +112,12 @@ export default function Home() {
}
```
-### Step 5: Run and Test Our Dapp
+### Schritt 5: Starten und testen Sie unsere Dapp
-Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+Starten Sie unsere Next.js-Anwendung mit `npm run dev`. Überprüfen Sie, ob die Serverkomponente Daten abruft, ohne den API-Schlüssel preiszugeben.
-![Server-side rendering](/img/api-key-server-side-rendering.png)
+![Serverseitiges Rendering](/img/api-key-server-side-rendering.png)
-### Conclusion
+### Schlussfolgerung
-By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
+Durch die Verwendung von Next.js Server Components haben wir den API-Schlüssel effektiv vor der Client-Seite versteckt, was die Sicherheit unserer Anwendung erhöht. Diese Methode stellt sicher, dass sensible Vorgänge serverseitig behandelt werden, weit weg von potentiellen clientseitigen Schwachstellen. Abschließend sollten Sie unbedingt [andere Sicherheitsmaßnahmen für API-Schlüssel](/subgraphs/querying/managing-api-keys/) erkunden, um die Sicherheit Ihrer API-Schlüssel noch weiter zu erhöhen. diff --git a/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..900ecb8e636d --- /dev/null +++ b/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregieren von Daten mit Hilfe von Subgraphen-Komposition +sidebarTitle: Erstellen eines zusammensetzbaren Subgraphen mit mehreren Subgraphen +--- + +Nutzen Sie die Komposition von Subgraphen, um die Entwicklungszeit zu verkürzen. Erstellen Sie einen Basis-Subgraphen mit den wichtigsten Daten und bauen Sie dann weitere Subgraphen darauf auf. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Einführung + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. 
+
+### Vorteile der Komposition
+
+Die Komposition von Subgraphen ist eine leistungsstarke Funktion für die Skalierung, die es Ihnen ermöglicht:
+
+- Wiederverwendung, Mischung und Kombination vorhandener Daten
+- Rationalisierung von Entwicklung und Abfragen
+- Verwendung mehrerer Datenquellen (bis zu fünf Subgraphen als Quelle)
+- Beschleunigung der Synchronisierung Ihres Subgraphen
+- Behandlung von Fehlern und Optimierung der Neusynchronisierung
+
+## Architektur-Übersicht
+
+Für dieses Beispiel werden zwei Subgraphen erstellt:
+
+1. **Quell-Subgraph**: Verfolgt Ereignisdaten als Entitäten.
+2. **Abhängiger Subgraph**: Verwendet den Quell-Subgraphen als Datenquelle.
+
+Sie finden diese in den Verzeichnissen `source` und `dependent`.
+
+- Der **Quell-Subgraph** ist ein grundlegender Ereignisverfolgungs-Subgraph, der Ereignisse aufzeichnet, die von relevanten Verträgen ausgehen.
+- Der **abhängige Subgraph** referenziert den Quell-Subgraph als Datenquelle und verwendet die Entitäten aus der Quelle als Auslöser.
+
+Während der Quell-Subgraph ein Standard-Subgraph ist, verwendet der abhängige Subgraph die Subgraph-Kompositionsfunktion.
+
+## Voraussetzungen
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but the entities composed from them cannot also use aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Los geht’s
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Besonderheiten
+
+- Um dieses Beispiel einfach zu halten, verwenden alle Source-Subgraphen nur Block-Handler. In einer realen Umgebung wird jedoch jeder Source-Subgraph Daten aus verschiedenen Smart Contracts verwenden.
+- Die folgenden Beispiele zeigen, wie Sie das Schema eines anderen Subgraphen importieren und erweitern können, um seine Funktionalität zu verbessern.
+- Jeder Source-Subgraph wird für eine bestimmte Entität optimiert.
+- Alle aufgeführten Befehle installieren die erforderlichen Abhängigkeiten, generieren Code auf der Grundlage des GraphQL-Schemas, erstellen den Subgraphen und stellen ihn auf Ihrer lokalen Graph Node-Instanz bereit.
+
+### Schritt 1. Blockzeit-Source-Subgraph bereitstellen
+
+Dieser erste Source-Subgraph berechnet die Blockzeit für jeden Block.
+
+- Er importiert Schemata aus anderen Subgraphen und fügt eine `block`-Entität mit einem `timestamp`-Feld hinzu, das die Zeit angibt, zu der jeder Block abgebaut wurde.
+- Er hört auf zeitbezogene Blockchain-Ereignisse (z. B. Blockzeitstempel) und verarbeitet diese Daten, um die Entitäten des Subgraphen entsprechend zu aktualisieren.
+
+Um diesen Subgraphen lokal einzusetzen, führen Sie die folgenden Befehle aus:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Schritt 2. Block Cost Source-Subgraph bereitstellen
+
+Dieser zweite Source-Subgraph indiziert die Kosten für jeden Block.
+
+#### Schlüsselfunktionen
+
+- Er importiert Schemata aus anderen Subgraphen und fügt eine `block`-Entität mit kostenbezogenen Feldern hinzu.
+- Er hört auf Blockchain-Ereignisse im Zusammenhang mit Kosten (z. B. Gasgebühren, Transaktionskosten) und verarbeitet diese Daten, um die Entitäten des Subgraphen entsprechend zu aktualisieren.
+
+Um diesen Subgraphen lokal bereitzustellen, führen Sie die gleichen Befehle wie oben aus.
+
+### Schritt 3. Blockgröße im Source-Subgraphen definieren
+
+Dieser dritte Source-Subgraph indiziert die Größe der einzelnen Blöcke. Um diesen Subgraphen lokal einzusetzen, führen Sie die gleichen Befehle wie oben aus.
+
+#### Schlüsselfunktionen
+
+- Er importiert bestehende Schemata von anderen Subgraphen und fügt eine `block`-Entität mit einem `size`-Feld hinzu, das die Größe eines jeden Blocks angibt.
+- Er hört auf Blockchain-Ereignisse in Bezug auf Blockgrößen (z. B. Speicher oder Volumen) und verarbeitet diese Daten, um die Entitäten des Subgraphen entsprechend zu aktualisieren.
+
+### Schritt 4. Zusammenführung im Block-Statistik-Subgraphen
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Jede Änderung an einem Source-Subgraphen wird wahrscheinlich eine neue Bereitstellungs-ID erzeugen.
+> - Stellen Sie sicher, dass Sie die Bereitstellungs-ID in der Datenquellenadresse des Subgraph-Manifests aktualisieren, um von den neuesten Änderungen zu profitieren.
+> - Alle Source-Subgraphen sollten bereitgestellt werden, bevor der zusammengesetzte Subgraph bereitgestellt wird.
+
+#### Schlüsselfunktionen
+
+- Er bietet ein konsolidiertes Datenmodell, das alle relevanten Blockmetriken umfasst.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Wichtigste Erkenntnisse
+
+- Dieses leistungsstarke Werkzeug skaliert die Entwicklung von Subgraphen und ermöglicht es Ihnen, mehrere Subgraphen zu kombinieren.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- Diese Funktion ermöglicht Skalierbarkeit und vereinfacht sowohl die Entwicklung als auch die Wartungseffizienz.
+
+## Zusätzliche Ressourcen
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- Um Ihrem Subgraphen erweiterte Funktionen hinzuzufügen, lesen Sie [Erweiterte Subgraph-Funktionen](/developing/creating/advanced/).
+- Um mehr über Aggregationen zu erfahren, lesen Sie [Zeitreihen und Aggregationen](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/de/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/de/subgraphs/guides/subgraph-debug-forking.mdx
index 91aa7484d2ec..83fa90179ff7 100644
--- a/website/src/pages/de/subgraphs/guides/subgraph-debug-forking.mdx
+++ b/website/src/pages/de/subgraphs/guides/subgraph-debug-forking.mdx
@@ -1,26 +1,26 @@
---
-title: Quick and Easy Subgraph Debugging Using Forks
+title: Schnelles und einfaches Debuggen von Subgraphen mit Forks
---
-As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
+Wie bei vielen Systemen, die große Datenmengen verarbeiten, können die Indexierer (Graph Nodes) von The Graph einige Zeit benötigen, um Ihren Subgraphen mit der Ziel-Blockchain zu synchronisieren. Die Diskrepanz zwischen schnellen Änderungen zum Zweck der Fehlersuche und langen Wartezeiten für die Indizierung ist äußerst kontraproduktiv und wir sind uns dessen bewusst. Aus diesem Grund führen wir das **Subgraph forking** ein, das von [LimeChain](https://limechain.tech/) entwickelt wurde, und in diesem Artikel zeige ich Ihnen, wie diese Funktion genutzt werden kann, um das Debuggen von Subgraphen erheblich zu beschleunigen!
-## Ok, what is it?
+## Ok, was ist es?
-**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
+**Subgraph forking** ist der Prozess, bei dem Entitäten aus dem Speicher eines _anderen_ Subgraphen (normalerweise eines entfernten) bei Bedarf (lazily) geholt werden.
-In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
+Im Zusammenhang mit der Fehlersuche ermöglicht **Subgraph forking** die Fehlersuche in einem fehlgeschlagenen Subgraphen im Block _X_, ohne dass Sie auf die Synchronisierung mit Block _X_ warten müssen.
-## What?! How?
+## Was? Wie?
-When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+Wenn Sie einen Subgraphen an einen entfernten Graph Node zur Indizierung bereitstellen und dieser bei Block _X_ ausfällt, ist die gute Nachricht, dass der Graph Node weiterhin GraphQL-Abfragen mit seinem Speicher bedient, der mit Block _X_ synchronisiert ist. Das ist großartig! Das bedeutet, dass wir diesen „aktuellen“ Speicher nutzen können, um die Fehler zu beheben, die bei der Indizierung von Block _X_ auftreten.
-In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+Kurz gesagt, wir _forken den fehlgeschlagenen Subgraphen_ von einem entfernten Graph Node, der garantiert den Subgraphen bis zum Block _X_ indiziert hat, um dem lokal eingesetzten Subgraphen, der im Block _X_ debuggt wird, eine aktuelle Sicht auf den Indizierungsstatus zu geben.
-## Please, show me some code!
+## Bitte, zeigen Sie mir etwas Code!
-To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+Um uns auf das Debuggen von Subgraphen zu konzentrieren, halten wir die Dinge einfach und führen den [Beispiel-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) aus, der den Ethereum Gravity Smart Contract indiziert.
-Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+Hier sind die für die Indizierung von `Gravatar` definierten Handler, die keinerlei Fehler aufweisen:
```tsx
export function handleNewGravatar(event: NewGravatar): void {
@@ -44,43 +44,43 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
}
```
-Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+Oops, wie schade, wenn ich meinen perfekt aussehenden Subgraphen in [Subgraph Studio](https://thegraph.com/studio/) einsetze, schlägt er mit der Fehlermeldung _„Gravatar not found!“_ fehl.
-The usual way to attempt a fix is:
+Der übliche Weg, eine Lösung zu finden, ist:
-1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
-2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
-3. Wait for it to sync-up.
-4. If it breaks again go back to 1, otherwise: Hooray!
+1. Nehmen Sie eine Änderung in der Mappingquelle vor, von der Sie glauben, dass sie das Problem lösen wird (während ich weiß, dass sie es nicht tut).
+2. Stellen Sie den Subgraphen erneut in [Subgraph Studio](https://thegraph.com/studio/) (oder einem anderen entfernten Graph-Knoten) bereit.
+3. Warten Sie, bis er synchronisiert ist.
+4. Wenn es wieder fehlschlägt, gehen Sie zurück zu 1, andernfalls: Hurra!
-It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
+Das ähnelt in der Tat einem gewöhnlichen Debug-Prozess, aber es gibt einen Schritt, der das Verfahren schrecklich verlangsamt: _3. Warten Sie auf die Synchronisierung._
-Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+Mit **Subgraph forking** können wir diesen Schritt im Wesentlichen eliminieren. So sieht es aus:
-0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
-1. Make a change in the mappings source, which you believe will solve the issue.
-2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
-3. If it breaks again, go back to 1, otherwise: Hooray!
+0. Starten Sie einen lokalen Graph Node mit passend gesetzter **_fork-base_**.
+1. Nehmen Sie eine Änderung in der Mappingquelle vor, von der Sie glauben, dass sie das Problem lösen wird.
+2. Stellen Sie auf dem lokalen Graph Node bereit, **_forken Sie den fehlgeschlagenen Subgraphen_** und **_beginnen Sie beim problematischen Block_**.
+3. Wenn es wieder fehlschlägt, gehen Sie zurück zu 1, andernfalls: Hurra!
-Now, you may have 2 questions:
+Jetzt haben Sie vielleicht 2 Fragen:
-1. fork-base what???
-2. Forking who?!
+1. fork-base, was???
+2. Forking von wem?!
-And I answer:
+Und ich antworte:
-1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store.
-2. Forking is easy, no need to sweat:
+1. `fork-base` ist die ‚Basis‘-URL, so dass, wenn die _subgraph id_ angehängt wird, die resultierende URL (`/`) ein gültiger GraphQL-Endpunkt für den Subgraph-Speicher ist.
+2.
Forken ist einfach, kein Grund zu schwitzen:
```bash
$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020
```
-Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+Vergessen Sie auch nicht, das Feld `dataSources.source.startBlock` im Subgraph-Manifest auf die Nummer des problematischen Blocks zu setzen, damit Sie die Indizierung unnötiger Blöcke überspringen und die Vorteile des Forks nutzen können!
-So, here is what I do:
+Also, ich mache Folgendes:
-1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+1. Ich starte einen lokalen Graph Node ([hier wird erklärt, wie man es macht](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) mit der Option `fork-base`, gesetzt auf `https://api.thegraph.com/subgraphs/id/`, da ich einen Subgraphen, den fehlerhaften, den ich zuvor eingesetzt habe, von [Subgraph Studio](https://thegraph.com/studio/) forken werde.
```
$ cargo run -p graph-node --release -- \
@@ -90,12 +90,12 @@ $ cargo run -p graph-node --release -- \
--fork-base https://api.thegraph.com/subgraphs/id/
```
-2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
-3.
After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+2. Nach sorgfältiger Prüfung stelle ich fest, dass es eine Unstimmigkeit in der `id`-Darstellung gibt, die bei der Indizierung von `Gravatar` in meinen beiden Handlern verwendet wird. Während `handleNewGravatar` sie in eine Hexadezimaldarstellung umwandelt (`event.params.id.toHex()`), verwendet `handleUpdatedGravatar` eine int32-Darstellung (`event.params.id.toI32()`), was dazu führt, dass `handleUpdatedGravatar` mit der Meldung „Gravatar not found!“ abstürzt. Ich lasse sie beide die `id` in eine Hexadezimalzahl konvertieren.
+3. Nachdem ich die Änderungen vorgenommen habe, stelle ich meinen Subgraphen auf dem lokalen Graph Node bereit, **_forke den fehlgeschlagenen Subgraphen_** und setze `dataSources.source.startBlock` auf `6190343` in `subgraph.yaml`:
```bash
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```
-4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
-5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
+4. Ich schaue mir die vom lokalen Graph Node erstellten Protokolle an, und - hurra - alles scheint zu funktionieren.
+5. Ich stelle meinen nun fehlerfreien Subgraphen auf einem entfernten Graph Node bereit und lebe glücklich bis ans Ende meiner Tage!
(allerdings ohne Kartoffeln) diff --git a/website/src/pages/de/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/de/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..17c44f701811 100644 --- a/website/src/pages/de/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/de/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,29 +1,29 @@ --- -title: Safe Subgraph Code Generator +title: Sicherer Subgraph Code Generator --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) ist ein Codegenerierungswerkzeug, das eine Reihe von Hilfsfunktionen aus dem Graphql-Schema eines Projekts generiert. Es stellt sicher, dass alle Interaktionen mit Entitäten in Ihrem Subgraphen vollkommen sicher und konsistent sind. -## Why integrate with Subgraph Uncrashable? +## Warum sollte man mit Subgraph Uncrashable integrieren? -- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. +- **Kontinuierliche Betriebszeit**. Falsch behandelte Entitäten können zum Absturz von Subgraphen führen, was für Projekte, die von The Graph abhängig sind, störend sein kann. Richten Sie Hilfsfunktionen ein, um Ihre Subgraphen „absturzsicher“ zu machen und die Geschäftskontinuität zu gewährleisten. -- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. 
+- **Vollständig sicher**. Häufig auftretende Probleme bei der Entwicklung von Subgraphen sind das Laden von undefinierten Entitäten, das nicht Setzen oder Initialisieren aller Werte von Entitäten und Race Conditions beim Laden und Speichern von Entitäten. Stellen Sie sicher, dass alle Interaktionen mit Entitäten vollständig atomar sind. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. +- **Benutzerdefinierbar** Legen Sie Standardwerte fest und konfigurieren Sie den Grad der Sicherheitsprüfungen, der Ihren individuellen Projektanforderungen entspricht. Es werden Warnprotokolle aufgezeichnet, die anzeigen, wo eine Verletzung der Subgraph-Logik vorliegt, um das Problem zu beheben und die Datengenauigkeit zu gewährleisten. -**Key Features** +**Schlüsselfunktionen** -- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- Das Code-Generierungstool unterstützt **alle** Subgraphentypen und ist für Benutzer konfigurierbar, um sinnvolle Standardwerte festzulegen. Die Codegenerierung verwendet diese Konfiguration, um Hilfsfunktionen zu generieren, die den Vorgaben des Benutzers entsprechen. -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. +- Das Framework enthält auch eine Möglichkeit (über die Konfigurationsdatei), benutzerdefinierte, aber sichere Setter-Funktionen für Gruppen von Entitätsvariablen zu erstellen. 
Auf diese Weise ist es für den Benutzer unmöglich, eine veraltete Graph-Entität zu laden/zu verwenden, und es ist auch unmöglich, zu vergessen, eine Variable zu speichern oder zu setzen, die von der Funktion benötigt wird. -- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. +- Warnmeldungen werden als Protokolle aufgezeichnet, die anzeigen, wo ein Verstoß gegen die Subgraph-Logik vorliegt, um das Problem zu beheben und die Datengenauigkeit zu gewährleisten. -Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. +Subgraph Uncrashable kann als optionales Flag mit dem Graph CLI Codegen-Befehl ausgeführt werden. ```sh graph codegen -u [options] [] ``` -Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. +Besuchen Sie die [Subgraph-Dokumentation zur Uncrash-Funktion](https://float-capital.github.io/float-subgraph-uncrashable/docs/) oder sehen Sie sich dieses [Video-Tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) an, um mehr zu erfahren und mit der Entwicklung sicherer Subgraphen zu beginnen. diff --git a/website/src/pages/de/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/de/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..680d30f2f4b6 100644 --- a/website/src/pages/de/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/de/subgraphs/guides/transfer-to-the-graph.mdx @@ -1,104 +1,104 @@ --- -title: Transfer to The Graph +title: Übertragung auf The Graph --- -Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). 
+Aktualisieren Sie schnell Ihre Subgraphen von jeder Plattform auf [The Graph's decentralized network](https://thegraph.com/networks/). -## Benefits of Switching to The Graph +## Vorteile der Umstellung auf The Graph -- Use the same Subgraph that your apps already use with zero-downtime migration. -- Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. +- Verwenden Sie denselben Subgraphen, den Ihre Anwendungen bereits verwenden, mit einer Zero-Downtime-Migration. +- Erhöhen Sie die Zuverlässigkeit durch ein globales Netzwerk, das von über 100 Indexierern unterstützt wird. +- Erhalten Sie blitzschnellen Support für Subgraphen rund um die Uhr, mit einem technischen Team, das auf Abruf bereitsteht. -## Upgrade Your Subgraph to The Graph in 3 Easy Steps +## Aktualisieren Sie Ihren Subgraphen in 3 einfachen Schritten auf The Graph -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Richten Sie Ihre Studioumgebung ein](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Stellen Sie Ihren Subgraphen im Studio bereit](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Veröffentlichen Sie im The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) -## 1. Set Up Your Studio Environment +## 1. Einrichten der Studioumgebung -### Create a Subgraph in Subgraph Studio +### Erstellen eines Subgraphen in Subgraph Studio -- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph".
It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". +- Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und verbinden Sie Ihre Wallet. +- Klicken Sie auf „Einen Subgraphen erstellen“. Es wird empfohlen, den Subgraphen in Title Case zu benennen: „Subgraph Name Chain Name“. -> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. +> Hinweis: Nach der Veröffentlichung ist der Name des Subgraphen bearbeitbar, erfordert aber jedes Mal eine Onchain-Aktion, also benennen Sie ihn richtig. -### Install the Graph CLI⁠ +### Installieren Sie die Graph CLI⁠ -You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +Sie müssen [Node.js](https://nodejs.org/) und einen Paketmanager Ihrer Wahl (`npm` oder `pnpm`) installiert haben, um das Graph CLI zu verwenden. Prüfen Sie, ob die [aktuellste](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI-Version installiert ist. -On your local machine, run the following command: +Führen Sie auf Ihrem lokalen Computer den folgenden Befehl aus: -Using [npm](https://www.npmjs.com/): +Verwendung von [npm](https://www.npmjs.com/): ```sh npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a Subgraph in Studio using the CLI: +Verwenden Sie den folgenden Befehl, um einen Subgraphen in Studio über die CLI zu erstellen: ```sh graph init --product subgraph-studio ``` -### Authenticate Your Subgraph +### Authentifizieren Sie Ihren Subgraphen -In The Graph CLI, use the auth command seen in Subgraph Studio: ```sh graph auth ``` -## 2. Deploy Your Subgraph to Studio +## 2.
Bereitstellung des Subgraphen in Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. +Wenn Sie Ihren Quellcode haben, können Sie ihn einfach in Studio bereitstellen. Wenn Sie ihn nicht haben, finden Sie hier eine schnelle Möglichkeit, Ihren Subgraphen bereitzustellen. -In The Graph CLI, run the following command: +Führen Sie in The Graph CLI den folgenden Befehl aus: ```sh graph deploy --ipfs-hash ``` -> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Hinweis:** Jeder Subgraph hat einen IPFS-Hash (Deployment ID), der wie folgt aussieht: „Qmasdfad...“. Zur Bereitstellung verwenden Sie einfach diesen **IPFS-Hash**. Sie werden aufgefordert, eine Version einzugeben (z. B. v0.0.1). -## 3. Publish Your Subgraph to The Graph Network +## 3. Veröffentlichen Ihres Subgraphen im The Graph Network -![publish button](/img/publish-sub-transfer.png) +![Schaltfläche „Veröffentlichen“](/img/publish-sub-transfer.png) -### Query Your Subgraph +### Fragen Sie Ihren Subgraphen ab -> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> Um etwa 3 Indexierer für die Abfrage Ihres Subgraphen zu gewinnen, wird empfohlen, mindestens 3.000 GRT zu kuratieren. Um mehr über das Kuratieren zu erfahren, lesen Sie [Kuratieren](/resources/roles/curating/) auf The Graph. -You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. 
+Sie können die [Abfrage](/subgraphs/querying/introduction/) eines beliebigen Subgraphen starten, indem Sie eine GraphQL-Abfrage an den Abfrage-URL-Endpunkt des Subgraphen senden, der sich am oberen Rand seiner Explorer-Seite in Subgraph Studio befindet. -#### Example +#### Beispiel -[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) von Messari: -![Query URL](/img/cryptopunks-screenshot-transfer.png) +![Abfrage-URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this Subgraph is: +Die Abfrage-URL für diesen Subgraphen lautet: ```sh -https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK +https://gateway-arbitrum.network.thegraph.com/api/`**ihr-eigener-api-schlüssel**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK ``` -Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint. +Jetzt müssen Sie nur noch **Ihren eigenen API-Schlüssel** eingeben, um GraphQL-Abfragen an diesen Endpunkt zu senden. -### Getting your own API Key +### Erhalten Sie Ihren eigenen API-Schlüssel -You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page: +Sie können API-Schlüssel in Subgraph Studio unter dem Menüpunkt „API-Schlüssel“ oben auf der Seite erstellen: -![API keys](/img/Api-keys-screenshot.png) +![API-Schlüssel](/img/Api-keys-screenshot.png) -### Monitor Subgraph Status +### Überwachen Sie den Subgraph-Status -Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+Nach dem Upgrade können Sie auf Ihre Subgraphen in [Subgraph Studio](https://thegraph.com/studio/) zugreifen und sie verwalten und alle Subgraphen in [The Graph Explorer](https://thegraph.com/networks/) erkunden. -### Additional Resources +### Zusätzliche Ressourcen -- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +- Wie Sie schnell einen neuen Subgraphen erstellen und veröffentlichen können, erfahren Sie im [Schnellstart](/subgraphs/quick-start/). +- Um alle Möglichkeiten zu erkunden, wie Sie Ihren Subgraphen optimieren und anpassen können, um eine bessere Leistung zu erzielen, lesen Sie hier mehr über das [Erstellen eines Subgraphen](/developing/creating-a-subgraph/). diff --git a/website/src/pages/de/subgraphs/querying/_meta-titles.json b/website/src/pages/de/subgraphs/querying/_meta-titles.json index a30daaefc9d0..1f70ade23096 100644 --- a/website/src/pages/de/subgraphs/querying/_meta-titles.json +++ b/website/src/pages/de/subgraphs/querying/_meta-titles.json @@ -1,3 +1,3 @@ { - "graph-client": "Graph Client" + "graph-client": "Graph-Client" } diff --git a/website/src/pages/de/subgraphs/querying/best-practices.mdx b/website/src/pages/de/subgraphs/querying/best-practices.mdx index ff5f381e2993..50053b27f889 100644 --- a/website/src/pages/de/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/de/subgraphs/querying/best-practices.mdx @@ -1,20 +1,20 @@ --- -title: Querying Best Practices +title: Best Practices für Abfragen --- -The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. +The Graph bietet eine dezentrale Möglichkeit zur Abfrage von Daten aus Blockchains.
Die Daten werden über eine GraphQL-API zugänglich gemacht, was die Abfrage mit der GraphQL-Sprache erleichtert. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Lernen Sie die wesentlichen GraphQL-Sprachregeln und Best Practices, um Ihren Subgraphen zu optimieren. --- -## Querying a GraphQL API +## Abfrage einer GraphQL-API -### The Anatomy of a GraphQL Query +### Die Anatomie einer GraphQL-Abfrage -Unlike REST API, a GraphQL API is built upon a Schema that defines which queries can be performed. +Im Gegensatz zur REST-API basiert eine GraphQL-API auf einem Schema, das definiert, welche Abfragen durchgeführt werden können. -For example, a query to get a token using the `token` query will look as follows: +Eine Abfrage zum Abrufen eines Tokens mit der Abfrage `token` sieht zum Beispiel wie folgt aus: ```graphql query GetToken($id: ID!) { @@ -25,7 +25,7 @@ query GetToken($id: ID!) { } ``` -which will return the following predictable JSON response (_when passing the proper `$id` variable value_): +die die folgende vorhersehbare JSON-Antwort zurückgibt (_bei Übergabe des richtigen Variablenwerts `$id`_): ```json { @@ -36,47 +36,47 @@ which will return the following predictable JSON response (_when passing the pro } ``` -GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/). +GraphQL-Abfragen verwenden die GraphQL-Sprache, die nach [einer Spezifikation](https://spec.graphql.org/) definiert ist. -The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders): +Die obige `GetToken`-Abfrage besteht aus mehreren Sprachteilen (im Folgenden durch `[...]` Platzhalter ersetzt): ```graphql query [operationName]([variableName]: [variableType]) { [queryName]([argumentName]: [variableName]) { - # "{ ...
}" drückt ein Selection-Set aus, wir fragen Felder von `queryName` ab. [field] [field] } } ``` -## Rules for Writing GraphQL Queries +## Regeln für das Schreiben von GraphQL-Abfragen -- Each `queryName` must only be used once per operation. -- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`) -- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/). -- Any variable assigned to an argument must match its type. -- In a given list of variables, each of them must be unique. -- All defined variables must be used. +- Jeder `queryName` darf nur einmal pro Vorgang verwendet werden. +- Jedes `field` darf nur einmal in einer Auswahl verwendet werden (wir können `id` nicht zweimal unter `token` abfragen) +- Einige `field`s oder Abfragen (wie `tokens`) geben komplexe Typen zurück, die eine Auswahl von Unterfeldern erfordern. Wird eine Auswahl nicht bereitgestellt, wenn sie erwartet wird (oder eine Auswahl bereitgestellt, wenn sie nicht erwartet wird - zum Beispiel bei `id`), wird ein Fehler ausgelöst. Um einen Feldtyp zu kennen, schauen Sie bitte im [Graph Explorer](/subgraphs/explorer/) nach. +- Jede Variable, die einem Argument zugewiesen wird, muss ihrem Typ entsprechen. +- In einer gegebenen Liste von Variablen muss jede von ihnen eindeutig sein. +- Alle definierten Variablen müssen verwendet werden. -> Note: Failing to follow these rules will result in an error from The Graph API. +> Hinweis: Die Nichtbeachtung dieser Regeln führt zu einer Fehlermeldung von The Graph API. For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
-### Sending a query to a GraphQL API +### Senden einer Abfrage an eine GraphQL API -GraphQL is a language and set of conventions that transport over HTTP. +GraphQL ist eine Sprache und ein Satz von Konventionen, die über HTTP transportiert werden. -It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). +Das bedeutet, dass Sie eine GraphQL-API mit dem Standard `fetch` abfragen können (nativ oder über `@whatwg-node/fetch` oder `isomorphic-fetch`). -However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: +Wie in [„Abfragen von einer Anwendung“](/subgraphs/querying/from-an-application/) erwähnt, wird jedoch empfohlen, den `graph-client` zu verwenden, der die folgenden einzigartigen Funktionen unterstützt: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) -- Fully typed result +- Kettenübergreifende Behandlung von Subgraphen: Abfragen von mehreren Subgraphen in einer einzigen Abfrage +- [Automatische Blockverfolgung](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [Automatische Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Vollständig typisiertes Ergebnis -Here's how to query The Graph with `graph-client`: +So wird The Graph mit `graph-client` abgefragt: ```tsx import { execute } from '../.graphclient' @@ -93,45 +93,43 @@ const variables = { id: '1' } async function main() { const result = await execute(query, variables) - // `result` is fully typed! + // `result` ist vollständig typisiert! 
console.log(result) } main() ``` -More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/). +Weitere GraphQL-Client-Alternativen werden in [„Abfragen von einer Anwendung“](/subgraphs/querying/from-an-application/) behandelt. --- -## Best Practices +## Bewährte Praktiken -### Always write static queries +### Schreiben Sie immer statische Abfragen -A common (bad) practice is to dynamically build query strings as follows: +Eine gängige (schlechte) Praxis ist es, Abfragezeichenfolgen dynamisch wie folgt zu erstellen: ```tsx const id = params.id const fields = ['id', 'owner'] const query = ` query GetToken { - token(id: ${id}) { - ${fields.join('\n')} + token(id: ${id}) { + ${fields.join('\n')} } } ` - -// Execute query... ``` -While the above snippet produces a valid GraphQL query, **it has many drawbacks**: +Auch wenn das obige Snippet eine gültige GraphQL-Abfrage erzeugt, **hat es viele Nachteile**: -- it makes it **harder to understand** the query as a whole -- developers are **responsible for safely sanitizing the string interpolation** -- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side** -- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools) +- es macht es **schwieriger**, die Abfrage als Ganzes zu verstehen +- Die Entwickler sind **für die sichere Bereinigung der String-Interpolation verantwortlich** +- die Werte der Variablen nicht als Teil der Anforderungsparameter zu senden, **verhindert eine mögliche Zwischenspeicherung auf der Server-Seite** +- es **verhindert, dass Werkzeuge die Abfrage statisch analysieren** (z. B.
Linter oder Werkzeuge zur Typgenerierung) -For this reason, it is recommended to always write queries as static strings: +Aus diesem Grund ist es empfehlenswert, Abfragen immer als statische Strings zu schreiben: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -153,18 +151,18 @@ const result = await execute(query, { }) ``` -Doing so brings **many advantages**: +Dies bringt **viele Vorteile**: -- **Easy to read and maintain** queries -- The GraphQL **server handles variables sanitization** -- **Variables can be cached** at server-level -- **Queries can be statically analyzed by tools** (more on this in the following sections) +- **Einfach zu lesende und zu pflegende** Abfragen +- Der GraphQL **Server kümmert sich um die Bereinigung von Variablen** +- **Variablen können auf Server-Ebene zwischengespeichert werden** +- **Abfragen können von Tools statisch analysiert werden** (mehr dazu in den folgenden Abschnitten) -### How to include fields conditionally in static queries +### Wie man Felder bedingt in statische Abfragen einbezieht -You might want to include the `owner` field only on a particular condition. +Möglicherweise möchten Sie das Feld `owner` nur unter einer bestimmten Bedingung einbeziehen. -For this, you can leverage the `@include(if:...)` directive as follows: +Dazu können Sie die Direktive `@include(if:...)` wie folgt nutzen: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -187,41 +185,42 @@ const result = await execute(query, { }) ``` -> Note: The opposite directive is `@skip(if: ...)`. +> Anmerkung: Die gegenteilige Direktive ist `@skip(if: ...)`. -### Ask for what you want +### Fragen Sie nach dem, was Sie wollen -GraphQL became famous for its "Ask for what you want" tagline. +GraphQL wurde durch den Slogan „Frag nach dem, was du willst“ bekannt. -For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually.
+Aus diesem Grund gibt es in GraphQL keine Möglichkeit, alle verfügbaren Felder zu erhalten, ohne sie einzeln auflisten zu müssen. -- When querying GraphQL APIs, always think of querying only the fields that will be actually used. -- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- Denken Sie bei der Abfrage von GraphQL-APIs immer daran, nur die Felder abzufragen, die tatsächlich verwendet werden. +- Stellen Sie sicher, dass Abfragen nur so viele Entitäten abrufen, wie Sie tatsächlich benötigen. Standardmäßig rufen Abfragen 100 Entitäten in einer Sammlung ab, was in der Regel viel mehr ist, als tatsächlich verwendet wird, z. B. für die Anzeige für den Benutzer. Dies gilt nicht nur für die Top-Level-Sammlungen in einer Abfrage, sondern vor allem auch für verschachtelte Sammlungen von Entitäten. -For example, in the following query: +Zum Beispiel in der folgenden Abfrage: ```graphql query listTokens { tokens { - # will fetch up to 100 tokens + # wird bis zu 100 Tokens abrufen id - transactions { - # will fetch up to 100 transactions + transactions { + # wird bis zu 100 Transaktionen abrufen id } } } ``` -The response could contain 100 transactions for each of the 100 tokens. +Die Antwort könnte 100 Transaktionen für jedes der 100 Token enthalten. -If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field. +Wenn die Anwendung nur 10 Transaktionen benötigt, sollte die Abfrage explizit `first: 10` für das Feld „transactions“ festlegen. -### Use a single query to request multiple records +### Verwenden Sie eine einzige Abfrage, um mehrere Datensätze abzufragen -By default, subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +Standardmäßig haben Subgraphen eine singuläre Entität für einen Datensatz. Für mehrere Datensätze verwenden Sie die Plural-Entitäten und den Filter: `where: {id_in:[X,Y,Z]}` oder `where: {volume_gt:100000}` -Example of inefficient querying: +Beispiel für eine ineffiziente Abfrage: ```graphql query SingleRecord { @@ -238,7 +237,7 @@ query SingleRecord { } ``` -Example of optimized querying: +Beispiel für eine optimierte Abfrage: ```graphql query ManyRecords { @@ -249,9 +248,9 @@ query ManyRecords { } ``` -### Combine multiple queries in a single request +### Mehrere Abfragen in einer einzigen Anfrage kombinieren -Your application might require querying multiple types of data as follows: +Für Ihre Anwendung kann es erforderlich sein, mehrere Datentypen wie folgt abzufragen: ```graphql import { execute } from "your-favorite-graphql-client" @@ -281,9 +280,9 @@ const [tokens, counters] = Promise.all( ) ``` -While this implementation is totally valid, it will require two round trips with the GraphQL API. +Diese Implementierung ist zwar völlig gültig, erfordert aber zwei Roundtrips zur GraphQL-API. -Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows: +Glücklicherweise ist es auch möglich, mehrere Abfragen in der gleichen GraphQL-Anfrage wie folgt zu senden: ```graphql import { execute } from "your-favorite-graphql-client" @@ -300,17 +299,16 @@ query GetTokensandCounters { } } ` - -const { result: { tokens, counters } } = execute(query) +const { result: { tokens, counters } } = execute(query) ```
+Dieser Ansatz **verbessert die Gesamtleistung**, indem er die im Netz verbrachte Zeit reduziert (erspart Ihnen einen Hin- und Rückweg zur API) und bietet eine **präzisere Implementierung**. -### Leverage GraphQL Fragments +### Nutzung von GraphQL-Fragmenten -A helpful feature to write GraphQL queries is GraphQL Fragment. +Eine hilfreiche Funktion zum Schreiben von GraphQL-Abfragen ist GraphQL Fragment. -Looking at the following query, you will notice that some fields are repeated across multiple Selection-Sets (`{ ... }`): +Wenn Sie sich die folgende Abfrage ansehen, werden Sie feststellen, dass einige Felder über mehrere Auswahlsätze hinweg wiederholt werden (`{ ... }`): ```graphql query { @@ -330,12 +328,12 @@ query { } ``` -Such repeated fields (`id`, `active`, `status`) bring many issues: +Solche wiederholten Felder (`id`, `active`, `status`) bringen viele Probleme mit sich: -- More extensive queries become harder to read. -- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- Umfangreichere Abfragen werden schwieriger zu lesen. +- Bei der Verwendung von Tools, die TypeScript-Typen auf Basis von Abfragen generieren (_mehr dazu im letzten Abschnitt_), führen `newDelegate` und `oldDelegate` zu zwei unterschiedlichen Inline-Schnittstellen. 
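Der Hinweis auf die zwei unterschiedlichen Inline-Schnittstellen lässt sich in TypeScript skizzieren. Die Typnamen unten sind hypothetisch; angenommen wird lediglich, dass ein Codegenerator pro Selection-Set einen Typ erzeugt und ein Fragment zu einem gemeinsamen, wiederverwendeten Typ führt:

```typescript
// Skizze: Mit einem Fragment entsteht EIN gemeinsamer Typ (hier `DelegateItemFragment`),
// den beide Felder wiederverwenden - statt zweier strukturell identischer Inline-Typen.
interface DelegateItemFragment {
  id: string
  active: boolean
  status: string
}

interface BondEvent {
  id: string
  // beide Felder verwenden denselben Fragment-Typ wieder
  newDelegate: DelegateItemFragment
  oldDelegate: DelegateItemFragment
}

const event: BondEvent = {
  id: '1',
  newDelegate: { id: 'n1', active: true, status: 'Registered' },
  oldDelegate: { id: 'o1', active: false, status: 'Unregistered' },
}

console.log(event.newDelegate.status)
```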
-A refactored version of the query would be the following: +Eine überarbeitete Version der Abfrage würde wie folgt aussehen: ```graphql query { @@ -350,45 +348,47 @@ query { } } -# we define a fragment (subtype) on Transcoder -# to factorize repeated fields in the query -fragment DelegateItem on Transcoder { +# wir definieren ein Fragment (Subtyp) auf Transcoder +# um wiederholte Felder in der Abfrage zu faktorisieren +fragment DelegateItem on Transcoder { id active status } ``` -Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. +Die Verwendung von GraphQL `fragment` verbessert die Lesbarkeit (insbesondere bei Skalierung) und führt zu einer besseren TypeScript-Typengenerierung. -When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). +Wenn Sie das Tool zur Generierung von Typen verwenden, wird die obige Abfrage einen geeigneten Typ `DelegateItemFragment` erzeugen (_siehe letzter Abschnitt „Tools“_). -### GraphQL Fragment do's and don'ts +### GraphQL-Fragmente: Was man tun und lassen sollte -### Fragment base must be a type +### Die Fragmentbasis muss ein Typ sein -A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: +Ein Fragment kann nicht auf einem nicht anwendbaren Typ basieren, kurz gesagt, **auf einem Typ, der keine Felder hat**: ```graphql fragment MyFragment on BigInt { - # ... + # ... } ``` -`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. +`BigInt` ist ein **Skalar** (nativer „einfacher“ Typ), der nicht als Basis für ein Fragment verwendet werden kann. -#### How to spread a Fragment +#### Wie man ein Fragment verbreitet -Fragments are defined on specific types and should be used accordingly in queries. +Fragmente sind für bestimmte Typen definiert und sollten entsprechend in Abfragen verwendet werden.
-Example: +Beispiel: ```graphql query { bondEvents { id newDelegate { - ...VoteItem # Error! `VoteItem` cannot be spread on `Transcoder` type + ...VoteItem # Fehler! `VoteItem` kann nicht auf dem Typ `Transcoder` verteilt werden } oldDelegate { ...VoteItem @@ -402,29 +402,29 @@ fragment VoteItem on Vote { } ``` -`newDelegate` and `oldDelegate` are of type `Transcoder`. +`newDelegate` und `oldDelegate` sind vom Typ `Transcoder`. -It is not possible to spread a fragment of type `Vote` here. +Es ist nicht möglich, ein Fragment des Typs `Vote` hier zu verbreiten. -#### Define Fragment as an atomic business unit of data +#### Definition eines Fragments als atomare Geschäftseinheit von Daten -GraphQL `Fragment`s must be defined based on their usage. +GraphQL `Fragment`s müssen entsprechend ihrer Verwendung definiert werden. -For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. +Für die meisten Anwendungsfälle reicht es aus, ein Fragment pro Typ zu definieren (im Falle der Verwendung wiederholter Felder oder der Generierung von Typen). -Here is a rule of thumb for using fragments: +Hier ist eine Faustregel für die Verwendung von Fragmenten: -- When fields of the same type are repeated in a query, group them in a `Fragment`. -- When similar but different fields are repeated, create multiple fragments, for instance: +- Wenn Felder desselben Typs in einer Abfrage wiederholt werden, gruppieren Sie sie in einem `Fragment`. +- Wenn sich ähnliche, aber unterschiedliche Felder wiederholen, erstellen Sie z. B.
mehrere Fragmente: ```graphql -# base fragment (mostly used in listing) +# Basisfragment (meist im Listing verwendet) fragment Voter on Vote { id voter } -# extended fragment (when querying a detailed view of a vote) +# erweitertes Fragment (bei Abfrage einer detaillierten Ansicht einer Abstimmung) fragment VoteWithPoll on Vote { id voter @@ -438,51 +438,51 @@ fragment VoteWithPoll on Vote { --- -## The Essential Tools +## Die wichtigsten Tools -### GraphQL web-based explorers +### Webbasierte GraphQL-Explorer -Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries. +Das Iterieren von Abfragen, indem Sie sie in Ihrer Anwendung ausführen, kann mühsam sein. Zögern Sie deshalb nicht, den [Graph Explorer](https://thegraph.com/explorer) zu verwenden, um Ihre Abfragen zu testen, bevor Sie sie Ihrer Anwendung hinzufügen. Der Graph Explorer bietet Ihnen eine vorkonfigurierte GraphQL-Spielwiese zum Testen Ihrer Abfragen. -If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql). +Wenn Sie nach einer flexibleren Methode zum Debuggen/Testen Ihrer Abfragen suchen, gibt es ähnliche webbasierte Tools wie [Altair](https://altairgraphql.dev/) und [GraphiQL](https://graphiql-online.com/graphiql). -### GraphQL Linting +### GraphQL-Linting -In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools. +Um die oben genannten Best Practices und syntaktischen Regeln einzuhalten, wird die Verwendung der folgenden Workflow- und IDE-Tools dringend empfohlen.
**GraphQL ESLint** -[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort. +[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) hilft Ihnen dabei, ohne jeglichen Aufwand auf dem neuesten Stand der GraphQL Best Practices zu bleiben. -[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as: +Die Konfiguration [„operations-recommended“](https://the-guild.dev/graphql/eslint/docs/configs) erzwingt wichtige Regeln wie z. B.: -- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type? -- `@graphql-eslint/no-unused variables`: should a given variable stay unused? -- and more! +- `@graphql-eslint/fields-on-correct-type`: wird ein Feld für den passenden Typ verwendet? +- `@graphql-eslint/no-unused-variables`: soll eine bestimmte Variable unbenutzt bleiben? +- und mehr! -This will allow you to **catch errors without even testing queries** on the playground or running them in production! +So können Sie **Fehler aufspüren, ohne Abfragen** auf dem Playground zu testen oder sie in der Produktion auszuführen!
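Eine minimale Beispielkonfiguration könnte etwa so aussehen (eine Skizze, angelehnt an die Getting-Started-Anleitung von GraphQL ESLint; Dateimuster und Projektstruktur sind hier nur Annahmen):

```javascript
// .eslintrc.js – Skizze; Dateimuster und Projektstruktur sind Annahmen.
module.exports = {
  overrides: [
    {
      // Lintet alle Operationen und Fragmente in .graphql-Dateien
      files: ['*.graphql'],
      parser: '@graphql-eslint/eslint-plugin',
      plugins: ['@graphql-eslint'],
      extends: ['plugin:@graphql-eslint/operations-recommended'],
    },
  ],
}
```

Damit greifen die oben genannten Regeln bereits beim Speichern bzw. im CI, ohne dass eine Abfrage ausgeführt werden muss.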
-### IDE plugins +### IDE-Plugins -**VSCode and GraphQL** +**VSCode und GraphQL** -The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: +Die [GraphQL VSCode-Erweiterung](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) ist eine hervorragende Ergänzung Ihres Entwicklungs-Workflows und bietet: -- Syntax highlighting -- Autocomplete suggestions -- Validation against schema +- Syntaxhervorhebung +- Autovervollständigungsvorschläge +- Validierung gegen das Schema - Snippets -- Go to definition for fragments and input types +- „Gehe zu Definition“ für Fragmente und Eingabetypen -If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. +Wenn Sie `graphql-eslint` verwenden, ist die [ESLint VSCode-Erweiterung](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) ein Muss, um Fehler und Warnungen direkt in Ihrem Code korrekt anzuzeigen. -**WebStorm/Intellij and GraphQL** +**WebStorm/IntelliJ und GraphQL** -The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: +Das [JS GraphQL Plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) wird Ihre Arbeit mit GraphQL erheblich verbessern, indem es Folgendes bietet: -- Syntax highlighting -- Autocomplete suggestions -- Validation against schema +- Syntaxhervorhebung +- Autovervollständigungsvorschläge +- Validierung gegen das Schema - Snippets -For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features.
+Weitere Informationen zu diesem Thema finden Sie im [WebStorm-Artikel](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/), in dem alle wichtigen Funktionen des Plugins vorgestellt werden. diff --git a/website/src/pages/de/subgraphs/querying/distributed-systems.mdx b/website/src/pages/de/subgraphs/querying/distributed-systems.mdx index 85337206bfd3..8f0f97242473 100644 --- a/website/src/pages/de/subgraphs/querying/distributed-systems.mdx +++ b/website/src/pages/de/subgraphs/querying/distributed-systems.mdx @@ -1,50 +1,50 @@ --- -title: Distributed Systems +title: Verteilte Systeme --- -The Graph is a protocol implemented as a distributed system. +The Graph ist ein Protokoll, das als verteiltes System implementiert ist. -Connections fail. Requests arrive out of order. Different computers with out-of-sync clocks and states process related requests. Servers restart. Re-orgs happen between requests. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale. +Verbindungen schlagen fehl. Anfragen treffen nicht in der richtigen Reihenfolge ein. Verschiedene Computer mit nicht synchronisierten Uhren und Zuständen bearbeiten zusammengehörige Anfragen. Server werden neu gestartet. Zwischen den Anfragen kommt es zu Re-orgs. Diese Probleme treten bei allen verteilten Systemen auf, verschärfen sich jedoch bei Systemen, die in globalem Maßstab arbeiten. -Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org. +Ein Beispiel zeigt, was passieren kann, wenn ein Client während einer Reorganisation einen Indexierer nach den neuesten Daten abfragt. -1. Indexer ingests block 8 -2. Request served to the client for block 8 -3. Indexer ingests block 9 -4. Indexer ingests block 10A -5. Request served to the client for block 10A -6. Indexer detects reorg to 10B and rolls back 10A -7. Request served to the client for block 9 -8. Indexer ingests block 10B -9.
Indexer ingests block 11 -10. Request served to the client for block 11 +1. Indexierer nimmt Block 8 auf +2. Anfrage an den Kunden für Block 8 +3. Indexierer nimmt Block 9 auf +4. Indexierer nimmt Block 10A auf +5. Anfrage an den Kunden für Block 10A +6. Indexierer erkennt Reorg nach 10B und rollt 10A zurück +7. Anfrage an den Kunden für Block 9 +8. Indexierer nimmt den Block 10B auf +9. Indexierer nimmt Block 11 auf +10. Anfrage an den Kunden für Block 11 -From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time. +Aus der Sicht des Indexers schreiten die Dinge logisch voran. Die Zeit schreitet voran, auch wenn wir einen Uncle-Block zurückrollen und den Konsensblock darauf anwenden mussten. Dabei bedient der Indexer die Anfragen mit dem neuesten Stand, der ihm zu diesem Zeitpunkt bekannt ist. -From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. +Aus der Sicht des Kunden erscheinen die Dinge jedoch chaotisch. Der Kunde stellt fest, dass die Antworten für die Blöcke 8, 10, 9 und 11 in dieser Reihenfolge erfolgten. Wir nennen dies das „Block Wobble“-Problem. Wenn ein Kunde von Block-Wobble betroffen ist, kann es sein, dass sich die Daten im Laufe der Zeit widersprechen.
Die Situation verschlimmert sich noch, wenn man bedenkt, dass nicht alle Indexer die neuesten Blöcke gleichzeitig aufnehmen und Ihre Anfragen möglicherweise an mehrere Indexer weitergeleitet werden. -It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. +Es liegt in der Verantwortung von Client und Server, zusammenzuarbeiten, um dem Benutzer konsistente Daten zu liefern. Je nach gewünschter Konsistenz müssen unterschiedliche Ansätze verwendet werden, da es nicht das eine richtige Programm für jedes Problem gibt. -Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. +Die Implikationen verteilter Systeme zu durchdenken ist schwierig, aber die Lösung muss es nicht sein! Wir haben APIs und Muster entwickelt, die Ihnen bei der Navigation in einigen häufigen Anwendungsfällen helfen. Die folgenden Beispiele veranschaulichen diese Muster, lassen aber Details aus, die für den Produktionscode erforderlich sind (z. B. Fehlerbehandlung und Abbruch von Anfragen), um die wichtigsten Ideen nicht zu verschleiern. -## Polling for updated data +## Abruf von aktualisierten Daten -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block.
If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph bietet die `block: { number_gte: $minBlock }` API, die sicherstellt, dass die Antwort für einen einzelnen Block gleich oder höher als `$minBlock` ist. Wenn die Anfrage an eine `graph-node` Instanz gestellt wird und der Min-Block noch nicht synchronisiert ist, wird `graph-node` einen Fehler zurückgeben. Wenn `graph-node` den Min-Block synchronisiert hat, wird er die Antwort für den letzten Block ausführen. Wenn die Anfrage an ein Edge & Node Gateway gerichtet ist, wird das Gateway alle Indexer herausfiltern, die den Min-Block noch nicht synchronisiert haben, und die Anfrage für den letzten Block stellen, den der Indexierer synchronisiert hat. -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: +Wir können `number_gte` verwenden, um sicherzustellen, dass die Zeit niemals rückwärts läuft, wenn wir Daten in einer Schleife abfragen. Hier ist ein Beispiel: ```javascript -/// Updates the protocol.paused variable to the latest -/// known value in a loop by fetching it using The Graph. +/// Aktualisiert die Variable protocol.paused auf den letzten +/// bekannten Wert in einer Schleife, indem sie ihn mit The Graph abruft. async function updateProtocolPaused() { - // It's ok to start with minBlock at 0. The query will be served - // using the latest block available. Setting minBlock to 0 is the - // same as leaving out that argument. + // Es ist in Ordnung, mit minBlock bei 0 zu beginnen. Die Abfrage wird + // mit dem letzten verfügbaren Block bedient. Das Setzen von minBlock auf 0 ist + // dasselbe wie das Weglassen dieses Arguments. let minBlock = 0 for (;;) { - // Schedule a promise that will be ready once - // the next Ethereum block will likely be available. 
+ // Planen Sie ein Promise, das bereit sein wird, sobald + // der nächste Ethereum-Block wahrscheinlich verfügbar ist. const nextBlock = new Promise((f) => { setTimeout(f, 14000) }) @@ -65,30 +65,30 @@ async function updateProtocolPaused() { const response = await graphql(query, variables) minBlock = response._meta.block.number - // TODO: Do something with the response data here instead of logging it. + // TODO: Machen Sie hier etwas mit den Antwortdaten, anstatt sie zu protokollieren. console.log(response.protocol.paused) - // Sleep to wait for the next block + // Sleep, um auf den nächsten Block zu warten await nextBlock } } ``` -## Fetching a set of related items +## Abrufen einer Gruppe verwandter Elemente -Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. +Ein weiterer Anwendungsfall ist der Abruf einer großen Menge oder, allgemeiner, der Abruf zusammengehöriger Elemente über mehrere Anfragen hinweg. Im Gegensatz zum Polling-Fall (bei dem die gewünschte Konsistenz darin bestand, in der Zeit voranzuschreiten), bezieht sich die gewünschte Konsistenz hier auf einen einzigen Zeitpunkt. -Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. +Hier verwenden wir das Argument `block: { hash: $blockHash }`, um alle unsere Ergebnisse an denselben Block zu binden. ```javascript -/// Gets a list of domain names from a single block using pagination +/// Ruft eine Liste von Domainnamen aus einem einzelnen Block mit Paginierung ab async function getDomainNames() { - // Set a cap on the maximum number of items to pull. + // Legen Sie eine Obergrenze für die maximale Anzahl der zu ziehenden Elemente fest.
let pages = 5 const perPage = 1000 - // The first query will get the first page of results and also get the block - // hash so that the remainder of the queries are consistent with the first. + // Die erste Abfrage erhält die erste Seite der Ergebnisse und auch den Block + // Hash, so dass die restlichen Abfragen mit der ersten konsistent sind. const listDomainsQuery = ` query ListDomains($perPage: Int!) { domains(first: $perPage) { @@ -107,9 +107,9 @@ async function getDomainNames() { let blockHash = data._meta.block.hash let query - // Continue fetching additional pages until either we run into the limit of - // 5 pages total (specified above) or we know we have reached the last page - // because the page has fewer entities than a full page. + // Wir fahren fort, weitere Seiten zu holen, bis wir entweder auf das Limit von + // 5 Seiten insgesamt (oben angegeben) stoßen oder wissen, dass wir die letzte Seite + // erreicht haben, weil die Seite weniger Entitäten als eine volle Seite hat. while (data.domains.length == perPage && --pages) { let lastID = data.domains[data.domains.length - 1].id query = ` @@ -122,7 +122,7 @@ async function getDomainNames() { data = await graphql(query, { perPage, lastID, blockHash }) - // Accumulate domain names into the result + // Domainnamen im Ergebnis akkumulieren for (domain of data.domains) { result.push(domain.name) } @@ -131,4 +131,4 @@ async function getDomainNames() { } ``` -Note that in case of a re-org, the client will need to retry from the first request to update the block hash to a non-uncle block. +Beachten Sie, dass der Client im Falle eines Reorgs ab der ersten Anfrage erneut versuchen muss, den Block-Hash auf einen nicht-uncle-Block zu aktualisieren. 
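Diese Wiederholung lässt sich als kleine Hilfsfunktion skizzieren. Hinweis: `withReorgRetry` ist eine hypothetische Skizze; sie nimmt an, dass die übergebene Funktion (z. B. das obige `getDomainNames`) eine Exception wirft, wenn ihr gepinnter Block-Hash nach einem Reorg nicht mehr zur kanonischen Kette gehört:

```javascript
// Skizze (Annahme): Die übergebene Funktion wirft einen Fehler, wenn ihr
// gepinnter Block-Hash nach einem Reorg nicht mehr kanonisch ist. In diesem
// Fall wird der gesamte Abruf von vorne gestartet, damit die erste Anfrage
// einen frischen, kanonischen Block-Hash wählt.
async function withReorgRetry(fetchAll, maxRetries = 3) {
  let lastError
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fetchAll()
    } catch (err) {
      // z. B. „unknown block hash“ nach einem Reorg – erneut versuchen
      lastError = err
    }
  }
  throw lastError
}
```

Verwendung (skizziert): `const names = await withReorgRetry(getDomainNames)`.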
diff --git a/website/src/pages/de/subgraphs/querying/from-an-application.mdx b/website/src/pages/de/subgraphs/querying/from-an-application.mdx index af85c4086630..9f016b3f2952 100644 --- a/website/src/pages/de/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/de/subgraphs/querying/from-an-application.mdx @@ -1,73 +1,74 @@ --- -title: Querying from an Application +title: Abfragen aus einer Anwendung +sidebarTitle: Abfragen aus einer App --- -Learn how to query The Graph from your application. +Erfahren Sie, wie Sie The Graph von Ihrer Anwendung aus abfragen können. -## Getting GraphQL Endpoints +## GraphQL-Endpunkte abrufen -During the development process, you will receive a GraphQL API endpoint at two different stages: one for testing in Subgraph Studio, and another for making queries to The Graph Network in production. +Während des Entwicklungsprozesses erhalten Sie einen GraphQL-API-Endpunkt in zwei verschiedenen Stadien: einen zum Testen in Subgraph Studio und einen weiteren für Abfragen an The Graph Network in der Produktion. -### Subgraph Studio Endpoint +### Subgraph Studio Endpunkt -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +Nachdem Sie Ihren Subgraphen in [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/) bereitgestellt haben, erhalten Sie einen Endpunkt, der wie folgt aussieht: ``` https://api.studio.thegraph.com/query/// ``` -> This endpoint is intended for testing purposes **only** and is rate-limited. +> Dieser Endpunkt ist **nur** für Testzwecke gedacht und hat eine begrenzte Übertragungsrate.
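Ein solcher Endpunkt lässt sich auch ohne Client-Bibliothek als einfacher HTTP-POST ansprechen. Die folgende Skizze baut nur die Request-Optionen zusammen; der Endpunkt und die Beispielabfrage sind Platzhalter/Annahmen:

```javascript
// Skizze: GraphQL über HTTP – der Request-Body ist JSON mit den Feldern
// "query" und "variables". Endpunkt und Abfrage sind hier nur Platzhalter.
function buildGraphqlRequest(query, variables = {}) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  }
}

// Verwendung (Endpunkt ist ein Platzhalter):
// const res = await fetch(endpoint, buildGraphqlRequest('{ _meta { block { number } } }'))
// const { data } = await res.json()
```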
-### The Graph Network Endpoint +### The Graph Network Endpunkt -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +Nachdem Sie Ihren Subgraphen im Netzwerk veröffentlicht haben, erhalten Sie einen Endpunkt, der wie folgt aussieht: ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> Dieser Endpunkt ist für die aktive Nutzung im Netz gedacht. Er ermöglicht es Ihnen, verschiedene GraphQL-Client-Bibliotheken zu verwenden, um den Subgraphen abzufragen und Ihre Anwendung mit indizierten Daten zu bestücken. -## Using Popular GraphQL Clients +## Gängige GraphQL-Clients verwenden -### Graph Client +### Graph-Client -The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: +The Graph bietet einen eigenen GraphQL-Client, `graph-client`, der einzigartige Funktionen unterstützt, wie z. B.: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) -- Fully typed result +- Kettenübergreifende Behandlung von Subgraphen: Abfragen von mehreren Subgraphen in einer einzigen Abfrage +- [Automatische Blockverfolgung](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [Automatische Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Vollständig typisiertes Ergebnis -> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native.
As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph. +> Hinweis: `graph-client` ist mit anderen beliebten GraphQL-Clients wie Apollo und URQL integriert, die mit Umgebungen wie React, Angular, Node.js und React Native kompatibel sind. Die Verwendung von `graph-client` bietet Ihnen daher eine verbesserte Erfahrung bei der Arbeit mit The Graph. -### Fetch Data with Graph Client +### Daten mit Graph Client abrufen -Let's look at how to fetch data from a subgraph with `graph-client`: +Schauen wir uns an, wie man mit `graph-client` Daten aus einem Subgraphen holt: #### Schritt 1 -Install The Graph Client CLI in your project: +Installieren Sie The Graph Client CLI in Ihrem Projekt: ```sh yarn add -D @graphprotocol/client-cli -# or, with NPM: +# oder, mit NPM: npm install --save-dev @graphprotocol/client-cli ``` #### Schritt 2 -Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file): +Definieren Sie Ihre Abfrage in einer `.graphql` Datei (oder inline in Ihrer `.js` oder `.ts` Datei): ```graphql query ExampleQuery { - # this one is coming from compound-v2 + # dieses kommt von compound-v2 markets(first: 7) { borrowRate cash collateralFactor } - # this one is coming from uniswap-v2 + # dieses kommt von uniswap-v2 pair(id: "0x00004ee988665cdda9a1080d5792cecd16dc1220") { id token0 { @@ -86,7 +87,7 @@ query ExampleQuery { #### Schritt 3 -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +Erstellen Sie eine Konfigurationsdatei (mit dem Namen `.graphclientrc.yml`) und verweisen Sie auf Ihre GraphQL-Endpunkte, die z.B.
von The Graph bereitgestellt werden: ```yaml # .graphclientrc.yml @@ -104,22 +105,22 @@ documents: - ./src/example-query.graphql ``` -#### Step 4 +#### Schritt 4 -Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: +Führen Sie den folgenden The Graph Client CLI-Befehl aus, um typisierten und gebrauchsfertigen JavaScript-Code zu erzeugen: ```sh -graphclient build +graphclient build ``` -#### Step 5 +#### Schritt 5 -Update your `.ts` file to use the generated typed GraphQL documents: +Aktualisieren Sie Ihre `.ts`-Datei, um die generierten typisierten GraphQL-Dokumente zu verwenden: ```tsx import React, { useEffect } from 'react' // ... -// we import types and typed-graphql document from the generated code (`..graphclient/`) +// wir importieren Typen und typisierte GraphQL-Dokumente aus dem generierten Code (`..graphclient/`) import { ExampleQueryDocument, ExampleQueryQuery, execute } from '../.graphclient' function App() { @@ -152,27 +153,27 @@ function App() { export default App ``` -> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +> **Wichtiger Hinweis:** `graph-client` ist perfekt mit anderen GraphQL-Clients wie Apollo client, URQL oder React Query integriert; Sie können [Beispiele im offiziellen Repository finden](https://github.com/graphprotocol/graph-client/tree/main/examples).
Wenn Sie sich jedoch für einen anderen Client entscheiden, bedenken Sie, dass **Sie nicht in der Lage sein werden, die kettenübergreifende Behandlung von Subgraphen oder die automatische Paginierung zu nutzen, die Kernfunktionen für die Abfrage von The Graph** sind. -### Apollo Client +### Apollo Client -[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android. +Der [Apollo-Client](https://www.apollographql.com/docs/) ist ein gängiger GraphQL-Client für Frontend-Ökosysteme. Er ist für React, Angular, Vue, Ember, iOS und Android verfügbar. -Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: +Obwohl er der schwerste Client ist, bietet er viele Funktionen, um fortgeschrittene UIs auf GraphQL aufzubauen: -- Advanced error handling +- Erweiterte Fehlerbehandlung - Pagination -- Data prefetching -- Optimistic UI -- Local state management +- Vorabruf von Daten +- Optimistische Benutzeroberfläche +- Lokale Zustandsverwaltung (Local State Management) -### Fetch Data with Apollo Client +### Daten mit Apollo Client abrufen -Let's look at how to fetch data from a subgraph with Apollo client: +Schauen wir uns an, wie man mit dem Apollo-Client Daten aus einem Subgraphen abruft: #### Schritt 1 -Install `@apollo/client` and `graphql`: +Installieren Sie `@apollo/client` und `graphql`: ```sh npm install @apollo/client graphql @@ -180,7 +181,7 @@ npm install @apollo/client graphql #### Schritt 2 -Query the API with the following code: +Fragen Sie die API mit dem folgenden Code ab: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -215,7 +216,7 @@ client #### Schritt 3 -To use variables, you can pass in a `variables` argument to the query: +Um Variablen zu verwenden, können Sie der Abfrage das Argument `variables` hinzufügen: ```javascript const tokensQuery = ` @@ -246,22 +247,22 @@ client }) ``` -### 
URQL Overview +### URQL-Übersicht -[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: +[URQL](https://formidable.com/open-source/urql/) ist in Node.js-, React/Preact-, Vue- und Svelte-Umgebungen verfügbar, mit einigen erweiterten Funktionen: -- Flexible cache system -- Extensible design (easing adding new capabilities on top of it) -- Lightweight bundle (~5x lighter than Apollo Client) -- Support for file uploads and offline mode +- Flexibles Cache-System +- Erweiterbares Design (einfaches Hinzufügen neuer Funktionen) +- Leichtes Bundle (~5x leichter als Apollo Client) +- Unterstützung für Datei-Uploads und Offline-Modus -### Fetch data with URQL +### Daten mit URQL abrufen -Let's look at how to fetch data from a subgraph with URQL: +Schauen wir uns an, wie man mit URQL Daten aus einem Subgraphen abruft: #### Schritt 1 -Install `urql` and `graphql`: +Installieren Sie `urql` und `graphql`: ```sh npm install urql graphql @@ -269,7 +270,7 @@ npm install urql graphql #### Schritt 2 -Query the API with the following code: +Fragen Sie die API mit dem folgenden Code ab: ```javascript import { createClient } from 'urql' diff --git a/website/src/pages/de/subgraphs/querying/graph-client/README.md b/website/src/pages/de/subgraphs/querying/graph-client/README.md index 416cadc13c6f..583c61e95bc4 100644 --- a/website/src/pages/de/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/de/subgraphs/querying/graph-client/README.md @@ -1,54 +1,54 @@ # The Graph Client Tools -This repo is the home for [The Graph](https://thegraph.com) consumer-side tools (for both browser and NodeJS environments). +Dieses Repo ist das Zuhause der verbraucherseitigen Tools von [The Graph](https://thegraph.com) (sowohl für Browser- als auch NodeJS-Umgebungen).
-## Background +## Hintergrund -The tools provided in this repo are intended to enrich and extend the DX, and add the additional layer required for dApps in order to implement distributed applications. +Die in diesem Repo bereitgestellten Tools sollen die DX bereichern und erweitern und die zusätzliche Schicht hinzufügen, die dApps benötigen, um verteilte Anwendungen zu implementieren. -Developers who consume data from [The Graph](https://thegraph.com) GraphQL API often need peripherals for making data consumption easier, and also tools that allow using multiple indexers at the same time. +Entwickler, die Daten über die GraphQL API von [The Graph](https://thegraph.com) konsumieren, benötigen oft zusätzliche Hilfsmittel, die den Datenkonsum vereinfachen, sowie Tools, die die gleichzeitige Verwendung mehrerer Indexer ermöglichen. -## Features and Goals +## Merkmale und Ziele -This library is intended to simplify the network aspect of data consumption for dApps. The tools provided within this repository are intended to run at build time, in order to make execution faster and performant at runtime. +Diese Bibliothek soll den Netzwerkaspekt des Datenverbrauchs für dApps vereinfachen. Die in diesem Repository bereitgestellten Tools sollen zur Build-Zeit ausgeführt werden, um die Ausführung zur Laufzeit schneller und leistungsfähiger zu machen. -> The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! +> Die in diesem Repo zur Verfügung gestellten Tools können als Standalone verwendet werden, aber Sie können sie auch mit jedem bestehenden GraphQL Client verwenden!
-| Status | Feature | Notes | -| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| Status | Merkmal | Anmerkungen | +| :----: | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| ✅ | Mehrere Indexer | basierend auf Abrufstrategien | +| ✅ | Abruf-Strategien | timeout, retry, fallback, race, highestValue | +| ✅ | Validierungen & Optimierungen zur Build-Zeit | | +| ✅ | Client-seitige Komposition | mit verbessertem Ausführungsplaner (basierend auf GraphQL-Mesh) | +| ✅ | Behandlung kettenübergreifender Subgraphen | 
Verwenden Sie ähnliche Subgraphen als eine einzige Quelle | +| ✅ | Unbearbeitete Ausführung (Standalone-Modus) | ohne einen umhüllenden GraphQL-Client | +| ✅ | Lokale (client-seitige) Mutationen | | +| ✅ | [Automatische Blockverfolgung](../packages/block-tracking/README.md) | Tracking von Blocknummern [wie hier beschrieben](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatische Paginierung](../packages/auto-pagination/README.md) | mehrere Anfragen in einem einzigen Aufruf, um mehr als das Indexierer-Limit abzurufen | +| ✅ | Integration mit `@apollo/client` | | +| ✅ | Integration mit `urql` | | +| ✅ | TypeScript-Unterstützung | mit eingebautem GraphQL Codegen und `TypedDocumentNode` | +| ✅ | [`@live`-Abfragen](./live.md) | Basierend auf Polling | -> You can find an [extended architecture design here](./architecture.md) +> Einen [erweiterten Architekturentwurf finden Sie hier](./architecture.md) -## Getting Started +## Erste Schritte -You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: +Sie können [Episode 45 von `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) verfolgen, um mehr über Graph Client zu erfahren: [![GraphQL.wtf Episode 45](https://img.youtube.com/vi/ZsRAmyUtvwg/0.jpg)](https://graphql.wtf/episodes/45-the-graph-client) -To get started, make sure to install [The Graph Client CLI] in your project: +Um loszulegen, stellen Sie sicher, dass Sie [The Graph Client CLI] in Ihrem Projekt installieren: ```sh yarn add -D @graphprotocol/client-cli -# or, with NPM: +# oder, mit NPM: npm install --save-dev @graphprotocol/client-cli ``` -> The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app!
+> Das CLI wird als Dev-Abhängigkeit installiert, da wir es verwenden, um optimierte Laufzeit-Artefakte zu erzeugen, die direkt aus Ihrer Anwendung geladen werden können! -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +Erstellen Sie eine Konfigurationsdatei (mit dem Namen `.graphclientrc.yml`) und verweisen Sie auf Ihre GraphQL-Endpunkte, die z.B. von The Graph bereitgestellt werden: ```yml # .graphclientrc.yml @@ -59,28 +59,28 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 ``` -Now, create a runtime artifact by running The Graph Client CLI: +Erstellen Sie nun ein Laufzeit-Artefakt, indem Sie The Graph Client CLI ausführen: ```sh -graphclient build +graphclient build ``` -> Note: you need to run this with `yarn` prefix, or add that as a script in your `package.json`. +> Hinweis: Sie müssen dies mit dem Präfix `yarn` ausführen oder es als Skript in Ihrer `package.json` hinzufügen. -This should produce a ready-to-use standalone `execute` function, that you can use for running your application GraphQL operations, you should have an output similar to the following: +Dies sollte eine einsatzbereite eigenständige Funktion `execute` erzeugen, die Sie für die Ausführung Ihrer GraphQL-Operationen verwenden können. Sie sollten eine Ausgabe ähnlich der folgenden erhalten: ```sh -GraphClient: Cleaning existing artifacts -GraphClient: Reading the configuration -🕸️: Generating the unified schema -🕸️: Generating artifacts -🕸️: Generating index file in TypeScript -🕸️: Writing index.ts for ESM to the disk. -🕸️: Cleanup -🕸️: Done! => .graphclient +GraphClient: Bereinigung vorhandener Artefakte +GraphClient: Einlesen der Konfiguration +🕸️: Erzeugen des einheitlichen Schemas +🕸️: Erzeugen von Artefakten +🕸️: Erzeugen der Indexdatei in TypeScript +🕸️: Schreiben der index.ts für ESM auf die Festplatte. +🕸️: Aufräumen +🕸️: Erledigt!
=> .graphclient ``` -Now, the `.graphclient` artifact is generated for you, and you can import it directly from your code, and run your queries: +Nun wird das Artefakt `.graphclient` für Sie generiert, und Sie können es direkt aus Ihrem Code importieren und Ihre Abfragen ausführen: ```ts import { execute } from '../.graphclient' @@ -111,54 +111,54 @@ async function main() { main() ``` -### Using Vanilla JavaScript Instead of TypeScript +### Vanilla JavaScript anstelle von TypeScript verwenden -GraphClient CLI generates the client artifacts as TypeScript files by default, but you can configure CLI to generate JavaScript and JSON files together with additional TypeScript definition files by using `--fileType js` or `--fileType json`. +GraphClient CLI generiert die Client-Artefakte standardmäßig als TypeScript-Dateien, aber Sie können CLI so konfigurieren, dass JavaScript- und JSON-Dateien zusammen mit zusätzlichen TypeScript-Definitionsdateien generiert werden, indem Sie `--fileType js` oder `--fileType json` verwenden. -`js` flag generates all files as JavaScript files with ESM Syntax and `json` flag generates source artifacts as JSON files while entrypoint JavaScript file with old CommonJS syntax because only CommonJS supports JSON files as modules. +Das `js`-Flag generiert alle Dateien als JavaScript-Dateien mit ESM-Syntax und das `json`-Flag generiert Quellartefakte als JSON-Dateien, während der Einstiegspunkt JavaScript-Dateien mit der alten CommonJS-Syntax erzeugt, da nur CommonJS JSON-Dateien als Module unterstützt. -Unless you use CommonJS(`require`) specifically, we'd recommend you to use `js` flag. +Wenn Sie nicht gerade CommonJS(`require`) verwenden, empfehlen wir Ihnen, das `js`-Flag zu verwenden. 
`graphclient --fileType js` -- [An example for JavaScript usage in CommonJS syntax with JSON files](../examples/javascript-cjs) -- [An example for JavaScript usage in ESM syntax](../examples/javascript-esm) +- [Ein Beispiel für die Verwendung von JavaScript in CommonJS-Syntax mit JSON-Dateien](../examples/javascript-cjs) +- [Ein Beispiel für die Verwendung von JavaScript in der ESM-Syntax](../examples/javascript-esm) -#### The Graph Client DevTools +#### The Graph Client DevTools -The Graph Client CLI comes with a built-in GraphiQL, so you can experiment with queries in real-time. +The Graph Client CLI verfügt über ein eingebautes GraphiQL, so dass Sie mit Abfragen in Echtzeit experimentieren können. -The GraphQL schema served in that environment, is the eventual schema based on all composed Subgraphs and transformations you applied. +Das GraphQL-Schema, das in dieser Umgebung bereitgestellt wird, ist das letztendliche Schema, das auf allen zusammengesetzten Subgraphen und Transformationen basiert, die Sie angewendet haben. -To start the DevTool GraphiQL, run the following command: +Um das DevTool GraphiQL zu starten, führen Sie den folgenden Befehl aus: ```sh graphclient serve-dev ``` -And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 +Und öffnen Sie http://localhost:4000/, um GraphiQL zu verwenden. Sie können nun mit Ihrem Graph-Client-seitigen GraphQL-Schema lokal experimentieren!
🥳 -#### Examples +#### Beispiele -You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: +Weitere fortgeschrittene Beispiele und Integrationsbeispiele finden Sie auch im [Beispielverzeichnis in diesem Repo](../examples): - [TypeScript & React example with raw `execute` and built-in GraphQL-Codegen](../examples/execute) -- [TS/JS NodeJS standalone mode](../examples/node) -- [Client-Side GraphQL Composition](../examples/composition) -- [Integration with Urql and React](../examples/urql) -- [Integration with NextJS and TypeScript](../examples/nextjs) -- [Integration with Apollo-Client and React](../examples/apollo) +- [TS/JS NodeJS Standalone-Modus](../examples/node) +- [Client-seitige GraphQL-Komposition](../examples/composition) +- [Integration mit Urql und React](../examples/urql) +- [Integration mit NextJS und TypeScript](../examples/nextjs) +- [Integration mit Apollo-Client und React](../examples/apollo) - [Integration with React-Query](../examples/react-query) -- _Cross-chain merging (same Subgraph, different chains)_ -- - [Parallel SDK calls](../examples/cross-chain-sdk) -- - [Parallel internal calls with schema extensions](../examples/cross-chain-extension) -- [Customize execution with Transforms (auto-pagination and auto-block-tracking)](../examples/transforms) +- _Kettenübergreifende Zusammenführung (gleicher Subgraph, unterschiedliche Ketten)_ +- - [Parallele SDK-Aufrufe](../examples/cross-chain-sdk) +- - [Parallele interne Aufrufe mit Schemaerweiterungen](../examples/cross-chain-extension) +- [Ausführung mit Transforms anpassen (Auto-Pagination und Auto-Block-Tracking)](../examples/transforms) -### Advanced Examples/Features +### Erweiterte Beispiele/Funktionen -#### Customize Network Calls +#### Netzwerkaufrufe anpassen -You can customize the network execution (for example, to add authentication headers) by using `operationHeaders`: +Sie können die Netzwerkausführung
anpassen (z. B. um Authentifizierungs-Header hinzuzufügen), indem Sie `operationHeaders` verwenden: ```yaml sources: @@ -170,19 +170,19 @@ sources: Authorization: Bearer MY_TOKEN ``` -You can also use runtime variables if you wish, and specify it in a declarative way: +Sie können auch Laufzeitvariablen verwenden, wenn Sie dies wünschen, und sie deklarativ angeben: ```yaml -sources: - - name: uniswapv2 - handler: +sources: + - name: uniswapv2 + handler: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 operationHeaders: Authorization: Bearer {context.config.apiToken} ``` -Then, you can specify that when you execute operations: +Dann können Sie dies bei der Ausführung von Vorgängen angeben: ```ts execute(myQuery, myVariables, { @@ -192,11 +192,11 @@ execute(myQuery, myVariables, { }) ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> Sie finden die [vollständige Dokumentation für den `graphql`-Handler hier](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). -#### Environment Variables Interpolation +#### Interpolation von Umgebungsvariablen -If you wish to use environment variables in your Graph Client configuration file, you can use interpolation with `env` helper: +Wenn Sie Umgebungsvariablen in Ihrer Graph-Client-Konfigurationsdatei verwenden möchten, können Sie die Interpolation mit dem `env`-Helper nutzen: ```yaml sources: @@ -208,9 +208,9 @@ sources: Authorization: Bearer {env.MY_API_TOKEN} # runtime ``` -Then, make sure to have `MY_API_TOKEN` defined when you run `process.env` at runtime. +Stellen Sie dann sicher, dass `MY_API_TOKEN` zur Laufzeit in `process.env` definiert ist.
-You can also specify environment variables to be filled at build time (during `graphclient build` run) by using the env-var name directly: +Sie können auch Umgebungsvariablen angeben, die zur Erstellungszeit (während der Ausführung von `graphclient build`) gefüllt werden sollen, indem Sie den Namen der Umgebungsvariablen direkt verwenden: ```yaml sources: @@ -222,20 +222,19 @@ sources: Authorization: Bearer ${MY_API_TOKEN} # build time ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> Sie finden die [vollständige Dokumentation für den `graphql`-Handler hier](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). -#### Fetch Strategies and Multiple Graph Indexers +#### Abrufstrategien und mehrere Graph-Indexer -It's a common practice to use more than one indexer in dApps, so to achieve the ideal experience with The Graph, you can specify several `fetch` strategies in order to make it more smooth and simple. +Es ist eine gängige Praxis, mehr als einen Indexer in dApps zu verwenden. Um die ideale Erfahrung mit The Graph zu erreichen, können Sie mehrere `fetch`-Strategien angeben, um den Vorgang reibungsloser und einfacher zu gestalten. -All `fetch` strategies can be combined to create the ultimate execution flow. +Alle `fetch`-Strategien können kombiniert werden, um den ultimativen Ausführungsfluss zu schaffen. -
- `retry` +
`retry` -The `retry` mechanism allow you to specify the retry attempts for a single GraphQL endpoint/source. +Mit dem Mechanismus `retry` können Sie die Wiederholungsversuche für einen einzelnen GraphQL-Endpunkt/eine einzelne Quelle festlegen. -The retry flow will execute in both conditions: a netword error, or due to a runtime error (indexing issue/inavailability of the indexer). +Die Wiederholung wird in beiden Fällen ausgeführt: bei einem Netzwerkfehler oder aufgrund eines Laufzeitfehlers (Indizierungsproblem/Nichtverfügbarkeit des Indexers). ```yaml sources: @@ -248,10 +247,9 @@ sources:
-
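+Konzeptionell lässt sich das `retry`-Verhalten als einfache Schleife skizzieren. Dies ist eine vereinfachte Skizze und nicht die tatsächliche Implementierung von Graph Client; der Name `executeWithRetry` ist frei gewählt:

```typescript
// Vereinfachte Skizze: Ein Aufruf wird bei einem Fehler (Netzwerk- oder
// Laufzeitfehler) bis zu `attempts`-mal wiederholt.
function executeWithRetry<T>(operation: () => T, attempts: number): T {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return operation()
    } catch (e) {
      lastError = e // Versuch fehlgeschlagen, nächster Durchlauf
    }
  }
  throw lastError
}

// Demo: Die Operation schlägt zweimal fehl und liefert beim dritten Versuch ein Ergebnis.
let calls = 0
const result = executeWithRetry(() => {
  calls += 1
  if (calls < 3) throw new Error('network error')
  return { data: { pairs: [] as unknown[] } }
}, 5)
```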
- `timeout` +
`timeout` -The `timeout` mechanism allow you to specify the `timeout` for a given GraphQL endpoint. +Der `timeout`-Mechanismus ermöglicht es Ihnen, das `timeout` für einen bestimmten GraphQL-Endpunkt anzugeben. ```yaml sources: @@ -264,12 +262,11 @@ sources:
-
- `fallback` +
`fallback` -The `fallback` mechanism allow you to specify use more than one GraphQL endpoint, for the same source. +Der `fallback`-Mechanismus ermöglicht es Ihnen, mehr als einen GraphQL-Endpunkt für dieselbe Quelle zu verwenden. -This is useful if you want to use more than one indexer for the same Subgraph, and fallback when an error/timeout happens. You can also use this strategy in order to use a custom indexer, but allow it to fallback to [The Graph Hosted Service](https://thegraph.com/hosted-service). +Dies ist nützlich, wenn Sie mehr als einen Indexer für denselben Subgraphen verwenden und bei einem Fehler/Timeout auf eine Alternative zurückgreifen möchten. Sie können diese Strategie auch verwenden, um einen benutzerdefinierten Indexer zu nutzen, der jedoch auf [The Graph Hosted Service](https://thegraph.com/hosted-service) zurückgreifen kann. ```yaml sources: @@ -286,12 +283,11 @@ sources:
-
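+Konzeptionell probiert `fallback` die konfigurierten Endpunkte der Reihe nach aus. Die folgende Skizze ist stark vereinfacht; `executeWithFallback` ist ein frei gewählter Name und nicht Teil der Graph-Client-API:

```typescript
// Vereinfachte Skizze: Endpunkte werden der Reihe nach versucht,
// der erste erfolgreiche Aufruf gewinnt.
function executeWithFallback<T>(endpoints: Array<() => T>): T {
  let lastError: unknown
  for (const endpoint of endpoints) {
    try {
      return endpoint()
    } catch (e) {
      lastError = e // dieser Endpunkt ist fehlgeschlagen, nächsten versuchen
    }
  }
  throw lastError
}

// Demo: Der erste "Indexer" ist nicht erreichbar, der zweite antwortet.
const value = executeWithFallback<string>([
  () => {
    throw new Error('Indexer A nicht erreichbar')
  },
  () => 'Antwort von Indexer B',
])
```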
- `race` +
`race` -The `race` mechanism allow you to specify use more than one GraphQL endpoint, for the same source, and race on every execution. +Der `race`-Mechanismus ermöglicht es Ihnen, mehr als einen GraphQL-Endpunkt für dieselbe Quelle zu verwenden und bei jeder Ausführung ein Race durchzuführen. -This is useful if you want to use more than one indexer for the same Subgraph, and allow both sources to race and get the fastest response from all specified indexers. +Dies ist nützlich, wenn Sie mehr als einen Indexer für denselben Subgraphen verwenden möchten und beide Quellen gegeneinander antreten lassen wollen, um die schnellste Antwort von allen angegebenen Indexern zu erhalten. ```yaml sources: @@ -306,12 +302,11 @@ sources:
-
- `highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. +
`highestValue` -This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. +Diese Strategie ermöglicht es Ihnen, parallele Anfragen an verschiedene Endpunkte für dieselbe Quelle zu senden und die aktuellste Antwort auszuwählen. + +Dies ist nützlich, wenn Sie für denselben Subgraphen die am besten synchronisierten Daten über verschiedene Indexer/Quellen auswählen möchten.
-#### Block Tracking +#### Blockverfolgung -The Graph Client can track block numbers and do the following queries by following [this pattern](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) with `blockTracking` transform; +The Graph Client kann Blocknummern verfolgen und die folgenden Abfragen durchführen, indem er [diesem Muster](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) mit der Transformation `blockTracking` folgt: ```yaml sources: @@ -361,57 +356,57 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 transforms: - blockTracking: - # You might want to disable schema validation for faster startup - validateSchema: true - # Ignore the fields that you don't want to be tracked + # Sie möchten vielleicht die Schema-Validierung für einen schnelleren Start deaktivieren + validateSchema: true + # Ignorieren Sie die Felder, die nicht verfolgt werden sollen ignoreFieldNames: [users, prices] - # Exclude the operation with the following names + # Schließen Sie die Operationen mit den folgenden Namen aus ignoreOperationNames: [NotFollowed] ``` -[You can try a working example here](../examples/transforms) +[Hier können Sie ein funktionierendes Beispiel ausprobieren](../examples/transforms) -#### Automatic Pagination +#### Automatische Paginierung -With most subgraphs, the number of records you can fetch is limited. In this case, you have to send multiple requests with pagination. +Bei den meisten Subgraphen ist die Anzahl der Datensätze, die Sie abrufen können, begrenzt. In diesem Fall müssen Sie mehrere Anfragen mit Paginierung senden.
```graphql query { - # Will throw an error if the limit is 1000 - users(first: 2000) { - id - name - } + # Wirft einen Fehler, wenn das Limit 1000 ist + users(first: 2000) { + id + name + } } ``` -So you have to send the following operations one after the other: +Sie müssen also die folgenden Vorgänge nacheinander senden: ```graphql query { - # Will throw an error if the limit is 1000 - users(first: 1000) { - id - name - } + # Wirft einen Fehler, wenn das Limit 1000 ist + users(first: 1000) { + id + name + } } ``` -Then after the first response: +Dann nach der ersten Antwort: ```graphql query { - # Will throw an error if the limit is 1000 - users(first: 1000, skip: 1000) { - id - name - } + # Wirft einen Fehler, wenn die Grenze bei 1000 liegt + users(first: 1000, skip: 1000) { + id + name + } } ``` -After the second response, you have to merge the results manually. But instead The Graph Client allows you to do the first one and automatically does those multiple requests for you under the hood. +Nach der zweiten Antwort müssen Sie die Ergebnisse manuell zusammenführen. The Graph Client erlaubt Ihnen jedoch, die erste Anfrage zu stellen, und führt diese mehreren Anfragen automatisch für Sie durch. 
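+Sinngemäß entspricht das der folgenden Schleife. Dies ist nur eine vereinfachte Skizze dessen, was `autoPagination` unter der Haube tut; `fetchPage` ist ein frei gewählter Platzhalter für den Abruf einer einzelnen Seite:

```typescript
// Vereinfachte Skizze: Datensätze werden in Schritten von maximal 1000
// (dem angenommenen Indexer-Limit) abgerufen und zusammengeführt.
const LIMIT = 1000

function fetchAll<T>(fetchPage: (first: number, skip: number) => T[], total: number): T[] {
  const all: T[] = []
  let skip = 0
  while (all.length < total) {
    const first = Math.min(LIMIT, total - all.length)
    const page = fetchPage(first, skip)
    all.push(...page)
    if (page.length < first) break // keine weiteren Datensätze vorhanden
    skip += page.length
  }
  return all
}

// Demo mit einer simulierten Quelle von 2500 Datensätzen
const records = Array.from({ length: 2500 }, (_, i) => ({ id: i }))
const users = fetchAll((first, skip) => records.slice(skip, skip + first), 2500)
```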
-All you have to do is: +Alles, was Sie tun müssen, ist: ```yaml sources: @@ -421,21 +416,21 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 transforms: - autoPagination: - # You might want to disable schema validation for faster startup + # Sie möchten vielleicht die Schema-Validierung für einen schnelleren Start deaktivieren validateSchema: true ``` -[You can try a working example here](../examples/transforms) +[Hier können Sie ein funktionierendes Beispiel ausprobieren](../examples/transforms) -#### Client-side Composition +#### Client-seitige Komposition -The Graph Client has built-in support for client-side GraphQL Composition (powered by [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). +The Graph Client verfügt über integrierte Unterstützung für clientseitige GraphQL Composition (unterstützt durch [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). -You can leverage this feature in order to create a single GraphQL layer from multiple Subgraphs, deployed on multiple indexers. +Sie können diese Funktion nutzen, um eine einzige GraphQL-Schicht aus mehreren Subgraphen zu erstellen, die auf mehreren Indexern bereitgestellt werden. -> 💡 Tip: You can compose any GraphQL sources, and not only Subgraphs! +> 💡 Tipp: Sie können beliebige GraphQL-Quellen zusammenstellen, nicht nur Subgraphen!
-Trivial composition can be done by adding more than one GraphQL source to your `.graphclientrc.yml` file, here's an example: +Triviale Komposition kann durch Hinzufügen von mehr als einer GraphQL-Quelle zu Ihrer `.graphclientrc.yml`-Datei erfolgen, hier ein Beispiel: ```yaml sources: @@ -449,7 +444,7 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/graphprotocol/compound-v2 ``` -As long as there a no conflicts across the composed schemas, you can compose it, and then run a single query to both Subgraphs: +Solange es keine Konflikte zwischen den zusammengestellten Schemata gibt, können Sie sie zusammenstellen und dann eine einzige Abfrage für beide Subgraphen ausführen: ```graphql query myQuery { @@ -457,7 +452,7 @@ query myQuery { markets(first: 7) { borrowRate } - # this one is coming from uniswap-v2 + # dieser kommt von uniswap-v2 pair(id: "0x00004ee988665cdda9a1080d5792cecd16dc1220") { id token0 { @@ -470,71 +465,71 @@ query myQuery { } ``` -You can also resolve conflicts, rename parts of the schema, add custom GraphQL fields, and modify the entire execution phase. +Sie können auch Konflikte beheben, Teile des Schemas umbenennen, benutzerdefinierte GraphQL-Felder hinzufügen und die gesamte Ausführungsphase ändern. 
-For advanced use-cases with composition, please refer to the following resources: +Für fortgeschrittene Anwendungsfälle mit Komposition lesen Sie bitte die folgenden Ressourcen: -- [Advanced Composition Example](../examples/composition) -- [GraphQL-Mesh Schema transformations](https://graphql-mesh.com/docs/transforms/transforms-introduction) -- [GraphQL-Tools Schema-Stitching documentation](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas) +- [Fortgeschrittenes Kompositionsbeispiel](../examples/composition) +- [GraphQL-Mesh Schema-Transformationen](https://graphql-mesh.com/docs/transforms/transforms-introduction) +- [GraphQL-Tools Schema-Stitching Dokumentation](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas) -#### TypeScript Support +#### TypeScript-Unterstützung -If your project is written in TypeScript, you can leverage the power of [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) and have a fully-typed GraphQL client experience. +Wenn Ihr Projekt in TypeScript geschrieben ist, können Sie die Leistung von [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) nutzen und eine vollständig typisierte GraphQL-Client-Erfahrung haben. -The standalone mode of The GraphQL, and popular GraphQL client libraries like Apollo-Client and urql has built-in support for `TypedDocumentNode`! +Der Standalone-Modus von The Graph Client und populäre GraphQL-Client-Bibliotheken wie Apollo-Client und urql haben integrierte Unterstützung für `TypedDocumentNode`! -The Graph Client CLI comes with a ready-to-use configuration for [GraphQL Code Generator](https://graphql-code-generator.com), and it can generate `TypedDocumentNode` based on your GraphQL operations. +The Graph Client CLI wird mit einer gebrauchsfertigen Konfiguration für den [GraphQL Code Generator](https://graphql-code-generator.com) geliefert und kann `TypedDocumentNode` basierend auf Ihren GraphQL-Operationen erzeugen.
-To get started, define your GraphQL operations in your application code, and point to those files using the `documents` section of `.graphclientrc.yml`: +Um loszulegen, definieren Sie Ihre GraphQL-Operationen in Ihrem Anwendungscode und verweisen auf diese Dateien mit dem Abschnitt `documents` in `.graphclientrc.yml`: ```yaml sources: - - # ... your Subgraphs/GQL sources here + - # ... Ihre Subgraphs/GQL-Quellen hier documents: - ./src/example-query.graphql ``` -You can also use Glob expressions, or even point to code files, and the CLI will find your GraphQL queries automatically: +Sie können auch Glob-Ausdrücke verwenden oder sogar auf Codedateien verweisen, und die CLI wird Ihre GraphQL-Abfragen automatisch finden: ```yaml documents: - './src/**/*.graphql' - - './src/**/*.{ts,tsx,js,jsx}' + - './src/**/*.{ts,tsx,js,jsx}' ``` -Now, run the GraphQL CLI `build` command again, the CLI will generate a `TypedDocumentNode` object under `.graphclient` for every operation found. +Führen Sie nun den GraphQL-CLI-Befehl `build` erneut aus. Die CLI wird für jede gefundene Operation ein `TypedDocumentNode`-Objekt unter `.graphclient` erzeugen. -> Make sure to name your GraphQL operations, otherwise it will be ignored! +> Stellen Sie sicher, dass Sie Ihre GraphQL-Operationen benennen, sonst werden sie ignoriert! -For example, a query called `query ExampleQuery` will have the corresponding `ExampleQueryDocument` generated in `.graphclient`. You can now import it and use that for your GraphQL calls, and you'll have a fully typed experience without writing or specifying any TypeScript manually: +Zum Beispiel wird für eine Abfrage mit dem Namen `query ExampleQuery` das entsprechende `ExampleQueryDocument` in `.graphclient` generiert. Sie können es nun importieren und für Ihre GraphQL-Aufrufe verwenden. 
So haben Sie eine vollständig typisierte Erfahrung, ohne TypeScript manuell schreiben oder angeben zu müssen: ```ts import { ExampleQueryDocument, execute } from '../.graphclient' async function main() { - // "result" variable is fully typed, and represents the exact structure of the fields you selected in your query. - const result = await execute(ExampleQueryDocument, {}) - console.log(result) + // Die Variable "result" ist vollständig typisiert und repräsentiert die genaue Struktur der Felder, die Sie in Ihrer Abfrage ausgewählt haben. + const result = await execute(ExampleQueryDocument, {}) + console.log(result) } ``` -> You can find a [TypeScript project example here](../examples/urql). +> Sie können ein [TypeScript-Projektbeispiel hier](../examples/urql) finden. -#### Client-Side Mutations +#### Client-seitige Mutationen -Due to the nature of Graph-Client setup, it is possible to add client-side schema, that you can later bridge to run any arbitrary code. +Aufgrund der Natur des Graph-Client-Setups ist es möglich, clientseitige Schemata hinzuzufügen, die Sie später überbrücken können, um beliebigen Code auszuführen. -This is helpful since you can implement custom code as part of your GraphQL schema, and have it as unified application schema that is easier to track and develop. +Dies ist hilfreich, da Sie benutzerdefinierten Code als Teil Ihres GraphQL-Schemas implementieren können und es als einheitliches Anwendungsschema haben, das einfacher zu verfolgen und zu entwickeln ist. -> This document explains how to add custom mutations, but in fact you can add any GraphQL operation (query/mutation/subscriptions). See [Extending the unified schema article](https://graphql-mesh.com/docs/guides/extending-unified-schema) for more information about this feature. +> Dieses Dokument erklärt, wie man benutzerdefinierte Mutationen hinzufügt, aber eigentlich kann man jede GraphQL-Operation (Abfrage/Mutation/Abonnements) hinzufügen. 
Weitere Informationen zu dieser Funktion finden Sie im Artikel [Erweiterung des einheitlichen Schemas](https://graphql-mesh.com/docs/guides/extending-unified-schema). -To get started, define a `additionalTypeDefs` section in your config file: +Um zu beginnen, definieren Sie einen Abschnitt `additionalTypeDefs` in Ihrer Konfigurationsdatei: ```yaml additionalTypeDefs: | - # We should define the missing `Mutation` type + # Wir sollten den fehlenden Typ `Mutation` definieren extend schema { mutation: Mutation } @@ -548,14 +543,14 @@ additionalTypeDefs: | } ``` -Then, add a pointer to a custom GraphQL resolvers file: +Fügen Sie dann einen Verweis auf eine benutzerdefinierte GraphQL-Resolver-Datei hinzu: ```yaml additionalResolvers: - './resolvers' ``` -Now, create `resolver.js` (or, `resolvers.ts`) in your project, and implement your custom mutation: +Erstellen Sie nun `resolver.js` (oder `resolvers.ts`) in Ihrem Projekt, und implementieren Sie Ihre benutzerdefinierte Mutation: ```js module.exports = { @@ -570,7 +565,7 @@ module.exports = { } ``` -If you are using TypeScript, you can also get fully type-safe signature by doing: +Wenn Sie TypeScript verwenden, können Sie auch eine vollständig typsichere Signatur erhalten, indem Sie dies tun: ```ts import { Resolvers } from './.graphclient' @@ -590,7 +585,7 @@ const resolvers: Resolvers = { export default resolvers ``` -If you need to inject runtime variables into your GraphQL execution `context`, you can use the following snippet: +Wenn Sie Laufzeitvariablen in Ihren GraphQL-Ausführungs-`context` einfügen müssen, können Sie das folgende Snippet verwenden: ```ts execute( @@ -602,10 +597,10 @@ execute( ) ``` -> [You can read more about client-side schema extensions here](https://graphql-mesh.com/docs/guides/extending-unified-schema) +> [Mehr über clientseitige Schemaerweiterungen erfahren Sie hier](https://graphql-mesh.com/docs/guides/extending-unified-schema) -> [You can also delegate and call Query fields as part of your
mutation](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources) +> [Sie können auch Abfragefelder als Teil Ihrer Mutation delegieren und aufrufen](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources) -## License +## Lizenz -Released under the [MIT license](../LICENSE). +Veröffentlicht unter der [MIT-Lizenz](../LICENSE). diff --git a/website/src/pages/de/subgraphs/querying/graph-client/architecture.md b/website/src/pages/de/subgraphs/querying/graph-client/architecture.md index 99098cd77b95..60f45c85bb36 100644 --- a/website/src/pages/de/subgraphs/querying/graph-client/architecture.md +++ b/website/src/pages/de/subgraphs/querying/graph-client/architecture.md @@ -1,13 +1,13 @@ -# The Graph Client Architecture +# The Graph-Client-Architektur -To address the need to support a distributed network, we plan to take several actions to ensure The Graph client provides everything app needs: +Um der Notwendigkeit der Unterstützung eines verteilten Netzwerks gerecht zu werden, planen wir mehrere Maßnahmen, um sicherzustellen, dass der Graph-Client alles bietet, was eine App braucht: -1. Compose multiple Subgraphs (on the client-side) -2. Fallback to multiple indexers/sources/hosted services -3. Automatic/Manual source picking strategy -4. Agnostic core, with the ability to run integrate with any GraphQL client +1. Mehrere Subgraphen zusammenstellen (auf der Client-Seite) +2. Fallback auf mehrere Indexer/Quellen/gehostete Dienste +3. Automatische/manuelle Strategie zur Quellenauswahl +4.
Agnostischer Kern, der sich in jeden GraphQL-Client integrieren lässt -## Standalone mode +## Standalone-Modus ```mermaid graph LR; @@ -17,7 +17,7 @@ graph LR; op-->sB[Subgraph B]; ``` -## With any GraphQL client +## Mit jedem GraphQL-Client ```mermaid graph LR; @@ -28,11 +28,11 @@ graph LR; op-->sB[Subgraph B]; ``` -## Subgraph Composition +## Subgraphen-Zusammensetzung -To allow simple and efficient client-side composition, we'll use [`graphql-tools`](https://graphql-tools.com) to create a remote schema / Executor, then can be hooked into the GraphQL client. +Um eine einfache und effiziente client-seitige Komposition zu ermöglichen, werden wir [`graphql-tools`](https://graphql-tools.com) verwenden, um ein entferntes Schema / einen Executor zu erstellen, das dann in den GraphQL-Client eingehängt werden kann. -API could be either raw `graphql-tools` transformers, or using [GraphQL-Mesh declarative API](https://graphql-mesh.com/docs/transforms/transforms-introduction) for composing the schema. +Als API kommen entweder rohe `graphql-tools`-Transformatoren oder die [deklarative GraphQL-Mesh-API](https://graphql-mesh.com/docs/transforms/transforms-introduction) für die Zusammenstellung des Schemas infrage. ```mermaid graph LR; @@ -42,9 +42,9 @@ graph LR; m-->s3[Subgraph C GraphQL schema]; ``` -## Subgraph Execution Strategies +## Strategien für die Ausführung von Subgraphen -Within every Subgraph defined as source, there will be a way to define it's source(s) indexer and the querying strategy, here are a few options: +Für jeden Subgraphen, der als Quelle definiert ist, gibt es eine Möglichkeit, seine(n) Quell-Indexer und die Abfragestrategie zu definieren, hier einige Optionen: ```mermaid graph LR; @@ -85,9 +85,9 @@ graph LR; end ``` -> We can ship a several built-in strategies, along with a simple interfaces to allow developers to write their own.
+> Wir können mehrere eingebaute Strategien liefern, zusammen mit einfachen Schnittstellen, die es Entwicklern ermöglichen, ihre eigenen zu schreiben. -To take the concept of strategies to the extreme, we can even build a magical layer that does subscription-as-query, with any hook, and provide a smooth DX for dapps: +Um das Konzept der Strategien auf die Spitze zu treiben, können wir sogar eine magische Schicht aufbauen, die Abonnement-als-Abfrage mit einem beliebigen Hook durchführt und einen reibungslosen DX für Dapps bietet: ```mermaid graph LR; @@ -99,5 +99,5 @@ graph LR; sc[Smart Contract]-->|change event|op; ``` -With this mechanism, developers can write and execute GraphQL `subscription`, but under the hood we'll execute a GraphQL `query` to The Graph indexers, and allow to connect any external hook/probe for re-running the operation. -This way, we can watch for changes on the Smart Contract itself, and the GraphQL client will fill the gap on the need to real-time changes from The Graph. +Mit diesem Mechanismus können Entwickler GraphQL-`subscription`-Operationen schreiben und ausführen, aber unter der Haube führen wir eine GraphQL-`query` an die Indexer von The Graph aus und ermöglichen den Anschluss eines externen Hooks/einer externen Probe zur erneuten Ausführung der Operation. +Auf diese Weise können wir auf Änderungen am Smart Contract selbst achten, und der GraphQL-Client füllt die Lücke, wenn Echtzeitänderungen von The Graph erforderlich sind.
+Graph-Client implementiert eine benutzerdefinierte `@live`-Direktive, mit der jede GraphQL-Abfrage mit Echtzeitdaten arbeiten kann. -## Getting Started +## Erste Schritte -Start by adding the following configuration to your `.graphclientrc.yml` file: +Beginnen Sie, indem Sie die folgende Konfiguration zu Ihrer `.graphclientrc.yml`-Datei hinzufügen: ```yaml plugins: @@ -12,9 +12,9 @@ plugins: defaultInterval: 1000 ``` -## Usage +## Verwendung -Set the default update interval you wish to use, and then you can apply the following GraphQL `@directive` over your GraphQL queries: +Legen Sie das standardmäßige Aktualisierungsintervall fest, das Sie verwenden möchten, und wenden Sie dann die folgende GraphQL-`@directive` auf Ihre GraphQL-Abfragen an: ```graphql query ExampleQuery @live { @@ -26,7 +26,7 @@ query ExampleQuery @live { } ``` -Or, you can specify a per-query interval: +Sie können auch ein Intervall pro Abfrage festlegen: ```graphql query ExampleQuery @live(interval: 5000) { @@ -36,8 +36,8 @@ query ExampleQuery @live(interval: 5000) { } ``` -## Integrations +## Integrationen -Since the entire network layer (along with the `@live` mechanism) is implemented inside `graph-client` core, you can use Live queries with every GraphQL client (such as Urql or Apollo-Client), as long as it supports streame responses (`AsyncIterable`). +Da die gesamte Netzwerkschicht (zusammen mit dem `@live`-Mechanismus) innerhalb des `graph-client`-Kerns implementiert ist, können Sie Live-Abfragen mit jedem GraphQL-Client (wie z. B. Urql oder Apollo-Client) verwenden, solange dieser gestreamte Antworten (`AsyncIterable`) unterstützt. -No additional setup is required for GraphQL clients cache updates. +Für die Cache-Aktualisierung von GraphQL-Clients ist keine zusätzliche Einrichtung erforderlich.
diff --git a/website/src/pages/de/subgraphs/querying/graphql-api.mdx b/website/src/pages/de/subgraphs/querying/graphql-api.mdx index e6636e20a53e..effc56357802 100644 --- a/website/src/pages/de/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/de/subgraphs/querying/graphql-api.mdx @@ -2,23 +2,23 @@ title: GraphQL-API --- -Learn about the GraphQL Query API used in The Graph. +Erfahren Sie mehr über die GraphQL Query API, die in The Graph verwendet wird. -## What is GraphQL? +## Was ist GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) ist eine Abfragesprache für APIs und eine Laufzeitumgebung für die Ausführung dieser Abfragen mit Ihren vorhandenen Daten. The Graph verwendet GraphQL zur Abfrage von Subgraphen. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +Um die größere Rolle, die GraphQL spielt, zu verstehen, lesen Sie [Entwickeln](/subgraphs/developing/introduction/) und [Erstellen eines Subgraphen](/developing/creating-a-subgraph/). -## Queries with GraphQL +## Abfragen mit GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In Ihrem Subgraph-Schema definieren Sie Typen namens `Entities`. Für jeden `Entity`-Typ werden `entity`- und `entities`-Felder auf der obersten Ebene des `Query`-Typs erzeugt. -> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. +> Hinweis: Bei der Verwendung von The Graph muss `query` nicht am Anfang der `graphql`-Abfrage stehen.
### Beispiele -Query for a single `Token` entity defined in your schema: +Abfrage nach einer einzelnen, in Ihrem Schema definierten Entität `Token`: ```graphql { @@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> Note: When querying for a single entity, the `id` field is required, and it must be written as a string. +> Hinweis: Bei der Abfrage einer einzelnen Entität ist das Feld `id` erforderlich und muss als String geschrieben werden. -Query all `Token` entities: +Abfrage aller `Token`-Entitäten: ```graphql { @@ -44,10 +44,10 @@ Query all `Token` entities: ### Sortierung -When querying a collection, you may: +Wenn Sie eine Sammlung abfragen, können Sie: -- Use the `orderBy` parameter to sort by a specific attribute. -- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. +- den Parameter `orderBy` verwenden, um nach einem bestimmten Attribut zu sortieren. +- `orderDirection` verwenden, um die Sortierrichtung anzugeben, `asc` für aufsteigend oder `desc` für absteigend. #### Beispiel @@ -62,9 +62,9 @@ When querying a collection, you may: #### Beispiel für die Sortierung verschachtelter Entitäten -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +Ab Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) können Entitäten auf der Basis von verschachtelten Entitäten sortiert werden. -The following example shows tokens sorted by the name of their owner: +Im folgenden Beispiel werden die Token nach dem Namen ihres Besitzers sortiert: ```graphql { @@ -77,18 +77,18 @@ The following example shows tokens sorted by the name of their owner: } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields.
Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> Derzeit können Sie nach `String`- oder `ID`-Typen in einer Tiefe von einer Ebene auf den Feldern `@entity` und `@derivedFrom` sortieren. Leider wird die [Sortierung nach Schnittstellen auf Entitäten mit einer Tiefe von einer Ebene](https://github.com/graphprotocol/graph-node/pull/4058), die Sortierung nach Feldern, die Arrays und verschachtelte Entitäten sind, noch nicht unterstützt. ### Pagination -When querying a collection, it's best to: +Wenn Sie eine Sammlung abfragen, gehen Sie am besten wie folgt vor: -- Use the `first` parameter to paginate from the beginning of the collection. - - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. -- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. -- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. +- Verwenden Sie den Parameter `first`, um vom Anfang der Sammlung an zu paginieren. + - Die Standardsortierung erfolgt nach `ID` in aufsteigender alphanumerischer Reihenfolge, **nicht** nach Erstellungszeit. +- Verwenden Sie den Parameter `skip`, um Entitäten zu überspringen und zu paginieren. Zum Beispiel zeigt `first:100` die ersten 100 Entitäten und `first:100, skip:100` zeigt die nächsten 100 Entitäten. +- Vermeiden Sie die Verwendung von `skip`-Werten in Abfragen, da diese im Allgemeinen schlecht funktionieren. Um eine große Anzahl von Elementen abzurufen, ist es am besten, die Entitäten auf der Grundlage eines Attributs zu durchblättern, wie im obigen Beispiel gezeigt.
-#### Example using `first` +#### Beispiel mit `first` Die Abfrage für die ersten 10 Token: @@ -101,11 +101,11 @@ Die Abfrage für die ersten 10 Token: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +Um nach Gruppen von Entitäten in der Mitte einer Sammlung zu suchen, kann der Parameter `skip` in Verbindung mit dem Parameter `first` verwendet werden, um eine bestimmte Anzahl von Entitäten zu überspringen, beginnend am Anfang der Sammlung. -#### Example using `first` and `skip` +#### Beispiel mit `first` und `skip` -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +Abfrage von 10 `Token`-Entitäten, versetzt um 10 Stellen vom Beginn der Sammlung: ```graphql { @@ -116,9 +116,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example using `first` and `id_ge` +#### Beispiel mit `first` und `id_ge` -If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: +Wenn ein Client eine große Anzahl von Entitäten abrufen muss, ist es leistungsfähiger, Abfragen auf ein Attribut zu stützen und nach diesem Attribut zu filtern. Zum Beispiel könnte ein Client mit dieser Abfrage eine große Anzahl von Token abrufen: ```graphql query manyTokens($lastID: String) { @@ -129,16 +129,16 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
+Beim ersten Mal würde es die Abfrage mit `lastID = ""` senden, und bei nachfolgenden Anfragen würde es `lastID` auf das Attribut `id` der letzten Entität in der vorherigen Anfrage setzen. Dieser Ansatz ist wesentlich leistungsfähiger als die Verwendung steigender `skip`-Werte. -### Filtering +### Filterung -- You can use the `where` parameter in your queries to filter for different properties. -- You can filter on multiple values within the `where` parameter. +- Sie können den Parameter `where` in Ihren Abfragen verwenden, um nach verschiedenen Eigenschaften zu filtern. +- Sie können nach mehreren Werten innerhalb des Parameters `where` filtern. -#### Example using `where` +#### Beispiel mit `where` -Query challenges with `failed` outcome: +Abfrage von Herausforderungen mit `failed`-Ergebnis: ```graphql { @@ -152,9 +152,9 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +Sie können Suffixe wie `_gt`, `_lte` für den Wertevergleich verwenden: -#### Example for range filtering +#### Beispiel für Range-Filterung ```graphql { @@ -166,11 +166,11 @@ You can use suffixes like `_gt`, `_lte` for value comparison: } ``` -#### Example for block filtering +#### Beispiel für Block-Filterung -You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. +Mit `_change_block(number_gte: Int)` können Sie auch Entitäten filtern, die in oder nach einem bestimmten Block aktualisiert wurden. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +Dies kann nützlich sein, wenn Sie nur Entitäten abrufen möchten, die sich geändert haben, z. B. seit der letzten Abfrage.
Oder es kann nützlich sein, um zu untersuchen oder zu debuggen, wie sich Entitäten in Ihrem Subgraphen ändern (wenn Sie dies mit einem Blockfilter kombinieren, können Sie nur Entitäten isolieren, die sich in einem bestimmten Block geändert haben). ```graphql { @@ -182,11 +182,11 @@ This can be useful if you are looking to fetch only entities which have changed, } ``` -#### Example for nested entity filtering +#### Beispiel für die Filterung verschachtelter Entitäten -Filtering on the basis of nested entities is possible in the fields with the `_` suffix. +Die Filterung nach verschachtelten Entitäten ist in den Feldern mit dem Suffix `_` möglich. -This can be useful if you are looking to fetch only entities whose child-level entities meet the provided conditions. +Dies kann nützlich sein, wenn Sie nur die Entitäten abrufen möchten, deren untergeordnete Entitäten die angegebenen Bedingungen erfüllen. ```graphql { @@ -200,13 +200,13 @@ This can be useful if you are looking to fetch only entities whose child-level e } ``` -#### Logical operators +#### Logische Operatoren -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria. +Seit Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) können Sie mehrere Parameter im selben `where`-Argument gruppieren, indem Sie die `and`- oder `or`-Operatoren verwenden, um Ergebnisse nach mehr als einem Kriterium zu filtern. -##### `AND` Operator +##### Operator `AND` -The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +Das folgende Beispiel filtert nach Challenges mit `outcome` `succeeded` und `number` größer als oder gleich `100`.
```graphql { @@ -220,7 +220,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num } ``` -> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. +> **Syntaktischer Zucker:** Sie können die obige Abfrage vereinfachen, indem Sie den `and`-Operator entfernen und einen durch Kommata getrennten Unterausdruck übergeben. > > ```graphql > { @@ -234,9 +234,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num > } > ``` -##### `OR` Operator +##### Operator `OR` -The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +Das folgende Beispiel filtert nach Herausforderungen mit `outcome` `succeeded` oder `number` größer oder gleich `100`. ```graphql { @@ -250,11 +250,17 @@ The following example filters for challenges with `outcome` `succeeded` or `numb } ``` -> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries. +> **Hinweis**: Beim Erstellen von Abfragen ist es wichtig, die Auswirkungen der Verwendung des +> `or`-Operators auf die Leistung zu berücksichtigen. Obwohl `or` ein nützliches Tool zum +> Erweitern von Suchergebnissen sein kann, kann es auch erhebliche Kosten verursachen. Eines der Hauptprobleme mit +> `or` ist, dass Abfragen dadurch verlangsamt werden können.
Dies liegt daran, dass `or` +> erfordert, dass die Datenbank mehrere Indizes durchsucht, was ein zeitaufwändiger Prozess sein kann. Um diese Probleme +> zu vermeiden, wird empfohlen, dass Entwickler `and`-Operatoren anstelle von `or` verwenden, wann immer dies möglich +> ist. Dies ermöglicht eine präzisere Filterung und kann zu schnelleren und genaueren Abfragen führen. -#### All Filters +#### Alle Filter -Full list of parameter suffixes: +Vollständige Liste der Parameter-Suffixe: ``` _ @@ -279,21 +285,21 @@ _not_ends_with _not_ends_with_nocase ``` -> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types. +> Bitte beachten Sie, dass einige Suffixe nur für bestimmte Typen unterstützt werden. So unterstützt `Boolean` nur `_not`, `_in` und `_not_in`, aber `_` ist nur für Objekt- und Schnittstellentypen verfügbar. -In addition, the following global filters are available as part of `where` argument: +Darüber hinaus sind die folgenden globalen Filter als Teil des Arguments `where` verfügbar: ```graphql _change_block(number_gte: Int) ``` -### Time-travel queries +### Time-Travel-Abfragen -You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +Sie können den Zustand Ihrer Entitäten nicht nur für den letzten Block abfragen, was der Standard ist, sondern auch für einen beliebigen Block in der Vergangenheit. Der Block, zu dem eine Abfrage erfolgen soll, kann entweder durch seine Blocknummer oder seinen Block-Hash angegeben werden, indem ein `block`-Argument in die Toplevel-Felder von Abfragen aufgenommen wird.
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +Das Ergebnis einer solchen Abfrage wird sich im Laufe der Zeit nicht ändern, d.h. die Abfrage eines bestimmten vergangenen Blocks wird das gleiche Ergebnis liefern, egal wann sie ausgeführt wird, mit der Ausnahme, dass sich das Ergebnis bei einer Abfrage eines Blocks, der sehr nahe am Kopf der Kette liegt, ändern kann, wenn sich herausstellt, dass dieser Block **nicht** in der Hauptkette ist und die Kette umorganisiert wird. Sobald ein Block als endgültig betrachtet werden kann, wird sich das Ergebnis der Abfrage nicht mehr ändern. -> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Hinweis: Die derzeitige Implementierung unterliegt noch bestimmten Beschränkungen, die diese Garantien verletzen könnten. 
Die Implementierung kann nicht immer erkennen, dass ein bestimmter Block-Hash überhaupt nicht in der Hauptkette ist, oder ob ein Abfrageergebnis durch einen Block-Hash für einen Block, der noch nicht als endgültig gilt, durch eine gleichzeitig mit der Abfrage laufende Blockumstrukturierung beeinflusst werden könnte. Diese Einschränkungen haben keinen Einfluss auf die Ergebnisse von Abfragen per Block-Hash, wenn der Block endgültig ist und sich bekanntermaßen in der Hauptkette befindet. In [diesem Issue](https://github.com/graphprotocol/graph-node/issues/1405) werden diese Einschränkungen im Detail erläutert. #### Beispiel @@ -309,7 +315,7 @@ The result of such a query will not change over time, i.e., querying at a certai } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +Diese Abfrage gibt die `Challenge`-Entitäten und die zugehörigen `Application`-Entitäten so zurück, wie sie unmittelbar nach der Verarbeitung von Block Nummer 8.000.000 bestanden. #### Beispiel @@ -325,26 +331,26 @@ This query will return `Challenge` entities, and their associated `Application` } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. +Diese Abfrage gibt `Challenge`-Entitäten und die zugehörigen `Application`-Entitäten zurück, wie sie unmittelbar nach der Verarbeitung des Blocks mit dem angegebenen Hash vorhanden waren. -### Fulltext Search Queries +### Volltext-Suchanfragen -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Volltextsuchabfrage-Felder bieten eine aussagekräftige Textsuch-API, die dem Subgraph-Schema hinzugefügt und angepasst werden kann.
Siehe [Definieren von Volltext-Suchfeldern](/developing/creating-a-subgraph/#defining-fulltext-search-fields), um die Volltextsuche zu Ihrem Subgraph hinzuzufügen. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +Volltextsuchanfragen haben ein erforderliches Feld, `text`, für die Eingabe von Suchbegriffen. Mehrere spezielle Volltext-Operatoren sind verfügbar, die in diesem `text`-Suchfeld verwendet werden können. -Fulltext search operators: +Volltext-Suchoperatoren: | Symbol | Operator | Beschreibung | | --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| `&` | `And` | Zum Kombinieren mehrerer Suchbegriffe zu einem Filter für Entitäten, die alle bereitgestellten Begriffe enthalten | +| | | `Or` | Abfragen mit mehreren durch den Operator `or` getrennten Suchbegriffen geben alle Entitäten mit einer Übereinstimmung mit einem der bereitgestellten Begriffe zurück | +| `<->` | `Follow by` | Geben Sie den Abstand zwischen zwei Wörtern an. | +| `:*` | `Prefix` | Verwenden Sie den Präfix-Suchbegriff, um Wörter zu finden, deren Präfix übereinstimmt (2 Zeichen erforderlich) | #### Beispiele -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. +Mit dem Operator `or` filtert diese Abfrage nach Blog-Entitäten mit Variationen von entweder „anarchism“ oder „crumpet“ in ihren Volltextfeldern.
```graphql { @@ -357,7 +363,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +Der Operator `follow by` gibt Wörter an, die in den Volltextdokumenten einen bestimmten Abstand zueinander haben. Die folgende Abfrage gibt alle Blogs mit Variationen von „decentralize“ gefolgt von „philosophy“ zurück. ```graphql { @@ -370,7 +376,7 @@ The `follow by` operator specifies a words a specific distance apart in the full } ``` -Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music". +Kombinieren Sie Volltextoperatoren, um komplexere Filter zu erstellen. Durch die Kombination eines Präfix-Suchoperators mit `follow by` gleicht diese Beispielabfrage alle Blog-Entitäten ab, deren Wörter mit „lou“ beginnen, gefolgt von „music“. ```graphql { @@ -385,25 +391,25 @@ Combine fulltext operators to make more complex filters. With a pretext search o ### Validierung -Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
+Graph Node implementiert die [spezifikationsbasierte](https://spec.graphql.org/October2021/#sec-Validation) Validierung der empfangenen GraphQL-Abfragen mit [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), die auf der [graphql-js-Referenzimplementierung](https://github.com/graphql/graphql-js/tree/main/src/validation) basiert. Abfragen, die eine Validierungsregel nicht erfüllen, werden mit einem Standardfehler angezeigt - besuchen Sie die [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation), um mehr zu erfahren. ## Schema -The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +Das Schema Ihrer Datenquellen, d. h. die Entitätstypen, Werte und Beziehungen, die zur Abfrage zur Verfügung stehen, werden über die [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System) definiert. -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL-Schemata definieren im Allgemeinen Wurzeltypen für `queries`, `subscriptions` und `mutations`. The Graph unterstützt nur `queries`. Der Root-Typ `Query` für Ihren Subgraph wird automatisch aus dem GraphQL-Schema generiert, das in Ihrem [Subgraph-Manifest](/developing/creating-a-subgraph/#components-of-a-subgraph) enthalten ist. -> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
+> Hinweis: Unsere API stellt keine Mutationen zur Verfügung, da von den Entwicklern erwartet wird, dass sie aus ihren Anwendungen heraus Transaktionen direkt gegen die zugrunde liegende Blockchain durchführen. -### Entities +### Entitäten -All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. +Alle GraphQL-Typen mit `@entity`-Direktiven in Ihrem Schema werden als Entitäten behandelt und müssen ein `ID`-Feld haben. -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. +> **Hinweis:** Derzeit müssen alle Typen in Ihrem Schema eine `@entity`-Direktive haben. In Zukunft werden wir Typen ohne `@entity`-Direktive als Wertobjekte behandeln, aber dies wird noch nicht unterstützt. -### Subgraph Metadata +### Subgraph-Metadaten -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +Alle Subgraphen haben ein automatisch generiertes `_Meta_`-Objekt, das Zugriff auf die Metadaten des Subgraphen bietet. Dieses kann wie folgt abgefragt werden: ```graphQL { @@ -419,14 +425,14 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +Wenn ein Block angegeben wird, gelten die Metadaten ab diesem Block, andernfalls wird der zuletzt indizierte Block verwendet. Falls angegeben, muss der Block nach dem Startblock des Subgraphen liegen und kleiner oder gleich dem zuletzt indizierten Block sein. -`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. 
+`deployment` ist eine eindeutige ID, die der IPFS CID der Datei `subgraph.yaml` entspricht. -`block` provides information about the latest block (taking into account any block constraints passed to `_meta`): +`block` liefert Informationen über den letzten Block (unter Berücksichtigung aller an `_meta` übergebenen Blockeinschränkungen): -- hash: the hash of the block -- number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- hash: der Hash des Blocks +- number: die Blocknummer +- timestamp: der Zeitstempel des Blocks, falls verfügbar (dies ist derzeit nur für Subgraphen verfügbar, die EVM-Netzwerke indizieren) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` ist ein boolescher Wert, der angibt, ob der Subgraph in einem vergangenen Block auf Indizierungsfehler gestoßen ist. diff --git a/website/src/pages/de/subgraphs/querying/introduction.mdx b/website/src/pages/de/subgraphs/querying/introduction.mdx index 58a720de4509..d889e2efc3d6 100644 --- a/website/src/pages/de/subgraphs/querying/introduction.mdx +++ b/website/src/pages/de/subgraphs/querying/introduction.mdx @@ -1,32 +1,32 @@ --- -title: Querying The Graph +title: The Graph abfragen sidebarTitle: Einführung --- -To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer). +Um sofort mit der Abfrage zu beginnen, besuchen Sie [The Graph Explorer](https://thegraph.com/explorer). ## Überblick -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+Wenn ein Subgraph in The Graph Network veröffentlicht wird, können Sie die Detailseite des Subgraphen im Graph Explorer besuchen und die Registerkarte „Abfrage“ verwenden, um die eingesetzte GraphQL-API für jeden Subgraphen zu erkunden. ## Besonderheiten -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Jeder im The Graph Network veröffentlichte Subgraph hat eine eindeutige Abfrage-URL im Graph Explorer, um direkte Abfragen durchzuführen. Sie finden sie, indem Sie zur Detailseite des Subgraphen navigieren und auf die Schaltfläche „Abfrage“ in der oberen rechten Ecke klicken. -![Query Subgraph Button](/img/query-button-screenshot.png) +![Abfrage-Subgraphen-Schaltfläche](/img/query-button-screenshot.png) -![Query Subgraph URL](/img/query-url-screenshot.png) +![Abfrage-Subgraph URL](/img/query-url-screenshot.png) -You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/). +Sie werden feststellen, dass diese Abfrage-URL einen eindeutigen API-Schlüssel verwenden muss. Sie können Ihre API-Schlüssel in [Subgraph Studio](https://thegraph.com/studio) unter dem Abschnitt „API-Schlüssel“ erstellen und verwalten. Erfahren Sie mehr über die Verwendung von Subgraph Studio [hier](/deploying/subgraph-studio/). -Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). 
+Benutzer von Subgraph Studio starten mit einem kostenlosen Plan, der ihnen 100.000 Abfragen pro Monat erlaubt. Zusätzliche Abfragen sind mit dem Growth Plan möglich, der nutzungsbasierte Preise für zusätzliche Abfragen bietet, zahlbar per Kreditkarte oder GRT auf Arbitrum. Sie können mehr über die Abrechnung [hier](/subgraphs/billing/) erfahren. -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> In der [Abfrage-API](/subgraphs/querying/graphql-api/) finden Sie eine vollständige Anleitung zur Abfrage der Entitäten des Subgraphen. > -> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. +> Hinweis: Wenn Sie bei einer GET-Anfrage an die Graph Explorer-URL 405-Fehler erhalten, wechseln Sie bitte zu einer POST-Anfrage. ### Zusätzliche Ressourcen -- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/). -- To query from an application, click [here](/subgraphs/querying/from-an-application/). -- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main). +- Verwenden Sie [GraphQL-Abfrage-Best-Practices](/subgraphs/querying/best-practices/). +- Um von einer Anwendung aus abzufragen, klicken Sie [hier](/subgraphs/querying/from-an-application/). +- Sehen Sie [Abfragebeispiele](https://github.com/graphprotocol/query-examples/tree/main). diff --git a/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx index 45ead286cf8a..cc71c6e7afd0 100644 --- a/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx @@ -1,34 +1,34 @@ --- -title: Managing API keys +title: Verwalten von API-Schlüsseln --- ## Überblick -API keys are needed to query subgraphs.
They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API-Schlüssel werden für die Abfrage von Subgraphen benötigt. Sie stellen sicher, dass die Verbindungen zwischen Anwendungsdiensten gültig und autorisiert sind, einschließlich der Authentifizierung des Endnutzers und des Geräts, das die Anwendung verwendet. -### Create and Manage API Keys +### Erstellen und Verwalten von API-Schlüsseln -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und klicken Sie auf die Registerkarte **API-Schlüssel**, um Ihre API-Schlüssel für bestimmte Subgraphen zu erstellen und zu verwalten. -The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. +Die Tabelle „API-Schlüssel“ listet die vorhandenen API-Schlüssel auf und ermöglicht es Ihnen, diese zu verwalten oder zu löschen. Für jeden Schlüssel können Sie seinen Status, die Kosten für den aktuellen Zeitraum, das Ausgabenlimit für den aktuellen Zeitraum und die Gesamtzahl der Abfragen sehen. -You can click the "three dots" menu to the right of a given API key to: +Sie können auf das Menü mit den „drei Punkten“ rechts neben einem bestimmten API-Schlüssel klicken, um: -- Rename API key -- Regenerate API key -- Delete API key -- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +- API-Schlüssel umbenennen +- API-Schlüssel neu generieren +- API-Schlüssel löschen +- Ausgabenlimit verwalten: Dies ist ein optionales monatliches Ausgabenlimit für einen bestimmten API-Schlüssel, in USD.
Dieses Limit gilt pro Abrechnungszeitraum (Kalendermonat). -### API Key Details +### API-Schlüssel-Details -You can click on an individual API key to view the Details page: +Sie können auf einen einzelnen API-Schlüssel klicken, um die Detailseite anzuzeigen: -1. Under the **Overview** section, you can: - - Edit your key name - - Regenerate API keys - - View the current usage of the API key with stats: - - Number of queries - - Amount of GRT spent -2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key +1. Unter dem Abschnitt **Übersicht** können Sie: + - den Namen Ihres Schlüssels bearbeiten + - API-Schlüssel neu generieren + - die aktuelle Nutzung des API-Schlüssels mit Statistiken anzeigen: + - Anzahl der Abfragen + - Ausgegebener GRT-Betrag +2. Unter dem Abschnitt **Sicherheit** können Sie je nach gewünschter Kontrollstufe Sicherheitseinstellungen vornehmen. Im Einzelnen können Sie: + - die Domainnamen anzeigen und verwalten, die zur Verwendung Ihres API-Schlüssels berechtigt sind + - Subgraphen zuweisen, die mit Ihrem API-Schlüssel abgefragt werden können diff --git a/website/src/pages/de/subgraphs/querying/python.mdx b/website/src/pages/de/subgraphs/querying/python.mdx index a6640d513d6e..389e6f56a12c 100644 --- a/website/src/pages/de/subgraphs/querying/python.mdx +++ b/website/src/pages/de/subgraphs/querying/python.mdx @@ -1,57 +1,57 @@ --- -title: Query The Graph with Python and Subgrounds +title: Abfrage von The Graph mit Python und Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/).
It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds ist eine intuitive Python-Bibliothek zur Abfrage von Subgraphen, entwickelt von [Playgrounds](https://playgrounds.network/). Sie ermöglicht es Ihnen, Subgraph-Daten direkt mit einer Python-Datenumgebung zu verbinden, so dass Sie Bibliotheken wie [pandas](https://pandas.pydata.org/) für die Datenanalyse verwenden können! -Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. +Subgrounds bietet eine einfache Pythonic-API für die Erstellung von GraphQL-Abfragen, automatisiert mühsame Arbeitsabläufe wie die Paginierung und ermöglicht fortgeschrittenen Nutzern kontrollierte Schema-Transformationen. ## Erste Schritte -Subgrounds requires Python 3.10 or higher and is available on [pypi](https://pypi.org/project/subgrounds/). +Subgrounds erfordert Python 3.10 oder höher und ist auf [pypi](https://pypi.org/project/subgrounds/) verfügbar. ```bash pip install --upgrade subgrounds -# or +# oder python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Nach der Installation können Sie die Subgrounds mit der folgenden Abfrage testen. 
Das folgende Beispiel greift auf einen Subgraph für das Aave v2-Protokoll zurück und fragt die Top 5 Märkte geordnet nach TVL (Total Value Locked) ab, wählt ihren Namen und ihren TVL (in USD) aus und gibt die Daten als pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) zurück. ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Laden des Subgraphen aave_v2 = sg.load_subgraph( - "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") + "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# Construct the query +# Konstruieren Sie die Abfrage latest_markets = aave_v2.Query.markets( orderBy=aave_v2.Market.totalValueLockedUSD, orderDirection='desc', first=5, ) -# Return query to a dataframe +# Abfrage als pandas DataFrame zurückgeben sg.query_df([ latest_markets.name, latest_markets.totalValueLockedUSD, ]) ``` -## Documentation +## Dokumentation -Subgrounds is built and maintained by the [Playgrounds](https://playgrounds.network/) team and can be accessed on the [Playgrounds docs](https://docs.playgrounds.network/subgrounds). +Subgrounds wird vom [Playgrounds](https://playgrounds.network/) Team entwickelt und gewartet und kann in den [Playgrounds docs](https://docs.playgrounds.network/subgrounds) eingesehen werden. -Since subgrounds has a large feature set to explore, here are some helpful starting places: +Da Subgrounds einen großen Funktionsumfang hat, den es zu erkunden gilt, finden Sie hier einige hilfreiche Startpunkte: -- [Getting Started with Querying](https://docs.playgrounds.network/subgrounds/getting_started/basics/) - - A good first step for how to build queries with subgrounds. -- [Building Synthetic Fields](https://docs.playgrounds.network/subgrounds/getting_started/synthetic_fields/) - - A gentle introduction to defining synthetic fields that transform data defined from the schema.
-- [Concurrent Queries](https://docs.playgrounds.network/subgrounds/getting_started/async/) - - Learn how to level up your queries by parallelizing them. -- [Exporting Data to CSVs](https://docs.playgrounds.network/subgrounds/faq/exporting/) - - A quick article on how to seamlessly save your data as CSVs for further analysis. +- [Erste Schritte mit Abfragen](https://docs.playgrounds.network/subgrounds/getting_started/basics/) + - Ein guter erster Schritt für die Erstellung von Abfragen mit Subgrounds. +- [Aufbau synthetischer Felder](https://docs.playgrounds.network/subgrounds/getting_started/synthetic_fields/) + - Eine sanfte Einführung in die Definition synthetischer Felder, die aus dem Schema definierte Daten umwandeln. +- [Gleichzeitige Abfragen](https://docs.playgrounds.network/subgrounds/getting_started/async/) + - Lernen Sie, wie Sie Ihre Abfragen durch Parallelisierung verbessern können. +- [Exportieren von Daten in CSV-Dateien](https://docs.playgrounds.network/subgrounds/faq/exporting/) + - Ein kurzer Artikel darüber, wie Sie Ihre Daten nahtlos als CSV-Dateien für weitere Analysen speichern können. diff --git a/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..b35d7d952215 100644 --- a/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -1,27 +1,27 @@ --- -title: Subgraph ID vs Deployment ID +title: Subgraph-ID vs. Bereitstellungs-ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +Ein Subgraph wird durch eine Subgraph-ID identifiziert, und jede Version des Subgraphen wird durch eine Bereitstellungs-ID identifiziert.
-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +Bei der Abfrage eines Subgraphen kann jede der beiden IDs verwendet werden, obwohl im Allgemeinen empfohlen wird, die Bereitstellungs-ID zu verwenden, da sie eine bestimmte Version eines Subgraphen angeben kann. -Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) +Hier sind einige wichtige Unterschiede zwischen den beiden IDs: ![](/img/subgraph-id-vs-deployment-id.png) -## Deployment ID +## Bereitstellungs-ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +Die Bereitstellungs-ID ist der IPFS-Hash der kompilierten Manifestdatei, der auf andere Dateien im IPFS statt auf relative URLs auf dem Computer verweist. Auf das kompilierte Manifest kann zum Beispiel zugegriffen werden über: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. Um die Bereitstellungs-ID zu ändern, kann man einfach die Manifestdatei aktualisieren, z. B. durch Ändern des Beschreibungsfeldes, wie in der [Subgraph-Manifest-Dokumentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api) beschrieben. -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query.
Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +Wenn Abfragen unter Verwendung der Bereitstellungs-ID eines Subgraphen durchgeführt werden, geben wir eine Version dieses Subgraphen zur Abfrage an. Die Verwendung der Bereitstellungs-ID zur Abfrage einer bestimmten Subgraphenversion führt zu einer ausgefeilteren und robusteren Einrichtung, da die volle Kontrolle über die abgefragte Subgraphenversion besteht. Dies hat jedoch zur Folge, dass der Abfragecode jedes Mal manuell aktualisiert werden muss, wenn eine neue Version des Subgraphen veröffentlicht wird. -Example endpoint that uses Deployment ID: +Beispiel für einen Endpunkt, der die Bereitstellungs-ID verwendet: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB` ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +Die Subgraph-ID ist ein eindeutiger Bezeichner für einen Subgraphen. Sie bleibt über alle Versionen eines Subgraphen hinweg konstant. Es wird empfohlen, die Subgraph-ID zu verwenden, um die neueste Version eines Subgraphen abzufragen, obwohl es einige Einschränkungen gibt. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Beachten Sie, dass Abfragen unter Verwendung der Subgraph-ID dazu führen können, dass Abfragen von einer älteren Version des Subgraphen beantwortet werden, da die neue Version Zeit zum Synchronisieren benötigt.
Außerdem könnten neue Versionen inkompatible Änderungen (Breaking Changes) am Schema mit sich bringen. -Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` +Beispiel-Endpunkt, der die Subgraph-ID verwendet: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/de/subgraphs/quick-start.mdx b/website/src/pages/de/subgraphs/quick-start.mdx index 91172561a67d..4608dc407ca7 100644 --- a/website/src/pages/de/subgraphs/quick-start.mdx +++ b/website/src/pages/de/subgraphs/quick-start.mdx @@ -2,24 +2,24 @@ title: Schnellstart --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Erfahren Sie, wie Sie auf einfache Weise einen [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) auf The Graph erstellen, veröffentlichen und abfragen können. -## Prerequisites +## Voraussetzungen - Eine Krypto-Wallet -- A smart contract address on a [supported network](/supported-networks/) -- [Node.js](https://nodejs.org/) installed -- A package manager of your choice (`npm`, `yarn` or `pnpm`) +- Eine Smart-Contract-Adresse in einem [unterstützten Netzwerk](/supported-networks/) +- [Node.js](https://nodejs.org/) installiert +- Ein Paketmanager Ihrer Wahl (`npm`, `yarn` oder `pnpm`) -## How to Build a Subgraph +## Wie man einen Subgraphen erstellt -### 1. Create a subgraph in Subgraph Studio +### 1. Erstellen Sie einen Subgraphen in Subgraph Studio Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und verbinden Sie Ihre Wallet. Mit Subgraph Studio können Sie Subgraphen erstellen, verwalten, bereitstellen und veröffentlichen sowie API-Schlüssel erstellen und verwalten. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
+Klicken Sie auf „Einen Subgraphen erstellen“. Es wird empfohlen, den Subgraphen in Title Case zu benennen: „Subgraph Name Chain Name“. ### 2. Installieren der Graph-CLI @@ -37,54 +37,54 @@ Verwendung von [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialisieren Sie Ihren Subgraphen -> Die Befehle für Ihren spezifischen Subgraphen finden Sie auf der Subgraphen-Seite in [Subgraph Studio](https://thegraph.com/studio/). +> Sie finden die Befehle für Ihren spezifischen Subgraphen auf der Subgraphen-Seite in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +Der Befehl `graph init` erstellt automatisch ein Gerüst eines Subgraphen auf der Grundlage der Ereignisse Ihres Vertrags. -Mit dem folgenden Befehl wird Ihr Subgraph aus einem bestehenden Vertrag initialisiert: +Der folgende Befehl initialisiert Ihren Subgraphen anhand eines bestehenden Vertrags: ```sh graph init ``` -If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. +Wenn Ihr Vertrag auf dem jeweiligen Blockscanner, auf dem er eingesetzt wird (z. B. [Etherscan](https://etherscan.io/)), verifiziert ist, wird die ABI automatisch in der CLI erstellt. -When you initialize your subgraph, the CLI will ask you for the following information: +Wenn Sie Ihren Subgraphen initialisieren, werden Sie von der CLI nach den folgenden Informationen gefragt: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in.
-- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. -- **Contract address**: Locate the smart contract address you’d like to query data from. -- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. -- **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. -- **Add another contract** (optional): You can add another contract. +- **Protokoll**: Wählen Sie das Protokoll, mit dem Ihr Subgraph Daten indizieren soll. +- **Subgraph-Slug**: Erstellen Sie einen Namen für Ihren Subgraphen. Ihr Subgraph-Slug ist ein Bezeichner für Ihren Subgraphen. +- **Verzeichnis**: Wählen Sie ein Verzeichnis, in dem Sie Ihren Subgraphen erstellen möchten. +- **Ethereum-Netzwerk** (optional): Möglicherweise müssen Sie angeben, von welchem EVM-kompatiblen Netzwerk Ihr Subgraph Daten indizieren soll. +- **Vertragsadresse**: Suchen Sie die Adresse des Smart Contracts, von dem Sie Daten abfragen möchten. +- **ABI**: Wenn die ABI nicht automatisch ausgefüllt wird, müssen Sie sie manuell als JSON-Datei eingeben. +- **Startblock**: Sie sollten den Startblock eingeben, um die Subgraph-Indizierung von Blockchain-Daten zu optimieren. Ermitteln Sie den Startblock, indem Sie den Block suchen, in dem Ihr Vertrag bereitgestellt wurde. +- **Vertragsname**: Geben Sie den Namen Ihres Vertrags ein. +- **Vertragsereignisse als Entitäten indizieren**: Es wird empfohlen, dies auf „true“ zu setzen, da es automatisch Mappings zu Ihrem Subgraphen für jedes emittierte Ereignis hinzufügt.
+- **Einen weiteren Vertrag hinzufügen** (optional): Sie können einen weiteren Vertrag hinzufügen. -Der folgende Screenshot zeigt ein Beispiel dafür, was Sie bei der Initialisierung Ihres Untergraphen ( Subgraph ) erwarten können: +Der folgende Screenshot zeigt ein Beispiel dafür, was Sie bei der Initialisierung Ihres Subgraphen erwarten können: -![Subgraph command](/img/CLI-Example.png) +![Subgraph-Befehl](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Bearbeiten Sie Ihren Subgraphen -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +Der `init`-Befehl im vorherigen Schritt erzeugt einen Gerüst-Subgraphen, den Sie als Ausgangspunkt für den Aufbau Ihres Subgraphen verwenden können. -When making changes to the subgraph, you will mainly work with three files: +Wenn Sie Änderungen am Subgraphen vornehmen, werden Sie hauptsächlich mit drei Dateien arbeiten: - Manifest (`subgraph.yaml`) - definiert, welche Datenquellen Ihr Subgraph indizieren wird. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Schema (`schema.graphql`) - legt fest, welche Daten Sie aus dem Subgraphen abrufen möchten. - AssemblyScript Mappings (mapping.ts) - Dies ist der Code, der die Daten aus Ihren Datenquellen in die im Schema definierten Entitäten übersetzt. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +Eine detaillierte Aufschlüsselung, wie Sie Ihren Subgraphen schreiben, finden Sie unter [Erstellen eines Subgraphen](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Verteilen Sie Ihren Subgraphen -> Remember, deploying is not the same as publishing. +> Denken Sie daran, dass die Bereitstellung nicht dasselbe ist wie die Veröffentlichung. 
-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +Wenn Sie einen Subgraphen **bereitstellen**, schieben Sie ihn in das [Subgraph Studio](https://thegraph.com/studio/), wo Sie ihn testen, stagen und überprüfen können. Die Indizierung eines bereitgestellten Subgraphen wird vom [Upgrade Indexierer](https://thegraph.com/blog/upgrade-indexer/) durchgeführt, der ein einzelner Indexierer ist, der von Edge & Node betrieben wird, und nicht von den vielen dezentralen Indexierern im Graph Network. Ein **bereitgestellter** Subgraph ist frei nutzbar, ratenbegrenzt, für die Öffentlichkeit nicht sichtbar und für Entwicklungs-, Staging- und Testzwecke gedacht. Sobald Ihr Subgraph geschrieben ist, führen Sie die folgenden Befehle aus: @@ -94,9 +94,9 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authentifizieren Sie sich und stellen Sie Ihren Subgraphen bereit. Den Bereitstellungsschlüssel finden Sie auf der Seite des Subgraphen in Subgraph Studio. -![Deploy key](/img/subgraph-studio-deploy-key.jpg) +![Deploy-Schlüssel](/img/subgraph-studio-deploy-key.jpg) ```` ```sh @@ -107,39 +107,39 @@ graph deploy ``` ```` -The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. +Die CLI fragt nach einer Versionsbezeichnung. Es wird dringend empfohlen, [semantische Versionierung](https://semver.org/) zu verwenden, z.B. `0.0.1`. -### 6.
Review your subgraph +### 6. Überprüfen Sie Ihren Subgraphen -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +Wenn Sie Ihren Subgraphen vor der Veröffentlichung testen möchten, können Sie mit [Subgraph Studio](https://thegraph.com/studio/) Folgendes tun: - Führen Sie eine Testabfrage durch. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analysieren Sie Ihren Subgraphen im Dashboard, um Informationen zu überprüfen. +- Überprüfen Sie die Protokolle auf dem Dashboard, um zu sehen, ob es irgendwelche Fehler in Ihrem Subgraphen gibt. Die Protokolle eines funktionierenden Subgraphen sehen wie folgt aus: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Veröffentlichen Sie Ihren Subgraphen in The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +Wenn Ihr Subgraph bereit für eine Produktionsumgebung ist, können Sie ihn im dezentralen Netzwerk veröffentlichen. Die Veröffentlichung ist eine Onchain-Aktion, die Folgendes bewirkt: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- Es macht Ihren Subgraphen verfügbar, um von den dezentralisierten [Indexierern](/indexing/overview/) auf The Graph Network indiziert zu werden.
+- Es hebt Ratenbeschränkungen auf und macht Ihren Subgraphen öffentlich durchsuchbar und abfragbar im [Graph Explorer](https://thegraph.com/explorer/). +- Es macht Ihren Subgraphen für [Kuratoren](/resources/roles/curating/) verfügbar, um ihn zu kuratieren. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> Je mehr GRT Sie und andere auf Ihrem Subgraph kuratieren, desto mehr Indexierer werden dazu angeregt, Ihren Subgraphen zu indizieren, was die Servicequalität verbessert, die Latenzzeit reduziert und die Netzwerkredundanz für Ihren Subgraphen erhöht. #### Veröffentlichung mit Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +Um Ihren Subgraphen zu veröffentlichen, klicken Sie auf die Schaltfläche „Veröffentlichen“ im Dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Veröffentlichen eines Subgraphen auf Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Wählen Sie das Netzwerk aus, in dem Sie Ihren Subgraphen veröffentlichen möchten. #### Veröffentlichen über die CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +Ab Version 0.73.0 können Sie Ihren Subgraphen auch mit der Graph-CLI veröffentlichen. Öffnen Sie den `graph-cli`. @@ -147,10 +147,10 @@ Verwenden Sie die folgenden Befehle: ```` ```sh -graph codegen && graph build +graph codegen && graph build ``` -Then, +Dann, ```sh graph publish @@ -161,28 +161,28 @@ graph publish ![cli-ui](/img/cli-ui.png) -To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+Wie Sie Ihre Bereitstellung anpassen können, erfahren Sie unter [Veröffentlichen eines Subgraphen](/subgraphs/developing/publishing/publishing-a-subgraph/). #### Hinzufügen von Signalen zu Ihrem Subgraphen -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. Um Indexierer für die Abfrage Ihres Subgraphen zu gewinnen, sollten Sie ihn mit einem GRT-Kurationssignal versehen. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - Diese Maßnahme verbessert die Servicequalität, verringert die Latenz und erhöht die Netzwerkredundanz und -verfügbarkeit für Ihren Subgraphen. 2. Indexer erhalten GRT Rewards auf der Grundlage des signalisierten Betrags, wenn sie für Indexing Rewards in Frage kommen. - - Es wird empfohlen, mindestens 3.000 GRT zu kuratieren, um 3 Indexer anzuziehen. Prüfen Sie die Berechtigung zum Reward anhand der Nutzung der Subgraph-Funktionen und der unterstützten Netzwerke. + - Es wird empfohlen, mindestens 3.000 GRT zu kuratieren, um 3 Indexierer zu gewinnen. Prüfen Sie die Berechtigung zur Belohnung anhand der Nutzung der Subgraph-Funktion und der unterstützten Netzwerke. -To learn more about curation, read [Curating](/resources/roles/curating/). +Um mehr über das Kuratieren zu erfahren, lesen Sie [Kuratieren](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +Um Gaskosten zu sparen, können Sie Ihren Subgraphen in der gleichen Transaktion kuratieren, in der Sie ihn veröffentlichen, indem Sie diese Option wählen: ![Subgraph veröffentlichen](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Abfrage des Subgraphen -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +Sie haben jetzt Zugang zu 100.000 kostenlosen Abfragen pro Monat mit Ihrem Subgraph auf The Graph Network! 
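GraphQL-Antworten kommen als JSON mit einem `data`-Feld und ggf. einem `errors`-Feld zurück, oft auch bei HTTP-Status 200. Eine kleine Skizze (mit einer frei erfundenen Beispielantwort), wie man beide Fälle beim Auswerten einer Subgraph-Antwort robust behandelt:

```python
import json

def extract_data(raw: str) -> dict:
    """Liest eine GraphQL-Antwort und wirft einen Fehler, falls `errors` gesetzt ist."""
    body = json.loads(raw)
    if body.get("errors"):
        # GraphQL meldet Abfragefehler im Antwort-Body, nicht zwingend im HTTP-Status.
        raise RuntimeError(f"GraphQL-Fehler: {body['errors']}")
    return body["data"]

# Hypothetische Beispielantwort eines Subgraphen:
antwort = '{"data": {"tokens": [{"id": "0x01", "symbol": "GRT"}]}}'
daten = extract_data(antwort)
print(daten["tokens"][0]["symbol"])  # GRT
```

So fällt eine fehlerhafte Query sofort auf, statt später als fehlender Schlüssel in `data`.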
-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +Sie können Ihren Subgraphen abfragen, indem Sie GraphQL-Abfragen an seine Abfrage-URL senden, die Sie durch Klicken auf die Schaltfläche „Abfrage“ finden können. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +Weitere Informationen zur Abfrage von Daten aus Ihrem Subgraphen finden Sie unter [Abfragen von The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/de/substreams/_meta-titles.json b/website/src/pages/de/substreams/_meta-titles.json index 6262ad528c3a..cf75f2729d64 100644 --- a/website/src/pages/de/substreams/_meta-titles.json +++ b/website/src/pages/de/substreams/_meta-titles.json @@ -1,3 +1,3 @@ { - "developing": "Developing" + "developing": "Entwicklung" } diff --git a/website/src/pages/de/substreams/developing/_meta-titles.json b/website/src/pages/de/substreams/developing/_meta-titles.json index 882ee9fc7c9c..8170106cbff4 100644 --- a/website/src/pages/de/substreams/developing/_meta-titles.json +++ b/website/src/pages/de/substreams/developing/_meta-titles.json @@ -1,4 +1,4 @@ { "solana": "Solana", - "sinks": "Sink your Substreams" + "sinks": "Versenken Sie Ihre Substreams" } diff --git a/website/src/pages/de/substreams/developing/dev-container.mdx b/website/src/pages/de/substreams/developing/dev-container.mdx index bd4acf16eec7..8e4a49286f43 100644 --- a/website/src/pages/de/substreams/developing/dev-container.mdx +++ b/website/src/pages/de/substreams/developing/dev-container.mdx @@ -3,46 +3,46 @@ title: Substreams Dev Container sidebarTitle: Dev Container --- -Develop your first project with Substreams Dev Container. +Entwickeln Sie Ihr erstes Projekt mit Substreams Dev Container. -## What is a Dev Container? +## Was ist ein Dev Container? -It's a tool to help you build your first project.
You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). +Es ist ein Tool, mit dem Sie Ihr erstes Projekt erstellen können. Sie können es entweder aus der Ferne über GitHub Codespaces oder lokal durch Klonen des [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file) ausführen. -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Innerhalb des Dev Containers richtet der Befehl `substreams init` ein codegeneriertes Substreams-Projekt ein, mit dem Sie auf einfache Weise einen Subgraph oder eine SQL-basierte Lösung für die Datenverarbeitung erstellen können. -## Prerequisites +## Voraussetzungen -- Ensure Docker and VS Code are up-to-date. +- Stellen Sie sicher, dass Docker und VS Code auf dem neuesten Stand sind. -## Navigating the Dev Container +## Navigieren im Dev Container -In the Dev Container, you can either build or import your own `substreams.yaml` and associate modules within the minimal path or opt for the automatically generated Substreams paths. Then, when you run the `Substreams Build` it will generate the Protobuf files. +Im Dev Container können Sie entweder Ihre eigene `substreams.yaml` erstellen oder importieren und Module innerhalb des Minimalpfades assoziieren oder sich für die automatisch generierten Substreams-Pfade entscheiden. Wenn Sie dann `Substreams Build` ausführen, werden die Protobuf-Dateien generiert. -### Options +### Optionen -- **Minimal**: Starts you with the raw block `.proto` and requires development. This path is intended for experienced users. -- **Non-Minimal**: Extracts filtered data using network-specific caches and Protobufs taken from corresponding foundational modules (maintained by the StreamingFast team).
This path generates a working Substreams out of the box.
+- **Minimal**: Beginnt mit dem Rohblock `.proto` und erfordert Entwicklung. Dieser Pfad ist für erfahrene Benutzer gedacht.
+- **Nicht-Minimal**: Extrahiert gefilterte Daten unter Verwendung von netzspezifischen Caches und Protobufs aus den entsprechenden Grundmodulen (die vom StreamingFast-Team gepflegt werden). Dieser Pfad generiert einen sofort einsatzbereiten Substreams.

-To share your work with the broader community, publish your `.spkg` to [Substreams registry](https://substreams.dev/) using:
+Um Ihre Arbeit mit einer breiteren Community zu teilen, veröffentlichen Sie Ihr `.spkg` im [Substreams registry](https://substreams.dev/):

- `substreams registry login`
- `substreams registry publish`

-> Note: If you run into any problems within the Dev Container, use the `help` command to access trouble shooting tools.
+> Hinweis: Wenn Sie im Dev Container auf Probleme stoßen, verwenden Sie den Befehl `help`, um auf Tools zur Fehlerbehebung zuzugreifen.

-## Building a Sink for Your Project
+## Eine Senke für Ihr Projekt erstellen

-You can configure your project to query data either through a Subgraph or directly from an SQL database:
+Sie können Ihr Projekt so konfigurieren, dass Daten entweder über einen Subgraphen oder direkt von einer SQL-Datenbank abgefragt werden:

-- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
-- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).
+- **Subgraph**: Führen Sie `substreams codegen subgraph` aus.
Dies erzeugt ein Projekt mit einer grundlegenden `schema.graphql`- und `mappings.ts`-Datei. Sie können diese anpassen, um Entitäten basierend auf den von Substreams extrahierten Daten zu definieren. Für weitere Konfigurationen siehe [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
+- **SQL**: Führen Sie `substreams codegen sql` für SQL-basierte Abfragen aus. Weitere Informationen zur Konfiguration einer SQL-Senke finden Sie in der [SQL-Dokumentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).

-## Deployment Options
+## Bereitstellungsoptionen

-To deploy a Subgraph, you can either run the `graph-node` locally using the `deploy-local` command or deploy to Subgraph Studio by using the `deploy` command found in the `package.json` file.
+Um einen Subgraphen bereitzustellen, können Sie entweder den `graph-node` lokal mit dem Befehl `deploy-local` ausführen oder ihn mit dem Befehl `deploy` aus der Datei `package.json` in Subgraph Studio bereitstellen.

-## Common Errors
+## Häufige Fehler

-- When running locally, make sure to verify that all Docker containers are healthy by running the `dev-status` command.
-- If you put the wrong start-block while generating your project, navigate to the `substreams.yaml` to change the block number, then re-run `substreams build`.
+- Wenn Sie lokal arbeiten, stellen Sie sicher, dass alle Docker-Container fehlerfrei laufen, indem Sie den Befehl `dev-status` ausführen.
+- Wenn Sie beim Generieren Ihres Projekts den falschen Startblock gesetzt haben, navigieren Sie zur `substreams.yaml`, um die Blocknummer zu ändern, und führen Sie dann `substreams build` erneut aus.
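Zur Veranschaulichung des letzten Punkts: Der Startblock wird im Manifest über das Feld `initialBlock` des jeweiligen Moduls gesetzt. Eine minimale Skizze (Paket- und Modulnamen sind hier frei gewählte Platzhalter, keine Vorgaben aus dem Text):

```yaml
specVersion: v0.1.0
package:
  name: mein_projekt # Platzhalter
  version: v0.1.0
modules:
  - name: map_events # Platzhalter
    kind: map
    # Hier die Blocknummer anpassen und danach `substreams build` erneut ausführen:
    initialBlock: 12345678
    inputs:
      - source: sf.ethereum.type.v2.Block
    output:
      type: proto:beispiel.Events
```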
diff --git a/website/src/pages/de/substreams/developing/sinks.mdx b/website/src/pages/de/substreams/developing/sinks.mdx
index 6990190c555d..9902c99e2b3d 100644
--- a/website/src/pages/de/substreams/developing/sinks.mdx
+++ b/website/src/pages/de/substreams/developing/sinks.mdx
@@ -1,32 +1,32 @@
---
-title: Official Sinks
+title: Offizielle Sinks
---

-Choose a sink that meets your project's needs.
+Wählen Sie eine Senke, die den Anforderungen Ihres Projekts entspricht.

## Überblick

-Once you find a package that fits your needs, you can choose how you want to consume the data.
+Sobald Sie ein Paket gefunden haben, das Ihren Anforderungen entspricht, können Sie wählen, wie Sie die Daten nutzen möchten.

-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph.
+Senken sind Integrationen, die es Ihnen ermöglichen, die extrahierten Daten an verschiedene Ziele zu senden, z. B. an eine SQL-Datenbank, eine Datei oder einen Subgraphen.

## Sinks

-> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed.
+> Hinweis: Einige der Sinks werden offiziell vom StreamingFast-Entwicklungsteam unterstützt (d.h. es wird aktiver Support angeboten), aber andere Sinks werden von der Community betrieben und der Support kann nicht garantiert werden.

-- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network.
-- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
-- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
-- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
+- [SQL-Datenbank](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Senden Sie die Daten an eine Datenbank.
+- [Subgraph](/sps/introduction/): Konfigurieren Sie eine API, die Ihren Datenanforderungen entspricht, und hosten Sie sie im The Graph Network.
+- [Direktes Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Streamen Sie Daten direkt aus Ihrer Anwendung.
+- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Senden von Daten an ein PubSub-Thema.
+- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Erforschen Sie hochwertige, von der Community unterhaltene Sinks.

-> Important: If you’d like your sink (e.g., SQL or PubSub) hosted for you, reach out to the StreamingFast team [here](mailto:sales@streamingfast.io).
+> Wichtig: Wenn Sie möchten, dass Ihre Senke (z. B. SQL oder PubSub) für Sie gehostet wird, wenden Sie sich [hier](mailto:sales@streamingfast.io) an das StreamingFast-Team.
-## Navigating Sink Repos +## Sink Repos navigieren -### Official +### Offiziell -| Name | Support | Maintainer | Source Code | +| Name | Support | Maintainer | Quellcode | | --- | --- | --- | --- | | SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | | Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | @@ -40,12 +40,12 @@ Sinks are integrations that allow you to send the extracted data to different de ### Community -| Name | Support | Maintainer | Source Code | +| Name | Support | Maintainer | Quellcode | | --- | --- | --- | --- | | MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | | Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | | KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | | Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -- O = Official Support (by one of the main Substreams providers) +- O = Offizielle Unterstützung (durch einen der wichtigsten Substreams-Anbieter) - C = Community Support diff --git a/website/src/pages/de/substreams/developing/solana/account-changes.mdx b/website/src/pages/de/substreams/developing/solana/account-changes.mdx index 74c54f3760c7..64e919552244 100644 --- a/website/src/pages/de/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/de/substreams/developing/solana/account-changes.mdx @@ -1,57 +1,57 @@ --- -title: Solana Account Changes -sidebarTitle: Account Changes +title: Änderungen am Solana-Konto +sidebarTitle: Kontoänderungen --- -Learn how to consume Solana account change data using Substreams. +Erfahren Sie, wie Sie Solana-Konto-Änderungsdaten mithilfe von Substreams nutzen können. 
## Einführung

-This guide walks you through the process of setting up your environment, configuring your first Substreams stream, and consuming account changes efficiently. By the end of this guide, you will have a working Substreams feed that allows you to track real-time account changes on the Solana blockchain, as well as historical account change data.
+Dieser Leitfaden führt Sie durch den Prozess der Einrichtung Ihrer Umgebung, der Konfiguration Ihres ersten Substreams-Streams und der effizienten Nutzung von Kontoänderungen. Am Ende dieses Leitfadens werden Sie einen funktionierenden Substreams-Feed haben, der es Ihnen ermöglicht, Kontoänderungen in Echtzeit auf der Solana-Blockchain zu verfolgen, sowie historische Daten zu Kontoänderungen.

-> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601.
+> HINWEIS: Die Historie der Solana-Kontoänderungen reicht bis zum Jahr 2025, Block 310629601, zurück.

-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+Für jeden Substreams Solana-Kontoblock wird nur die letzte Aktualisierung pro Konto aufgezeichnet, siehe die [Protobuf-Referenz](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). Wenn ein Konto gelöscht wird, wird ein Payload mit `deleted == True` geliefert. Darüber hinaus werden Ereignisse von geringer Bedeutung ausgelassen, z. B. solche mit dem speziellen Eigentümer-Konto „Vote11111111...“ oder Änderungen, die sich nicht auf die Kontodaten auswirken (z. B. Lamport-Änderungen).
-> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
+> HINWEIS: Um die Substreams-Latenz für Solana-Konten zu testen, gemessen als Block-Head-Drift, installieren Sie die [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) und führen Sie `substreams run solana-common blocks_without_votes -s -1 -o clock` aus.

## Erste Schritte

-### Prerequisites
+### Voraussetzungen

-Before you begin, ensure that you have the following:
+Bevor Sie beginnen, vergewissern Sie sich, dass Sie über Folgendes verfügen:

-1. [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installed.
-2. A [Substreams key](https://docs.substreams.dev/reference-material/substreams-cli/authentication) for access to the Solana Account Change data.
-3. Basic knowledge of [how to use](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) the command line interface (CLI).
+1. [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installiert.
+2. Ein [Substreams-Schlüssel](https://docs.substreams.dev/reference-material/substreams-cli/authentication) für den Zugriff auf die Solana-Kontoänderungsdaten.
+3. Grundlegende Kenntnisse im [Umgang mit](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) der Befehlszeilenschnittstelle (CLI).

-### Step 1: Set Up a Connection to Solana Account Change Substreams
+### Schritt 1: Einrichten einer Verbindung zu Solana Account Change Substreams

-Now that you have Substreams CLI installed, you can set up a connection to the Solana Account Change Substreams feed.
+Nachdem Sie nun die Substreams CLI installiert haben, können Sie eine Verbindung zum Solana Account Change Substreams-Feed herstellen.

-- Using the [Solana Accounts Foundational Module](https://substreams.dev/packages/solana-accounts-foundational/latest), you can choose to stream data directly or use the GUI for a more visual experience. The following `gui` example filters for Honey Token account data.
+- Mit dem [Solana Accounts Foundational Module](https://substreams.dev/packages/solana-accounts-foundational/latest) können Sie wählen, ob Sie Daten direkt streamen oder die grafische Benutzeroberfläche (GUI) für eine bessere visuelle Darstellung verwenden möchten. Das folgende `gui`-Beispiel filtert nach Honey Token-Kontodaten.

```bash
substreams gui solana-accounts-foundational filtered_accounts -t +10 -p filtered_accounts="owner:TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA || account:4vMsoUT2BWatFweudnQM1xedRLfJgJ7hswhcpz4xgBTy"
```

-- This command will stream account changes directly to your terminal.
+- Mit diesem Befehl werden Kontoänderungen direkt in Ihr Terminal übertragen.

```bash
substreams run solana-accounts-foundational filtered_accounts -s -1 -o clock
```

-The Foundational Module has support for filtering on specific accounts and/or owners. You can adjust the query based on your needs.
+Das Basismodul unterstützt die Filterung nach bestimmten Konten und/oder Eigentümern. Sie können die Abfrage an Ihre Bedürfnisse anpassen.

-### Step 2: Sink the Substreams
+### Schritt 2: Die Substreams versenken

-Consume the account stream [directly in your application](https://docs.substreams.dev/how-to-guides/sinks/stream) using a callback or make it queryable by using the [SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).
+Verwenden Sie den Kontenstrom [direkt in Ihrer Anwendung](https://docs.substreams.dev/how-to-guides/sinks/stream) mit einem Callback oder machen Sie ihn mit der [SQL-DB-Senke](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) abfragbar.

-### Step 3: Setting up a Reconnection Policy
+### Schritt 3: Einrichten einer Verbindungswiederherstellungsrichtlinie

-[Cursor Management](https://docs.substreams.dev/reference-material/reliability-guarantees) ensures seamless continuity and retraceability by allowing you to resume from the last consumed block if the connection is interrupted. This functionality prevents data loss and maintains a persistent stream.
+Die [Cursor-Verwaltung](https://docs.substreams.dev/reference-material/reliability-guarantees) sorgt für nahtlose Kontinuität und Rückverfolgbarkeit, indem sie es Ihnen ermöglicht, bei einer Unterbrechung der Verbindung ab dem zuletzt konsumierten Block fortzufahren. Diese Funktion verhindert Datenverluste und sorgt für die Aufrechterhaltung eines kontinuierlichen Datenstroms.
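Zur Veranschaulichung der Cursor-Verwaltung eine minimale, dateibasierte Skizze in Rust (Annahme: stark vereinfachtes Beispiel, nicht die echte Sink-API; Pfad und Funktionsnamen sind frei gewählt). Der zuletzt verarbeitete Cursor wird nach jedem Block gespeichert und beim Neustart wieder geladen:

```rust
use std::fs;

// Vereinfachte, dateibasierte Cursor-Persistenz (nur zur Illustration).
fn save_cursor(path: &str, cursor: &str) -> std::io::Result<()> {
    // Nach jedem verarbeiteten Block den aktuellen Cursor ablegen
    fs::write(path, cursor)
}

fn load_cursor(path: &str) -> Option<String> {
    // Beim Neustart: gespeicherten Cursor laden, sonst None (vom Startblock beginnen)
    fs::read_to_string(path).ok()
}

fn main() -> std::io::Result<()> {
    let path = "cursor.txt";
    save_cursor(path, "abc123")?;
    println!("{}", load_cursor(path).unwrap_or_default());
    Ok(())
}
```

Beim erneuten Verbinden übergibt man den geladenen Cursor an die Streaming-Anfrage, damit der Anbieter ab genau dieser Position fortsetzt.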
-When creating or using a sink, the user's primary responsibility is to provide implementations of BlockScopedDataHandler and a BlockUndoSignalHandler implementation(s) which has the following interface:
+Bei der Erstellung oder Verwendung einer Senke ist der Benutzer in erster Linie dafür verantwortlich, Implementierungen von BlockScopedDataHandler und eine BlockUndoSignalHandler-Implementierung(en) bereitzustellen, die die folgende Schnittstelle aufweisen:

```go
import (
diff --git a/website/src/pages/de/substreams/developing/solana/transactions.mdx b/website/src/pages/de/substreams/developing/solana/transactions.mdx
index 74bb987f4578..d4c6b01ad24e 100644
--- a/website/src/pages/de/substreams/developing/solana/transactions.mdx
+++ b/website/src/pages/de/substreams/developing/solana/transactions.mdx
@@ -1,61 +1,61 @@
---
-title: Solana Transactions
-sidebarTitle: Transactions
+title: Solana-Transaktionen
+sidebarTitle: Transaktionen
---

-Learn how to initialize a Solana-based Substreams project within the Dev Container.
+Erfahren Sie, wie Sie ein Solana-basiertes Substreams-Projekt im Dev Container initialisieren.

-> Note: This guide excludes [Account Changes](/substreams/developing/solana/account-changes/).
+> Hinweis: Diese Anleitung schließt [Kontoänderungen](/substreams/developing/solana/account-changes/) aus.

-## Options
+## Optionen

-If you prefer to begin locally within your terminal rather than through the Dev Container (VS Code required), refer to the [Substreams CLI installation guide](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli).
+Wenn Sie es vorziehen, lokal in Ihrem Terminal zu beginnen, anstatt über den Dev Container (VS Code erforderlich), lesen Sie die [Substreams CLI Installationsanleitung](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli).

-## Step 1: Initialize Your Solana Substreams Project
+## Schritt 1: Initialisieren Sie Ihr Solana Substreams Projekt

-1.
Open the [Dev Container](https://github.com/streamingfast/substreams-starter) and follow the on-screen steps to initialize your project.
+1. Öffnen Sie den [Dev Container](https://github.com/streamingfast/substreams-starter) und folgen Sie den Schritten auf dem Bildschirm, um Ihr Projekt zu initialisieren.

-2. Running `substreams init` will give you the option to choose between two Solana project options. Select the best option for your project:
-   - **sol-minimal**: This creates a simple Substreams that extracts raw Solana block data and generates corresponding Rust code. This path will start you with the full raw block, and you can navigate to the `substreams.yaml` (the manifest) to modify the input.
-   - **sol-transactions**: This creates a Substreams that filters Solana transactions based on one or more Program IDs and/or Account IDs, using the cached [Solana Foundational Module](https://substreams.dev/streamingfast/solana-common/v0.3.0).
-   - **sol-anchor-beta**: This creates a Substreams that decodes instructions and events with an Anchor IDL. If an IDL isn’t available (reference [Anchor CLI](https://www.anchor-lang.com/docs/cli)), then you’ll need to provide it yourself.
+2. Wenn Sie `substreams init` ausführen, haben Sie die Möglichkeit, zwischen mehreren Solana-Projektoptionen zu wählen. Wählen Sie die beste Option für Ihr Projekt:
+   - **sol-minimal**: Damit wird ein einfacher Substreams erstellt, der die Rohdaten des Solana-Blocks extrahiert und den entsprechenden Rust-Code erzeugt. Dieser Pfad startet mit dem vollständigen Rohblock, und Sie können zur `substreams.yaml` (dem Manifest) navigieren, um die Eingabe zu ändern.
+   - **sol-transactions**: Damit wird ein Substreams erstellt, der Solana-Transaktionen auf der Grundlage einer oder mehrerer Programm-IDs und/oder Konto-IDs filtert, wobei das zwischengespeicherte [Solana-Grundlagenmodul](https://substreams.dev/streamingfast/solana-common/v0.3.0) verwendet wird.
+ - **sol-anchor-beta**: Dies erzeugt einen Substream, der Anweisungen und Ereignisse mit einer Anchor-IDL dekodiert. Wenn eine IDL nicht verfügbar ist (siehe [Anchor CLI](https://www.anchor-lang.com/docs/cli)), müssen Sie sie selbst bereitstellen. -The modules within Solana Common do not include voting transactions. To gain a 75% reduction in data processing size and costs, delay your stream by over 1000 blocks from the head. This can be done using the [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) function in Rust. +Die Module in Solana Common enthalten keine Abstimmungstransaktionen. Um eine 75%ige Reduzierung der Datenverarbeitungsgröße und -kosten zu erreichen, verzögern Sie Ihren Stream um mehr als 1000 Blöcke vom Kopf. Dies kann mit der Funktion [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) in Rust erreicht werden. -To access voting transactions, use the full Solana block, `sf.solana.type.v1.Block`, as input. +Für den Zugriff auf Abstimmungsvorgänge ist der vollständige Solana-Block `sf.solana.type.v1.Block` als Eingabe zu verwenden. -## Step 2: Visualize the Data +## Schritt 2: Visualisierung der Daten -1. Run `substreams auth` to create your [account](https://thegraph.market/) and generate an authentication token (JWT), then pass this token back as input. +1. Führen Sie `substreams auth` aus, um Ihr [Konto](https://thegraph.market/) zu erstellen und ein Authentifizierungs-Token (JWT) zu generieren, und geben Sie dieses Token als Eingabe zurück. -2. Now you can freely use the `substreams gui` to visualize and iterate on your extracted data. +2. Jetzt können Sie die `substreams gui` frei verwenden, um Ihre extrahierten Daten zu visualisieren und zu iterieren. -## Step 2.5: (Optionally) Transform the Data +## Schritt 2.5: (Optional) Transformation der Daten -Within the generated directories, modify your Substreams modules to include additional filters, aggregations, and transformations, then update the manifest accordingly. 
+Ändern Sie innerhalb der generierten Verzeichnisse Ihre Substreams-Module, um zusätzliche Filter, Aggregationen und Transformationen aufzunehmen, und aktualisieren Sie das Manifest entsprechend.

-## Step 3: Load the Data
+## Schritt 3: Laden der Daten

-To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink.
+Um Ihre Substreams abfragbar zu machen (im Gegensatz zu [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), können Sie automatisch einen [Substreams-powered subgraph](/sps/introduction/) oder eine SQL-DB-Senke erzeugen.

-### Subgraph
+### Subgraph

-1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions.
-2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
-3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`.
+1. Führen Sie `substreams codegen subgraph` aus, um die Senke zu initialisieren und die erforderlichen Dateien und Funktionsdefinitionen zu erstellen.
+2. Erstellen Sie Ihre [Subgraph-Mappings](/sps/triggers/) in der Datei `mappings.ts` und die zugehörigen Entitäten in der Datei `schema.graphql`.
+3. Erstellen und verteilen Sie lokal oder in [Subgraph Studio](https://thegraph.com/studio-pricing/), indem Sie `deploy-studio` ausführen.

### SQL

-1. Run `substreams codegen sql` and choose from either ClickHouse or Postgres to initialize the sink, producing the necessary files.
-2. Run `substreams build` build the [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) sink.
-3. Run `substreams-sink-sql` to sink the data into your selected SQL DB.
+1.
Führen Sie `substreams codegen sql` aus und wählen Sie entweder ClickHouse oder Postgres aus, um die Senke zu initialisieren und die erforderlichen Dateien zu erzeugen.
+2. Führen Sie `substreams build` aus, um die [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) Senke zu bauen.
+3. Führen Sie `substreams-sink-sql` aus, um die Daten in die von Ihnen ausgewählte SQL-DB zu übertragen.

-> Note: Run `help` to better navigate the development environment and check the health of containers.
+> Hinweis: Führen Sie `help` aus, um sich in der Entwicklungsumgebung besser zurechtzufinden und den Zustand der Container zu überprüfen.

## Zusätzliche Ressourcen

-You may find these additional resources helpful for developing your first Solana application.
+Vielleicht finden Sie diese zusätzlichen Ressourcen hilfreich für die Entwicklung Ihrer ersten Solana-Anwendung.

-- The [Dev Container Reference](/substreams/developing/dev-container/) helps you navigate the container and its common errors.
-- The [CLI reference](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) lets you explore all the tools available in the Substreams CLI.
-- The [Components Reference](https://docs.substreams.dev/reference-material/substreams-components/packages) dives deeper into navigating the `substreams.yaml`.
+- Die [Dev-Container-Referenz](/substreams/developing/dev-container/) hilft Ihnen bei der Navigation im Container und bei häufigen Fehlern.
+- Mit der [CLI-Referenz](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) können Sie alle in der Substreams-CLI verfügbaren Tools erkunden.
+- Die [Komponenten-Referenz](https://docs.substreams.dev/reference-material/substreams-components/packages) taucht tiefer in die Navigation in der `substreams.yaml` ein.
diff --git a/website/src/pages/de/substreams/introduction.mdx b/website/src/pages/de/substreams/introduction.mdx index feb5b5d6fb13..b835c7916802 100644 --- a/website/src/pages/de/substreams/introduction.mdx +++ b/website/src/pages/de/substreams/introduction.mdx @@ -1,45 +1,45 @@ --- -title: Introduction to Substreams +title: Einführung in Substreams sidebarTitle: Einführung --- ![Substreams Logo](/img/substreams-logo.png) -To start coding right away, check out the [Substreams Quick Start](/substreams/quick-start/). +Wenn Sie sofort mit dem Programmieren beginnen möchten, lesen Sie den [Substreams Quick Start](/substreams/quick-start/). ## Überblick -Substreams is a powerful parallel blockchain indexing technology designed to enhance performance and scalability within The Graph Network. +Substreams ist eine leistungsstarke parallele Blockchain-Indizierungstechnologie, die entwickelt wurde, um die Leistung und Skalierbarkeit innerhalb von The Graph Network zu verbessern. -## Substreams Benefits +## Substreams Vorteile -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. -- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. -- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. -- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. +- **Beschleunigte Indizierung**: Beschleunigen Sie die Indizierung von Subgraphen mit einer parallelisierten Engine für schnelleren Datenabruf und -verarbeitung. +- **Multi-Ketten-Unterstützung**: Erweitern Sie die Indizierungsmöglichkeiten über EVM-basierte Ketten hinaus und unterstützen Sie Ökosysteme wie Solana, Injective, Starknet und Vara. 
+- **Erweitertes Datenmodell**: Zugriff auf umfassende Daten, einschließlich der `trace`-Ebene von EVM oder Kontoänderungen auf Solana, bei effizienter Verwaltung von Forks/Trennungen.
+- **Multi-Sink-Unterstützung:** Für Subgraph, Postgres-Datenbank, Clickhouse und Mongo-Datenbank.

-## How Substreams Works in 4 Steps
+## So funktioniert Substreams in 4 Schritten

-1. You write a Rust program, which defines the transformations that you want to apply to the blockchain data. For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash).
+1. Sie schreiben ein Rust-Programm, das die Transformationen definiert, die Sie auf die Blockchain-Daten anwenden möchten. Zum Beispiel extrahiert die folgende Rust-Funktion relevante Informationen aus einem Ethereum-Block (Nummer, Hash und übergeordneter Hash).

```rust
fn get_my_block(blk: Block) -> Result<MyBlock, substreams::errors::Error> {
    let header = blk.header.as_ref().unwrap();

    Ok(MyBlock {
        number: blk.number,
        hash: Hex::encode(&blk.hash),
        parent_hash: Hex::encode(&header.parent_hash),
    })
}
```

-2. You wrap up your Rust program into a WASM module just by running a single CLI command.
+2. Sie verpacken Ihr Rust-Programm in ein WASM-Modul, indem Sie einfach einen einzigen CLI-Befehl ausführen.

-3. The WASM container is sent to a Substreams endpoint for execution. The Substreams provider feeds the WASM container with the blockchain data and the transformations are applied.
+3. Der WASM-Container wird zur Ausführung an einen Substreams-Endpunkt gesendet. Der Substreams-Anbieter füttert den WASM-Container mit den Blockchain-Daten und die Transformationen werden angewendet.

-4.
You select a [sink](https://docs.substreams.dev/how-to-guides/sinks), a place where you want to send the transformed data (such as a SQL database or a Subgraph).
+4. Sie wählen eine [Senke](https://docs.substreams.dev/how-to-guides/sinks), einen Ort, an den Sie die umgewandelten Daten senden möchten (z. B. eine SQL-Datenbank oder einen Subgraphen).

## Zusätzliche Ressourcen

-All Substreams developer documentation is maintained by the StreamingFast core development team on the [Substreams registry](https://docs.substreams.dev).
+Die gesamte Substreams-Entwicklerdokumentation wird vom StreamingFast-Kernentwicklungsteam auf der [Substreams-Registry](https://docs.substreams.dev) gepflegt.

diff --git a/website/src/pages/de/substreams/publishing.mdx b/website/src/pages/de/substreams/publishing.mdx
index c2878910fb9e..fb43367658ca 100644
--- a/website/src/pages/de/substreams/publishing.mdx
+++ b/website/src/pages/de/substreams/publishing.mdx
@@ -1,53 +1,53 @@
---
-title: Publishing a Substreams Package
-sidebarTitle: Publishing
+title: Veröffentlichung eines Substreams-Pakets
+sidebarTitle: Veröffentlichung
---

-Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev).
+Erfahren Sie, wie Sie ein Substreams-Paket in der [Substreams Registry](https://substreams.dev) veröffentlichen.

## Überblick

-### What is a package?
+### Was ist ein Paket?

-A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs.
+Ein Substreams-Paket ist eine vorkompilierte Binärdatei, die die spezifischen Daten definiert, die Sie aus der Blockchain extrahieren möchten, ähnlich wie die Datei `mapping.ts` in traditionellen Subgraphen.

-## Publish a Package
+## Veröffentlichung eines Pakets

-### Prerequisites
+### Voraussetzungen

-- You must have the Substreams CLI installed.
-- You must have a Substreams package (`.spkg`) that you want to publish.
+- Sie müssen die Substreams CLI installiert haben.
+- Sie müssen ein Substreams-Paket (`.spkg`) haben, das Sie veröffentlichen wollen.

-### Step 1: Run the `substreams publish` Command
+### Schritt 1: Führen Sie den Befehl `substreams publish` aus

-1. In a command-line terminal, run `substreams publish .spkg`.
+1. Führen Sie in einem Befehlszeilen-Terminal den Befehl `substreams publish .spkg` aus.

-2. If you do not have a token set in your computer, navigate to `https://substreams.dev/me`.
+2. Wenn auf Ihrem Computer kein Token hinterlegt ist, navigieren Sie zu `https://substreams.dev/me`.

![get token](/img/1_get-token.png)

-### Step 2: Get a Token in the Substreams Registry
+### Schritt 2: Erhalten Sie ein Token in der Substreams Registry

-1. In the Substreams Registry, log in with your GitHub account.
+1. Melden Sie sich in der Substreams Registry mit Ihrem GitHub-Konto an.

-2. Create a new token and copy it in a safe location.
+2. Erstellen Sie einen neuen Token und kopieren Sie ihn an einen sicheren Ort.

-![new token](/img/2_new_token.png)
+![neues Token](/img/2_new_token.png)

-### Step 3: Authenticate in the Substreams CLI
+### Schritt 3: Authentifizierung in der Substreams-CLI

-1. Back in the Substreams CLI, paste the previously generated token.
+1. Zurück in der Substreams-CLI fügen Sie das zuvor generierte Token ein.

-![paste token](/img/3_paste_token.png)
+![Token einfügen](/img/3_paste_token.png)

-2. Lastly, confirm that you want to publish the package.
+2. Bestätigen Sie abschließend, dass Sie das Paket veröffentlichen möchten.

-![confirm](/img/4_confirm.png)
+![bestätigen](/img/4_confirm.png)

-That's it! You have succesfully published a package in the Substreams registry.
+Das war's! Sie haben erfolgreich ein Paket in der Substreams Registry veröffentlicht.
-![success](/img/5_success.png) +![Erfolg](/img/5_success.png) ## Zusätzliche Ressourcen -Visit [Substreams](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. +Besuchen Sie [Substreams](https://substreams.dev/), um eine wachsende Sammlung von gebrauchsfertigen Substreams-Paketen für verschiedene Blockchain-Netzwerke zu entdecken. diff --git a/website/src/pages/de/substreams/quick-start.mdx b/website/src/pages/de/substreams/quick-start.mdx index cd29be60d2f9..6d82be0f8ac1 100644 --- a/website/src/pages/de/substreams/quick-start.mdx +++ b/website/src/pages/de/substreams/quick-start.mdx @@ -3,28 +3,28 @@ title: Substreams Kurzanleitung sidebarTitle: Schnellstart --- -Discover how to utilize ready-to-use substream packages or develop your own. +Entdecken Sie, wie Sie gebrauchsfertige Substreams-Pakete verwenden oder eigene entwickeln können. ## Überblick -Integrating Substreams can be quick and easy. They are permissionless, and you can [obtain a key here](https://thegraph.market/) without providing personal information to start streaming on-chain data. +Die Integration von Substreams kann schnell und einfach sein. Sie sind erlaubnisfrei, und Sie können [hier einen Schlüssel erhalten](https://thegraph.market/), ohne persönliche Informationen anzugeben, um mit dem Streaming von Onchain-Daten zu beginnen. ## Start des Erstellens -### Use Substreams Packages +### Substreams-Pakete verwenden -There are many ready-to-use Substreams packages available. You can explore these packages by visiting the [Substreams Registry](https://substreams.dev) and [sinking them](/substreams/developing/sinks/). The registry lets you search for and find any package that meets your needs. +Es sind viele gebrauchsfertige Substreams-Pakete verfügbar. Sie können diese Pakete erkunden, indem Sie die [Substreams Registry](https://substreams.dev) besuchen und sie [in einen Sink leiten](/substreams/developing/sinks/).
In der Registry können Sie jedes Paket suchen und finden, das Ihren Anforderungen entspricht. -Once you find a package that fits your needs, you can choose how you want to consume the data: +Sobald Sie ein Paket gefunden haben, das Ihren Anforderungen entspricht, können Sie wählen, wie Sie die Daten nutzen möchten: -- **[Subgraph](/sps/introduction/)**: Configure an API to meet your data needs and host it on The Graph Network. -- **[SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Send the data to a database. -- **[Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Stream data directly to your application. -- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Send data to a PubSub topic. +- **[Subgraph](/sps/introduction/)**: Konfigurieren Sie eine API, die Ihren Datenanforderungen entspricht, und hosten Sie sie im The Graph Network. +- **[SQL-Datenbank](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Senden Sie die Daten an eine Datenbank. +- **[Direktes Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Streamen Sie Daten direkt in Ihre Anwendung. +- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Daten an ein PubSub-Thema senden. -### Develop Your Own +### Entwickeln Sie Ihr eigenes -If you can't find a Substreams package that meets your specific needs, you can develop your own. Substreams are built with Rust, so you'll write functions that extract and filter the data you need from the blockchain. To get started, check out the following tutorials: +Wenn Sie kein Substreams-Paket finden können, das Ihren speziellen Anforderungen entspricht, können Sie Ihr eigenes entwickeln. Substreams werden mit Rust erstellt, sodass Sie Funktionen schreiben, die die benötigten Daten aus der Blockchain extrahieren und filtern.
Schauen Sie sich für den Einstieg die folgenden Tutorials an: - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Solana](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-solana) @@ -32,11 +32,11 @@ If you can't find a Substreams package that meets your specific needs, you can d - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) -To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/). +Um Ihre Substreams von Anfang an zu erstellen und zu optimieren, verwenden Sie den minimalen Pfad innerhalb des [Dev Containers](/substreams/developing/dev-container/). -> Note: Substreams guarantees that you'll [never miss data](https://docs.substreams.dev/reference-material/reliability-guarantees) with a simple reconnection policy. +> Hinweis: Substreams garantiert, dass Sie dank einer einfachen Wiederverbindungsrichtlinie [niemals Daten verpassen](https://docs.substreams.dev/reference-material/reliability-guarantees). ## Zusätzliche Ressourcen -- For additional guidance, reference the [Tutorials](https://docs.substreams.dev/tutorials/intro-to-tutorials) and follow the [How-To Guides](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) on Streaming Fast docs. -- For a deeper understanding of how Substreams works, explore the [architectural overview](https://docs.substreams.dev/reference-material/architecture) of the data service. +- Weitere Anleitungen finden Sie in den [Tutorials](https://docs.substreams.dev/tutorials/intro-to-tutorials) und in den [How-To Guides](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) in der StreamingFast-Dokumentation.
+- Ein tieferes Verständnis der Funktionsweise von Substreams finden Sie in der [Architekturübersicht](https://docs.substreams.dev/reference-material/architecture) des Datendienstes. diff --git a/website/src/pages/de/supported-networks.mdx b/website/src/pages/de/supported-networks.mdx index 7ae7ff45350a..1ae4bd5d095b 100644 --- a/website/src/pages/de/supported-networks.mdx +++ b/website/src/pages/de/supported-networks.mdx @@ -1,5 +1,5 @@ --- -title: Supported Networks +title: Unterstützte Netzwerke hideTableOfContents: true hideContentHeader: true --- @@ -16,13 +16,13 @@ export const getStaticProps = getSupportedNetworksStaticProps -- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. -- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- Subgraph Studio verlässt sich auf die Stabilität und Zuverlässigkeit der zugrundeliegenden Technologien, z. B. JSON-RPC-, Firehose- und Substreams-Endpunkte. +- Subgraphs, die die Gnosis-Kette indizieren, können jetzt mit dem `gnosis`-Netzwerkidentifikator eingesetzt werden. +- Wenn ein Subgraph über die CLI veröffentlicht und von einem Indexer aufgenommen wurde, könnte er technisch gesehen auch ohne Unterstützung abgefragt werden, und es wird daran gearbeitet, die Integration neuer Netzwerke weiter zu vereinfachen. +- Für eine vollständige Liste, welche Funktionen im dezentralen Netzwerk unterstützt werden, siehe [diese Seite](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
-## Running Graph Node locally +## Graph Node lokal ausführen If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node kann über eine Firehose-Integration auch andere Protokolle indizieren. Firehose-Integrationen wurden für NEAR, Arweave und Cosmos-basierte Netzwerke erstellt. Darüber hinaus kann Graph Node Subgraphs auf Basis von Substreams für jedes Netzwerk mit Substreams-Unterstützung unterstützen. diff --git a/website/src/pages/de/token-api/_meta-titles.json b/website/src/pages/de/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/de/token-api/_meta-titles.json +++ b/website/src/pages/de/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/de/token-api/_meta.js b/website/src/pages/de/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/de/token-api/_meta.js +++ b/website/src/pages/de/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/de/token-api/faq.mdx b/website/src/pages/de/token-api/faq.mdx new file mode 100644 index 000000000000..c90af204668f --- /dev/null +++ b/website/src/pages/de/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast 
answers to easily integrate and scale with The Graph's high-performance Token API. + +## Allgemein + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. 
The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default.
Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. 
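The pagination, history, and parsing rules in the answers above can be sketched as a small TypeScript helper. The parameter names (`network_id`, `limit`, `page`, `age`) and the top-level `data` array come from this FAQ; the `amount` field name and the `/balances/evm/{address}` path are used illustratively, and the helper itself is not part of any official SDK.

```typescript
const BASE_URL = "https://token-api.thegraph.com";

interface TokenApiOptions {
  networkId?: string; // e.g. "mainnet", "bsc", "base"; server defaults to Ethereum mainnet
  limit?: number;     // up to 500; endpoints return 10 items by default
  page?: number;      // 1-indexed; page=2 with limit=50 yields items 51-100
  age?: number;       // transfer history window in days, max 180
}

// Build a GET URL such as /balances/evm/{address}?network_id=mainnet&limit=50&page=2
function buildUrl(path: string, opts: TokenApiOptions = {}): string {
  const url = new URL(path, BASE_URL);
  if (opts.networkId) url.searchParams.set("network_id", opts.networkId);
  if (opts.limit !== undefined) url.searchParams.set("limit", String(opts.limit));
  if (opts.page !== undefined) url.searchParams.set("page", String(opts.page));
  if (opts.age !== undefined) url.searchParams.set("age", String(opts.age));
  return url.toString();
}

// Responses wrap results in a top-level `data` array, and token amounts arrive
// as strings; convert them with BigInt to avoid precision loss.
function parseAmounts(response: { data: { amount: string }[] }): bigint[] {
  return response.data.map((item) => BigInt(item.amount));
}
```

An empty `data` array from such a request simply means no matching records, not a failure, so `parseAmounts` returning `[]` is a valid result.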
+ +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. 
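As the answers above note, the Token API is plain REST: any HTTP client plus the JWT suffices, with no MCP or GraphQL involved. The sketch below assembles the request described in this FAQ (the `/balances/evm/{address}` path and `Authorization: Bearer` header are taken from the answers above); `buildRequest` and `getBalances` are hypothetical helper names, and the code assumes Node 18+ for built-in `fetch`.

```typescript
// Assemble the URL and headers for a Token API balance lookup.
function buildRequest(address: string, accessToken: string) {
  return {
    url: `https://token-api.thegraph.com/balances/evm/${address}`,
    headers: {
      Authorization: `Bearer ${accessToken}`, // the one critical header
      Accept: "application/json",             // recommended; JSON is the default anyway
    },
  };
}

// Perform the GET request and unwrap the top-level `data` array.
async function getBalances(address: string, accessToken: string): Promise<unknown[]> {
  const { url, headers } = buildRequest(address, accessToken);
  const res = await fetch(url, { headers });
  // 401/403 usually mean a missing, expired, or non-JWT token
  if (!res.ok) throw new Error(`Token API error: ${res.status}`);
  const body = await res.json();
  return body.data;
}
```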
diff --git a/website/src/pages/de/token-api/mcp/claude.mdx b/website/src/pages/de/token-api/mcp/claude.mdx index 0da8f2be031d..8c151e39a608 100644 --- a/website/src/pages/de/token-api/mcp/claude.mdx +++ b/website/src/pages/de/token-api/mcp/claude.mdx @@ -3,7 +3,7 @@ title: Using Claude Desktop to Access the Token API via MCP sidebarTitle: Claude Desktop --- -## Prerequisites +## Voraussetzungen - [Claude Desktop](https://claude.ai/download) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## Konfiguration Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/de/token-api/mcp/cline.mdx b/website/src/pages/de/token-api/mcp/cline.mdx index ab54c0c8f6f0..d0269aa67aff 100644 --- a/website/src/pages/de/token-api/mcp/cline.mdx +++ b/website/src/pages/de/token-api/mcp/cline.mdx @@ -3,16 +3,16 @@ title: Using Cline to Access the Token API via MCP sidebarTitle: Cline --- -## Prerequisites +## Voraussetzungen - [Cline](https://cline.bot/) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. - The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. 
You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## Konfiguration Create or edit your `cline_mcp_settings.json` file. diff --git a/website/src/pages/de/token-api/mcp/cursor.mdx b/website/src/pages/de/token-api/mcp/cursor.mdx index 658108d1337b..953d283fd2b3 100644 --- a/website/src/pages/de/token-api/mcp/cursor.mdx +++ b/website/src/pages/de/token-api/mcp/cursor.mdx @@ -3,7 +3,7 @@ title: Using Cursor to Access the Token API via MCP sidebarTitle: Cursor --- -## Prerequisites +## Voraussetzungen - [Cursor](https://www.cursor.com/) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## Konfiguration Create or edit your `~/.cursor/mcp.json` file. diff --git a/website/src/pages/de/token-api/quick-start.mdx b/website/src/pages/de/token-api/quick-start.mdx index 4653c3d41ac6..b84fad5f665a 100644 --- a/website/src/pages/de/token-api/quick-start.mdx +++ b/website/src/pages/de/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Schnellstart --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) @@ -11,7 +11,7 @@ The Graph's Token API lets you access blockchain token information via a GET req The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. 
This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. -## Prerequisites +## Voraussetzungen Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. diff --git a/website/src/pages/en/indexing/tooling/graph-node.mdx b/website/src/pages/en/indexing/tooling/graph-node.mdx index 9ae620ae7200..be77cd619d77 100644 --- a/website/src/pages/en/indexing/tooling/graph-node.mdx +++ b/website/src/pages/en/indexing/tooling/graph-node.mdx @@ -330,7 +330,7 @@ Database tables that store entities seem to generally come in two varieties: 'tr For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table. -The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity. +The command `graphman stats show ` shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity. In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. 
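The 1% rule of thumb above can be written out as a small predicate. This is only a sketch of the heuristic as stated in the text — the function name is made up, and in practice the entity and version counts would come from `graphman stats show` output, including the `-1` sentinel meaning Postgres assumes every row is a distinct entity.

```typescript
// Heuristic from the text: a table is a candidate for the account-like
// optimization when distinct entities are under 1% of total entity versions.
function isAccountLikeCandidate(distinctEntities: number, entityVersions: number): boolean {
  // -1 means Postgres believes all rows are distinct, so the ratio is ~100%
  if (distinctEntities < 0) return false;
  if (entityVersions <= 0) return false;
  return distinctEntities / entityVersions < 0.01;
}
```

Because these counts are Postgres-internal estimates that can be off by an order of magnitude, a `true` here only suggests running the slower exact count before enabling the optimization.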
When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show
` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions. diff --git a/website/src/pages/es/about.mdx b/website/src/pages/es/about.mdx index 22dafa9785ad..ffa133b4e0b7 100644 --- a/website/src/pages/es/about.mdx +++ b/website/src/pages/es/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. 
-- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![Un gráfico explicando como The Graph usa Graph Node para servir consultas a los consumidores de datos](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ El flujo sigue estos pasos: 1. Una aplicación descentralizada (dapp) añade datos a Ethereum a través de una transacción en un contrato inteligente. 2. El contrato inteligente emite uno o más eventos mientras procesa la transacción. -3. Graph Node escanea continuamente la red de Ethereum en busca de nuevos bloques y los datos de tu subgrafo que puedan contener. -4. Graph Node encuentra los eventos de la red Ethereum, a fin de proveerlos en tu subgrafo mediante estos bloques y ejecuta los mapping handlers que proporcionaste. 
El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. La dapp consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. La dapp muestra estos datos en una interfaz muy completa para el usuario, a fin de que los end users que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. El ciclo se repite. ## Próximos puntos -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. 
diff --git a/website/src/pages/es/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/es/archived/arbitrum/arbitrum-faq.mdx index 85ad70c11ca2..2b7fe7284fc8 100644 --- a/website/src/pages/es/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/es/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -39,7 +39,7 @@ Para aprovechar el uso de The Graph en L2, usa este conmutador desplegable para ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Como developer de subgrafos, consumidor de datos, Indexador, Curador o Delegador, ¿qué debo hacer ahora? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? 
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/es/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/es/archived/arbitrum/l2-transfer-tools-faq.mdx index 4b5963a153d4..730aa861a37d 100644 --- a/website/src/pages/es/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/es/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con Las Herramientas de Transferencia a L2 utilizan el mecanismo nativo de Arbitrum para enviar mensajes de L1 a L2. Este mecanismo se llama "ticket reintentable" y es utilizado por todos los puentes de tokens nativos, incluido el puente GRT de Arbitrum. Puedes obtener más información sobre los tickets reintentables en la [Documentación de Arbitrum](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). 
-Cuando transfieres tus activos (subgrafo, stake, delegación o curación) a L2, se envía un mensaje a través del puente Arbitrum GRT que crea un ticket reintentable en L2. La herramienta de transferencia incluye un valor ETH en la transacción, que se utiliza para: 1) pagar la creación del ticket y 2) pagar por el gas para ejecutar el ticket en L2. Sin embargo, debido a que los precios del gas pueden variar durante el tiempo hasta que el ticket esté listo para ejecutarse en L2, es posible que este intento de autoejecución falle. Cuando eso sucede, el puente de Arbitrum mantendrá el ticket reintentable activo durante un máximo de 7 días, y cualquier persona puede intentar nuevamente "canjear" el ticket (lo que requiere una wallet con algo de ETH transferido a Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Esto es lo que llamamos el paso de "Confirmar" en todas las herramientas de transferencia. En la mayoría de los casos, se ejecutará automáticamente, ya que la autoejecución suele ser exitosa, pero es importante que vuelvas a verificar para asegurarte de que se haya completado. Si no tiene éxito y no hay reintentos exitosos en 7 días, el puente de Arbitrum descartará el ticket, y tus activos (subgrafo, stake, delegación o curación) se perderán y no podrán recuperarse. 
Los core devs de The Graph tienen un sistema de monitoreo para detectar estas situaciones e intentar canjear los tickets antes de que sea demasiado tarde, pero en última instancia, es tu responsabilidad asegurarte de que tu transferencia se complete a tiempo. Si tienes problemas para confirmar tu transacción, por favor comunícate a través de [este formulario](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) y los core devs estarán allí para ayudarte. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### Comencé la transferencia de mi delegación/stake/curación y no estoy seguro de si se completó en L2, ¿cómo puedo confirmar que se transfirió correctamente? @@ -36,43 +36,43 @@ Si tienes el hash de la transacción en L1 (que puedes encontrar revisando las t ## Transferencia de Subgrafo -### ¿Cómo transfiero mi subgrafo? +### How do I transfer my Subgraph? -Para transferir tu subgrafo, tendrás que completar los siguientes pasos: +To transfer your Subgraph, you will need to complete the following steps: 1. Inicia la transferencia en Ethereum mainnet 2.
Espera 20 minutos para la confirmación -3. Confirma la transferencia del subgrafo en Arbitrum +3. Confirm Subgraph transfer on Arbitrum\* -4. Termina de publicar el subgrafo en Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Actualiza la URL de consulta (recomendado) -\*Ten en cuenta que debes confirmar la transferencia dentro de los 7 días, de lo contrario, es posible que se pierda tu subgrafo. En la mayoría de los casos, este paso se ejecutará automáticamente, pero puede ser necesaria una confirmación manual si hay un aumento en el precio del gas en Arbitrum. Si surgen problemas durante este proceso, habrá recursos disponibles para ayudarte: ponte en contacto con el soporte en support@thegraph.com o en [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### ¿Desde dónde debo iniciar mi transferencia? -Puedes iniciar la transferencia desde el [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer) o desde cualquier página de detalles del subgrafo. Haz clic en el botón "Transferir Subgrafo" en la página de detalles del subgrafo para iniciar la transferencia. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### ¿Cuánto tiempo tengo que esperar hasta que se transfiera mi subgrafo? +### How long do I need to wait until my Subgraph is transferred?
El puente de Arbitrum está trabajando en segundo plano para completar la transferencia automáticamente. En algunos casos, los costos de gas pueden aumentar y necesitarás confirmar la transacción nuevamente. -### ¿Mi subgrafo seguirá siendo accesible después de transferirlo a L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Tu subgrafo solo será accesible en la red donde esté publicado. Por ejemplo, si tu subgrafo está en Arbitrum One, solo podrás encontrarlo en el explorador de Arbitrum One y no podrás encontrarlo en Ethereum. Asegúrate de tener seleccionado Arbitrum One en el selector de redes en la parte superior de la página para asegurarte de estar en la red correcta. Después de la transferencia, el subgrafo en L1 aparecerá como obsoleto. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated. -### ¿Es necesario publicar mi subgrafo para transferirlo? +### Does my Subgraph need to be published to transfer it? -Para aprovechar la herramienta de transferencia de subgrafos, tu subgrafo debe estar ya publicado en la red principal de Ethereum y debe tener alguna señal de curación propiedad de la wallet que posee el subgrafo. Si tu subgrafo no está publicado, se recomienda que lo publiques directamente en Arbitrum One, ya que las tarifas de gas asociadas serán considerablemente más bajas. Si deseas transferir un subgrafo ya publicado pero la cuenta del propietario no ha curado ninguna señal en él, puedes señalizar una pequeña cantidad (por ejemplo, 1 GRT) desde esa cuenta; asegúrate de elegir la opción de señal "auto-migración".
+To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### ¿Qué ocurre con la versión de Ethereum mainnet de mi subgrafo después de transferirlo a Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Tras transferir tu subgrafo a Arbitrum, la versión de Ethereum mainnet quedará obsoleta. Te recomendamos que actualices tu URL de consulta en un plazo de 48 horas. Sin embargo, existe un periodo de gracia que mantiene tu URL de mainnet en funcionamiento para que se pueda actualizar cualquier soporte de dapp de terceros. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Después de la transferencia, ¿también tengo que volver a publicar en Arbitrum? @@ -80,21 +80,21 @@ Una vez transcurridos los 20 minutos de la ventana de transferencia, tendrás qu ### ¿Experimentará mi endpoint una interrupción durante la republicación? -Es poco probable, pero es posible experimentar una breve interrupción dependiendo de qué Indexadores estén respaldando el subgrafo en L1 y si continúan indexándolo hasta que el subgrafo esté completamente respaldado en L2. 
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### ¿Es lo mismo publicar y versionar en L2 que en Ethereum mainnet? -Sí. Asegúrate de seleccionar Arbitrum One como tu red para publicar cuando publiques en Subgraph Studio. En el Studio, estará disponible el último endpoint que apunta a la última versión actualizada del subgrafo. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### ¿Se moverá la curación de mi subgrafo junto con mi subgrafo? +### Will my Subgraph's curation move with my Subgraph? -Si has elegido auto-migrar la señal, el 100% de tu curación propia se moverá con tu subgrafo a Arbitrum One. Toda la señal de curación del subgrafo se convertirá a GRT en el momento de la transferencia, y el GRT correspondiente a tu señal de curación se utilizará para mintear señal en el subgrafo L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Otros Curadores pueden elegir si retiran su fracción de GRT, o también la transfieren a L2 para mintear señal en el mismo subgrafo. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### ¿Puedo mover mi subgrafo de nuevo a Ethereum mainnet después de la transferencia? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Una vez transferida, la versión en Ethereum mainnet de este subgrafo quedará obsoleta. 
Si deseas regresar a mainnet, deberás volver a deployar y publicar en mainnet. Sin embargo, se desaconseja firmemente volver a transferir a Ethereum mainnet, ya que las recompensas por indexación se distribuirán eventualmente por completo en Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### ¿Por qué necesito ETH bridgeado para completar mi transferencia? @@ -206,19 +206,19 @@ Para transferir tu curación, deberás completar los siguientes pasos: \*Si es necesario - i.e. si estás utilizando una dirección de contrato. -### ¿Cómo sabré si el subgrafo que he curado ha pasado a L2? +### How will I know if the Subgraph I curated has moved to L2? -Al ver la página de detalles del subgrafo, un banner te notificará que este subgrafo ha sido transferido. Puedes seguir la indicación para transferir tu curación. También puedes encontrar esta información en la página de detalles del subgrafo de cualquier subgrafo que se haya trasladado. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### ¿Qué ocurre si no deseo trasladar mi curación a L2? -Cuando un subgrafo queda obsoleto, tienes la opción de retirar tu señal. De manera similar, si un subgrafo se ha trasladado a L2, puedes elegir retirar tu señal en Ethereum mainnet o enviar la señal a L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. 
### ¿Cómo sé si mi curación se ha transferido correctamente? Los detalles de la señal serán accesibles a través del Explorer aproximadamente 20 minutos después de iniciar la herramienta de transferencia a L2. -### ¿Puedo transferir mi curación en más de un subgrafo a la vez? +### Can I transfer my curation on more than one Subgraph at a time? En este momento no existe la opción de transferencia masiva. @@ -266,7 +266,7 @@ La herramienta de transferencia L2 tardará aproximadamente 20 minutos en comple ### ¿Tengo que indexar en Arbitrum antes de transferir mi stake? -En efecto, puedes transferir tu stake primero antes de configurar la indexación de manera efectiva, pero no podrás reclamar ninguna recompensa en L2 hasta que asignes a subgrafos en L2, los indexes y presentes POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### ¿Pueden los Delegadores trasladar su delegación antes de que yo traslade mi stake de Indexador? diff --git a/website/src/pages/es/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/es/archived/arbitrum/l2-transfer-tools-guide.mdx index 4ec61fdc3a7c..3d0d90acb9a9 100644 --- a/website/src/pages/es/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/es/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph ha facilitado la migración a L2 en Arbitrum One. Para cada participan Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
-## Cómo transferir tu subgrafo a Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Beneficios de transferir tus subgrafos +## Benefits of transferring your Subgraphs La comunidad de The Graph y los core devs se han [estado preparando] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) para migrar a Arbitrum durante el último año. Arbitrum, una blockchain de capa 2 o "L2", hereda la seguridad de Ethereum pero ofrece tarifas de gas considerablemente más bajas. -Cuando publicas o actualizas tus subgrafos en The Graph Network, estás interactuando con contratos inteligentes en el protocolo, lo cual requiere pagar por gas utilizando ETH. Al mover tus subgrafos a Arbitrum, cualquier actualización futura de tu subgrafo requerirá tarifas de gas mucho más bajas. Las tarifas más bajas, y el hecho de que las bonding curves de curación en L2 son planas, también facilitan que otros Curadores realicen curación en tu subgrafo, aumentando las recompensas para los Indexadores en tu subgrafo. Este contexto con tarifas más económicas también hace que sea más barato para los Indexadores indexar y servir tu subgrafo. Las recompensas por indexación aumentarán en Arbitrum y disminuirán en Ethereum mainnet en los próximos meses, por lo que cada vez más Indexadores transferirán su stake y establecerán sus operaciones en L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. 
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Comprensión de lo que sucede con la señal, tu subgrafo de L1 y las URL de consulta +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferir un subgrafo a Arbitrum utiliza el puente de GRT de Arbitrum, que a su vez utiliza el puente nativo de Arbitrum para enviar el subgrafo a L2. La "transferencia" deprecará el subgrafo en mainnet y enviará la información para recrear el subgrafo en L2 utilizando el puente. También incluirá el GRT señalizado del propietario del subgrafo, el cual debe ser mayor que cero para que el puente acepte la transferencia. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Cuando eliges transferir el subgrafo, esto convertirá toda la señal de curación del subgrafo a GRT. Esto equivale a "deprecar" el subgrafo en mainnet. El GRT correspondiente a tu curación se enviará a L2 junto con el subgrafo, donde se utilizarán para emitir señal en tu nombre. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Otros Curadores pueden elegir si retirar su fracción de GRT o también transferirlo a L2 para emitir señal en el mismo subgrafo. 
Si un propietario de subgrafo no transfiere su subgrafo a L2 y lo depreca manualmente a través de una llamada de contrato, entonces los Curadores serán notificados y podrán retirar su curación. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Tan pronto como se transfiera el subgrafo, dado que toda la curación se convierte en GRT, los Indexadores ya no recibirán recompensas por indexar el subgrafo. Sin embargo, habrá Indexadores que 1) continuarán sirviendo los subgrafos transferidos durante 24 horas y 2) comenzarán inmediatamente a indexar el subgrafo en L2. Dado que estos Indexadores ya tienen el subgrafo indexado, no será necesario esperar a que se sincronice el subgrafo y será posible realizar consultas al subgrafo en L2 casi de inmediato. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Las consultas al subgrafo en L2 deberán realizarse a una URL diferente (en `arbitrum-gateway.thegraph.com`), pero la URL de L1 seguirá funcionando durante al menos 48 horas. Después de eso, la gateway de L1 redirigirá las consultas a la gateway de L2 (durante algún tiempo), pero esto agregará latencia, por lo que se recomienda cambiar todas las consultas a la nueva URL lo antes posible. 
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Elección de tu wallet en L2 -Cuando publicaste tu subgrafo en mainnet, utilizaste una wallet conectada para crear el subgrafo, y esta wallet es la propietaria del NFT que representa este subgrafo y te permite publicar actualizaciones. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Al transferir el subgrafo a Arbitrum, puedes elegir una wallet diferente que será la propietaria del NFT de este subgrafo en L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Si estás utilizando una wallet "convencional" como MetaMask (una Cuenta de Propiedad Externa o EOA, es decir, una wallet que no es un contrato inteligente), esto es opcional y se recomienda para mantener la misma dirección del propietario que en L1. -Si estás utilizando una wallet de tipo smart contract, como una multisig (por ejemplo, una Safe), entonces elegir una dirección de wallet L2 diferente es obligatorio, ya que es muy probable que esta cuenta solo exista en mainnet y no podrás realizar transacciones en Arbitrum utilizando esta wallet. Si deseas seguir utilizando una wallet de tipo smart contract o multisig, crea una nueva wallet en Arbitrum y utiliza su dirección como propietario L2 de tu subgrafo. +If you're using a smart contract wallet, like a multisig (e.g. 
a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Es muy importante utilizar una dirección de wallet que controles y que pueda realizar transacciones en Arbitrum. De lo contrario, el subgrafo se perderá y no podrá ser recuperado.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparándose para la transferencia: bridgeando algo de ETH -Transferir el subgrafo implica enviar una transacción a través del puente y luego ejecutar otra transacción en Arbitrum. La primera transacción utiliza ETH en la red principal e incluye cierta cantidad de ETH para pagar el gas cuando se recibe el mensaje en L2. Sin embargo, si este gas es insuficiente, deberás volver a intentar la transacción y pagar el gas directamente en L2 (esto es "Paso 3: Confirmando la transferencia" que se describe a continuación). Este paso **debe ejecutarse dentro de los 7 días desde el inicio de la transferencia**. Además, la segunda transacción ("Paso 4: Finalizando la transferencia en L2") se realizará directamente en Arbitrum. Por estas razones, necesitarás tener algo de ETH en una billetera de Arbitrum. Si estás utilizando una cuenta de firma múltiple o un contrato inteligente, el ETH debe estar en la billetera regular (EOA) que estás utilizando para ejecutar las transacciones, no en la billetera de firma múltiple en sí misma. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. 
However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. Puedes comprar ETH en algunos exchanges y retirarlo directamente a Arbitrum, o puedes utilizar el puente de Arbitrum para enviar ETH desde una billetera en la red principal a L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Dado que las tarifas de gas en Arbitrum son más bajas, solo necesitarás una pequeña cantidad. Se recomienda que comiences con un umbral bajo (por ejemplo, 0.01 ETH) para que tu transacción sea aprobada. 
-## Encontrando la herramienta de transferencia del subgrafo +## Finding the Subgraph Transfer Tool -Puedes encontrar la herramienta de transferencia a L2 cuando estás viendo la página de tu subgrafo en Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -También está disponible en Explorer si estás conectado con la wallet que es propietaria de un subgrafo y en la página de ese subgrafo en Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Al hacer clic en el botón "Transferir a L2" se abrirá la herramienta de transf ## Paso 1: Iniciar la transferencia -Antes de iniciar la transferencia, debes decidir qué dirección será la propietaria del subgrafo en L2 (ver "Elección de tu wallet en L2" anteriormente), y se recomienda encarecidamente tener ETH para gas ya transferido a Arbitrum (ver "Preparando para la transferencia: transferir ETH" anteriormente). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -También ten en cuenta que la transferencia del subgrafo requiere tener una cantidad distinta de cero de señal en el subgrafo con la misma cuenta que es propietaria del subgrafo; si no has emitido señal en el subgrafo, deberás agregar un poco de curación (añadir una pequeña cantidad como 1 GRT sería suficiente).
-Después de abrir la herramienta de transferencia, podrás ingresar la dirección de la wallet L2 en el campo "Dirección de la wallet receptora" - asegúrate de ingresar la dirección correcta aquí. Al hacer clic en "Transferir Subgrafo", se te pedirá que ejecutes la transacción en tu wallet (ten en cuenta que se incluye un valor de ETH para pagar el gas de L2); esto iniciará la transferencia y deprecará tu subgrafo de L1 (consulta "Comprensión de lo que sucede con la señal, tu subgrafo de L1 y las URL de consulta" anteriormente para obtener más detalles sobre lo que ocurre detrás de escena). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -Si ejecutas este paso, **asegúrate de completar el paso 3 en menos de 7 días, o el subgrafo y tu GRT de señal se perderán**. Esto se debe a cómo funciona la mensajería de L1 a L2 en Arbitrum: los mensajes que se envían a través del puente son "tickets reintentables" que deben ejecutarse dentro de los 7 días, y la ejecución inicial puede requerir un reintento si hay picos en el precio del gas en Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.
![Start the transfer to L2](/img/startTransferL2.png) -## Paso 2: Esperarando a que el subgrafo llegue a L2 +## Step 2: Waiting for the Subgraph to get to L2 -Después de iniciar la transferencia, el mensaje que envía tu subgrafo de L1 a L2 debe propagarse a través del puente de Arbitrum. Esto tarda aproximadamente 20 minutos (el puente espera a que el bloque de mainnet que contiene la transacción sea "seguro" para evitar posibles reorganizaciones de la cadena). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Una vez que finalice este tiempo de espera, Arbitrum intentará ejecutar automáticamente la transferencia en los contratos de L2. @@ -80,7 +80,7 @@ Una vez que finalice este tiempo de espera, Arbitrum intentará ejecutar automá ## Paso 3: Confirmando la transferencia -En la mayoría de los casos, este paso se ejecutará automáticamente, ya que el gas de L2 incluido en el paso 1 debería ser suficiente para ejecutar la transacción que recibe el subgrafo en los contratos de Arbitrum. Sin embargo, en algunos casos, es posible que un aumento en el precio del gas en Arbitrum cause que esta autoejecución falle. En este caso, el "ticket" que envía tu subgrafo a L2 quedará pendiente y requerirá un reintento dentro de los 7 días. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. 
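The 7-day retryable-ticket window described above can be sketched as simple deadline arithmetic. This is an illustrative reminder helper only, not part of the transfer tools; the timestamps are hypothetical inputs you would supply yourself, and the actual window is enforced by the Arbitrum bridge, not by client-side code.

```javascript
// Illustrative sketch: the Arbitrum bridge keeps a retryable ticket alive
// for up to 7 days after the transfer is initiated on L1.
const RETRYABLE_TICKET_WINDOW_MS = 7 * 24 * 60 * 60 * 1000;

function confirmDeadline(transferStartMs) {
  // Last moment at which the "Confirm" step can still be retried.
  return transferStartMs + RETRYABLE_TICKET_WINDOW_MS;
}

function canStillRetry(transferStartMs, nowMs) {
  // After the deadline the bridge discards the ticket and the assets are lost.
  return nowMs < confirmDeadline(transferStartMs);
}

// Example: a transfer started 3 days ago is still within the window.
const threeDaysMs = 3 * 24 * 60 * 60 * 1000;
console.log(canStillRetry(Date.now() - threeDaysMs, Date.now())); // true
```

In practice you should simply check the transfer status in Studio or Explorer well before the window closes, rather than rely on local bookkeeping.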
Si este es el caso, deberás conectarte utilizando una wallet de L2 que tenga algo de ETH en Arbitrum, cambiar la red de tu wallet a Arbitrum y hacer clic en "Confirmar Transferencia" para volver a intentar la transacción. @@ -88,33 +88,33 @@ Si este es el caso, deberás conectarte utilizando una wallet de L2 que tenga al ## Paso 4: Finalizando la transferencia en L2 -En este punto, tu subgrafo y GRT se han recibido en Arbitrum, pero el subgrafo aún no se ha publicado. Deberás conectarte utilizando la wallet de L2 que elegiste como la wallet receptora, cambiar la red de tu wallet a Arbitrum y hacer clic en "Publicar Subgrafo". +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publicar el subgrafo](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Espera a que el subgrafo este publicado](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Esto publicará el subgrafo para que los Indexadores que estén operando en Arbitrum puedan comenzar a servirlo. También se emitirá señal de curación utilizando los GRT que se transfirieron desde L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Paso 5: Actualizando la URL de consulta -¡Tu subgrafo se ha transferido correctamente a Arbitrum! Para realizar consultas al subgrafo, la nueva URL será: +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Ten en cuenta que el ID del subgrafo en Arbitrum será diferente al que tenías en mainnet, pero siempre podrás encontrarlo en Explorer o Studio. Como se mencionó anteriormente (ver "Comprensión de lo que sucede con la señal, tu subgrafo de L1 y las URL de consulta"), la antigua URL de L1 será compatible durante un corto período de tiempo, pero debes cambiar tus consultas a la nueva dirección tan pronto como el subgrafo se haya sincronizado en L2. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## Cómo transferir tu curación a Arbitrum (L2) -## Comprensión de lo que sucede con la curación al transferir subgrafos a L2 +## Understanding what happens to curation on Subgraph transfers to L2 -Cuando el propietario de un subgrafo transfiere un subgrafo a Arbitrum, toda la señal del subgrafo se convierte en GRT al mismo tiempo. Esto se aplica a la señal "migrada automáticamente", es decir, la señal que no está vinculada a una versión o deploy específico del subgrafo, sino que sigue la última versión del subgrafo. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Esta conversión de señal a GRT es similar a lo que sucedería si el propietario del subgrafo deprecara el subgrafo en L1.
Cuando el subgrafo se depreca o se transfiere, toda la señal de curación se "quema" simultáneamente (utilizando la bonding curve de curación) y el GRT resultante se mantiene en el contrato inteligente de GNS (que es el contrato que maneja las actualizaciones de subgrafos y la señal auto-migrada). Cada Curador en ese subgrafo, por lo tanto, tiene un reclamo sobre ese GRT proporcional a la cantidad de participaciones que tenían para el subgrafo. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph on L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (which is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Una fracción de estos GRT correspondientes al propietario del subgrafo se envía a L2 junto con el subgrafo. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -En este punto, el GRT curado ya no acumulará más tarifas de consulta, por lo que los Curadores pueden optar por retirar su GRT o transferirlo al mismo subgrafo en L2, donde se puede utilizar para generar nueva señal de curación. No hay prisa para hacerlo, ya que el GRT se puede mantener indefinidamente y todos reciben una cantidad proporcional a sus participaciones, independientemente de cuándo lo hagan. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this, as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
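The pro-rata claim described in the paragraph above can be sketched numerically. This is an illustration only, with made-up share balances; the real accounting is performed on-chain by the GNS contract:

```python
# Illustrative sketch of the proportional claim described above: when all
# curation signal is burned, each Curator's claim on the resulting GRT is
# proportional to the shares they held for the Subgraph. All names and
# numbers are hypothetical; the GNS smart contract does this on-chain.
def curator_claims(grt_from_burn: float, shares: dict[str, float]) -> dict[str, float]:
    """Split the GRT from the signal burn pro rata across Curators' shares."""
    total_shares = sum(shares.values())
    return {curator: grt_from_burn * s / total_shares for curator, s in shares.items()}

claims = curator_claims(1000.0, {"alice": 60.0, "bob": 40.0})
# Each Curator's claim stays proportional to their shares, irrespective of
# when they withdraw or transfer.
```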
## Elección de tu wallet en L2 @@ -130,9 +130,9 @@ Si estás utilizando una billetera de contrato inteligente, como una multisig (p Antes de comenzar la transferencia, debes decidir qué dirección será la propietaria de la curación en L2 (ver "Elegir tu wallet en L2" arriba), y se recomienda tener algo de ETH para el gas ya bridgeado en Arbitrum en caso de que necesites volver a intentar la ejecución del mensaje en L2. Puedes comprar ETH en algunos exchanges y retirarlo directamente a Arbitrum, o puedes utilizar el puente de Arbitrum para enviar ETH desde una wallet en la red principal a L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - dado que las tarifas de gas en Arbitrum son muy bajas, es probable que solo necesites una pequeña cantidad, por ejemplo, 0.01 ETH será más que suficiente. -Si un subgrafo al que has curado ha sido transferido a L2, verás un mensaje en Explorer que te indicará que estás curando hacia un subgrafo transferido. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -Cuando estás en la página del subgrafo, puedes elegir retirar o transferir la curación. Al hacer clic en "Transferir Señal a Arbitrum" se abrirá la herramienta de transferencia. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transferir señal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Si este es el caso, deberás conectarte utilizando una wallet de L2 que tenga al ## Retirando tu curacion en L1 -Si prefieres no enviar tu GRT a L2, o prefieres bridgear GRT de forma manual, puedes retirar tu GRT curado en L1. En el banner en la página del subgrafo, elige "Retirar Señal" y confirma la transacción; el GRT se enviará a tu dirección de Curador. 
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/es/archived/sunrise.mdx b/website/src/pages/es/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/es/archived/sunrise.mdx +++ b/website/src/pages/es/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. 
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. 
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
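The two stop conditions stated in this FAQ (at least three other Indexers consistently serving queries, or no queries in the last 30 days) can be sketched as a single predicate. Function and parameter names are illustrative, not taken from any codebase:

```python
# Sketch of the upgrade Indexer's support policy as described in the FAQ:
# it serves a Subgraph only as a fallback, until at least 3 other Indexers
# serve it consistently, and it drops Subgraphs unqueried for 30 days.
# Names and thresholds are restated from the FAQ text, not from real code.
def upgrade_indexer_supports(other_healthy_indexers: int, days_since_last_query: int) -> bool:
    if other_healthy_indexers >= 3:
        return False  # enough independent Indexers serve it consistently
    if days_since_last_query > 30:
        return False  # not queried in the last 30 days
    return True  # fallback support continues

print(upgrade_indexer_supports(1, 5))   # True: still needed as a fallback
print(upgrade_indexer_supports(3, 5))   # False: handed off to other Indexers
```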
diff --git a/website/src/pages/es/contracts.json b/website/src/pages/es/contracts.json index 35d93318521e..6de137f39dc3 100644 --- a/website/src/pages/es/contracts.json +++ b/website/src/pages/es/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "Contrato", "address": "Dirección" } diff --git a/website/src/pages/es/global.json b/website/src/pages/es/global.json index b9c8db5fa5fa..a35f826df076 100644 --- a/website/src/pages/es/global.json +++ b/website/src/pages/es/global.json @@ -1,35 +1,78 @@ { "navigation": { - "title": "Main navigation", - "show": "Show navigation", - "hide": "Hide navigation", + "title": "Navegación principal", + "show": "Mostrar navegación", + "hide": "Ocultar navegación", "subgraphs": "Subgrafos", "substreams": "Corrientes secundarias", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "sps": "Subgrafos impulsados por Substreams", + "tokenApi": "Token API", + "indexing": "Indexación", + "resources": "Recursos", + "archived": "Archivado" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "Última actualización", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "Tiempo de lectura", + "minutes": "minutos" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "Página anterior", + "next": "Página siguiente", + "edit": "Editar en GitHub", + "onThisPage": "En esta página", + "tableOfContents": "Tabla de contenidos", + "linkToThisSection": "Enlace a esta sección" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + 
"headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Descripción", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Estado", + "description": "Descripción", + "liveResponse": "Live Response", + "example": "Ejemplo" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "¡Ups! 
Esta página se ha perdido en el espacio...", + "subtitle": "Verifica que estés usando la dirección correcta o visita nuestro sitio web haciendo clic en el enlace de abajo.", + "back": "Ir a la página principal" } } diff --git a/website/src/pages/es/index.json b/website/src/pages/es/index.json index d95f65ce5452..2c1eeb105f26 100644 --- a/website/src/pages/es/index.json +++ b/website/src/pages/es/index.json @@ -1,50 +1,50 @@ { - "title": "Home", + "title": "Inicio", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "Documentación de The Graph", + "description": "Inicia tu proyecto web3 con las herramientas para extraer, transformar y cargar datos de blockchain.", + "cta1": "Cómo funciona The Graph", + "cta2": "Crea tu primer subgrafo" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "Elige una solución que se ajuste a tus necesidades: interactúa con los datos de blockchain a tu manera.", "subgraphs": { "title": "Subgrafos", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "Extrae, procesa y consulta datos de blockchain con APIs abiertas.", + "cta": "Desarrollar un subgrafo" }, "substreams": { "title": "Corrientes secundarias", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "Obtén y consume datos de blockchain con ejecución paralela.", + "cta": "Desarrolla con Substreams" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": 
"Subgrafos impulsados por Substreams", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", + "cta": "Configura un subgrafo impulsado por Substreams" }, "graphNode": { "title": "Graph Node", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "description": "Indexa datos de blockchain y sírvelos a través de consultas GraphQL.", + "cta": "Configura un nodo local de Graph" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "Extrae datos de blockchain en archivos planos para mejorar los tiempos de sincronización y las capacidades de transmisión.", + "cta": "Comienza con Firehose" } }, "supportedNetworks": { "title": "Redes Admitidas", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "Tipo", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Documentación", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -54,9 +54,9 @@ "infoText": "Boost your developer experience by enabling The Graph's indexing network.", "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", + "base": "The Graph soporta {0}. 
Para agregar una nueva red, {1}", "networks": "networks", - "completeThisForm": "complete this form" + "completeThisForm": "completa este formulario" }, "emptySearch": { "title": "No networks found", @@ -65,10 +65,10 @@ "showTestnets": "Show testnets" }, "tableHeaders": { - "name": "Name", + "name": "Nombre", "id": "ID", - "subgraphs": "Subgraphs", - "substreams": "Substreams", + "subgraphs": "Subgrafos", + "substreams": "Corrientes secundarias", "firehose": "Firehose", "tokenapi": "Token API" } @@ -80,7 +80,7 @@ "description": "Kickstart your journey into subgraph development." }, "substreams": { - "title": "Substreams", + "title": "Corrientes secundarias", "description": "Stream high-speed data for real-time indexing." }, "timeseries": { @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "Facturación", "description": "Optimize costs and manage billing efficiently." } }, @@ -120,56 +120,56 @@ } }, "guides": { - "title": "Guides", + "title": "Guías", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "Buscar datos en Graph Explorer", + "description": "Aprovecha cientos de subgrafos públicos para datos existentes de blockchain." }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Publicar un Subgrafo", + "description": "Agrega tu subgrafo a la red descentralizada." }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "Publicar Substreams", + "description": "Lanza tu paquete de Substreams al Registro de Substreams." 
}, "queryingBestPractices": { "title": "Mejores Prácticas para Consultas", - "description": "Optimize your subgraph queries for faster, better results." + "description": "Optimiza tus consultas de subgrafo para obtener resultados más rápidos y mejores." }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "Series temporales optimizadas y agregaciones", + "description": "Optimiza tu subgrafo para mejorar la eficiencia." }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "Gestión de claves API", + "description": "Crea, gestiona y asegura fácilmente las claves API para tus subgrafos." }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "Transferir a The Graph", + "description": "Mejora tu subgrafo sin problemas desde cualquier plataforma." } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "Tutoriales en video", + "watchOnYouTube": "Ver en YouTube", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "The Graph explicado en 1 minuto", + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." 
+ "title": "¿Qué es la delegación?", + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Cómo indexar Solana con un subgrafo impulsado por Substreams", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { - "reading": "Reading time", - "duration": "Duration", + "reading": "Tiempo de lectura", + "duration": "Duración", "minutes": "min" } } diff --git a/website/src/pages/es/indexing/_meta-titles.json b/website/src/pages/es/indexing/_meta-titles.json index 42f4de188fd4..ee110b7adfe8 100644 --- a/website/src/pages/es/indexing/_meta-titles.json +++ b/website/src/pages/es/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Herramientas para Indexadores" } diff --git a/website/src/pages/es/indexing/chain-integration-overview.mdx b/website/src/pages/es/indexing/chain-integration-overview.mdx index 77141e82b34a..dfcb2a2442d7 100644 --- a/website/src/pages/es/indexing/chain-integration-overview.mdx +++ b/website/src/pages/es/indexing/chain-integration-overview.mdx @@ -1,5 +1,5 @@ --- -title: Chain Integration Process Overview +title: Descripción general del proceso de integración de cadena --- A transparent and governance-based integration process was designed for blockchain teams seeking [integration with The Graph protocol](https://forum.thegraph.com/t/gip-0057-chain-integration-process/4468). It is a 3-phase process, as summarised below. @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. 
What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/es/indexing/new-chain-integration.mdx b/website/src/pages/es/indexing/new-chain-integration.mdx index 04aa90b6e5ae..5e56afca4d75 100644 --- a/website/src/pages/es/indexing/new-chain-integration.mdx +++ b/website/src/pages/es/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. 
If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. 
-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_calls` are not a good practice for developers.) ## Configuración del Graph Node -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/).
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable developers to build [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/es/indexing/overview.mdx b/website/src/pages/es/indexing/overview.mdx index 43b74287044a..cf592d9ad7e4 100644 --- a/website/src/pages/es/indexing/overview.mdx +++ b/website/src/pages/es/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i Los GRT que se depositan en stake en el protocolo está sujeto a un periodo de desbloqueo y puede incurrir en slashing (ser reducidos) si los Indexadores son maliciosos y sirven datos incorrectos a las aplicaciones o si indexan incorrectamente. Los Indexadores también obtienen recompensas por stake delegados de los Delegadores, para contribuir a la red. -Los Indexadores seleccionan subgrafos para indexar basados en la señal de curación del subgrafo, donde los Curadores realizan stake de sus GRT para indicar qué subgrafos son de mejor calidad y deben tener prioridad para ser indexados. Los consumidores (por ejemplo, aplicaciones, clientes) también pueden establecer parámetros para los cuales los Indexadores procesan consultas para sus subgrafos y establecen preferencias para el precio asignado a cada consulta.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph.
**An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. 
Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. 
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. 
| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexer's interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
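As a rough illustration of how these components fit together for local testing, a minimal docker-compose sketch might look like the following. The image names, credentials, port mappings, and RPC URL here are assumptions for illustration only, not the official deployment manifests referenced elsewhere in this guide:

```yaml
services:
  graph-node:
    image: graphprotocol/graph-node   # assumed image name
    ports:
      - "8000:8000"   # GraphQL HTTP (Subgraph queries)
      - "8020:8020"   # JSON-RPC (managing deployments)
      - "8030:8030"   # Subgraph indexing status API
      - "8040:8040"   # Prometheus metrics
    environment:
      postgres_host: postgres
      postgres_user: graph-node
      postgres_pass: let-me-in        # placeholder credential
      postgres_db: graph-node
      ipfs: "ipfs:5001"
      ethereum: "mainnet:http://chain-rpc:8545"   # assumed archive RPC endpoint
  ipfs:
    image: ipfs/kubo
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: graph-node
      POSTGRES_PASSWORD: let-me-in
      POSTGRES_DB: graph-node
```

The Indexer service, agent, and metrics tooling would be layered on top of this core; for production, the k8s/terraform manifests mentioned above are the reference setup.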
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. 
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
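The threshold comparison described above can be sketched in a few lines. This is an illustrative Python model of the decision only (the field names mirror the rule fields named in the text; it is not the indexer-agent's actual implementation):

```python
# Illustrative sketch: evaluate a threshold-based indexing rule
# (decisionBasis "rules") against values fetched from the network.

def should_index(rule: dict, deployment: dict) -> bool:
    """Pick a deployment if any non-null threshold on the rule is met."""
    checks = [
        ("minStake", lambda r, d: d["stake"] >= r["minStake"]),
        ("minSignal", lambda r, d: d["signal"] >= r["minSignal"]),
        ("maxSignal", lambda r, d: d["signal"] <= r["maxSignal"]),
        ("minAverageQueryFees", lambda r, d: d["avgQueryFees"] >= r["minAverageQueryFees"]),
    ]
    return any(rule.get(name) is not None and check(rule, deployment)
               for name, check in checks)

# Global rule from the example: minStake of 5 GRT
global_rule = {"minStake": 5}
print(should_index(global_rule, {"stake": 12, "signal": 0, "avgQueryFees": 0}))  # True
print(should_index(global_rule, {"stake": 3, "signal": 0, "avgQueryFees": 0}))   # False
```

A deployment with 12 GRT allocated clears the 5 GRT threshold and is chosen; one with 3 GRT is not.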
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks. 
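The cut values in step 8 are plain parts-per-million integers. As a quick sketch of the arithmetic (the `to_ppm` helper is hypothetical, not part of the contracts or indexer tooling):

```python
PPM = 1_000_000  # delegation parameters are expressed in parts per million

def to_ppm(fraction: float) -> int:
    """Convert a fractional cut (e.g. 0.95 for a 95% cut) to parts per million."""
    if not 0.0 <= fraction <= 1.0:
        raise ValueError("cut must be between 0 and 1")
    return round(fraction * PPM)

# 95% of query rebates and 60% of indexing rewards kept by the Indexer
query_fee_cut = to_ppm(0.95)        # 950000
indexing_reward_cut = to_ppm(0.60)  # 600000
cooldown_blocks = 500               # cooldown is a block count, not a ppm value
```

These are the values passed to `setDelegationParameters(950000, 600000, 500)` in the example call.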
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing non-deterministically. diff --git a/website/src/pages/es/indexing/supported-network-requirements.mdx b/website/src/pages/es/indexing/supported-network-requirements.mdx index dfebec344880..d34e8330f5da 100644 --- a/website/src/pages/es/indexing/supported-network-requirements.mdx +++ b/website/src/pages/es/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)<br/>
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVMe preferred)<br/>
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/es/indexing/tap.mdx b/website/src/pages/es/indexing/tap.mdx index 36fb0939af81..024347d695c4 100644 --- a/website/src/pages/es/indexing/tap.mdx +++ b/website/src/pages/es/indexing/tap.mdx @@ -1,140 +1,21 @@ --- -title: |+ - Guía de Migración TAP - Aprende sobre el nuevo sistema de pagos de The Graph, el Protocolo de Agregación de Línea de Tiempo (TAP). Este sistema ofrece microtransacciones rápidas y eficientes con una confianza minimizada. - - Descripción General - TAP es un reemplazo directo del sistema de pagos Scalar actualmente en uso. Ofrece las siguientes características clave: - - Manejo eficiente de micropagos. - Agrega una capa de consolidación a las transacciones y costos en la cadena. - Permite a los Indexadores controlar los recibos y pagos, garantizando el pago por consultas. - Facilita puertas de enlace descentralizadas y sin confianza, mejorando el indexer-service para múltiples remitentes. - - Especificaciones - TAP permite que un remitente realice múltiples pagos a un receptor a través de TAP Receipts, los cuales agrupan estos pagos en un único pago denominado Receipt Aggregate Voucher (RAV). Este pago consolidado puede verificarse en la blockchain, reduciendo la cantidad de transacciones y simplificando el proceso de pago. - - Para cada consulta, la puerta de enlace te enviará un recibo firmado (signed receipt) que se almacenará en tu base de datos. Luego, estas consultas serán agrupadas por un tap-agent mediante una solicitud. Posteriormente, recibirás un RAV. Puedes actualizar un RAV enviándolo con recibos más recientes, lo que generará un nuevo RAV con un valor incrementado. - - Detalles del RAV - Es dinero que está pendiente de ser enviado a la blockchain. - Continuará enviando solicitudes para agrupar recibos y garantizar que el valor total de los recibos no agregados no supere la cantidad dispuesta a perder. 
- Cada RAV puede ser canjeado una sola vez en los contratos, por lo que se envían después de que la asignación se haya cerrado. - - Canjeo de RAV - Mientras ejecutes tap-agent e indexer-agent, todo el proceso se ejecutará automáticamente. A continuación, se presenta un desglose detallado del proceso: - - Proceso de Canjeo de RAV - 1. Un Indexador cierra la asignación. - 2. Durante el período , tap-agent toma todos los recibos pendientes de esa asignación específica y solicita su agregación en un RAV, marcándolo como el último. - 3. Indexer-agent toma todos los últimos RAVs y envía solicitudes de canje a la blockchain, lo que actualizará el valor de redeem_at. - 4. Durante el período , indexer-agent monitorea si la blockchain experimenta alguna reorganización que revierta la transacción. - Si la transacción es revertida, el RAV se reenvía a la blockchain. Si no es revertida, se marca como final. - - Blockchain Addresses - Contracts - Contract Arbitrum Mainnet (42161) Arbitrum Sepolia (421614) - TAP Verifier 0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a 0xfC24cE7a4428A6B89B52645243662A02BA734ECF - AllocationIDTracker 0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c 0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11 - Escrow 0x8f477709eF277d4A880801D01A140a9CF88bA0d3 0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02 - Gateway - Component Edge and Node Mainnet (Arbitrum Mainnet) Edge and Node Testnet (Arbitrum Sepolia) - Sender 0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 0xC3dDf37906724732FfD748057FEBe23379b0710D - Signers 0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211 0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE - Aggregator https://tap-aggregator.network.thegraph.com https://tap-aggregator.testnet.thegraph.com - - Requisitos - Además de los requisitos habituales para ejecutar un indexador, necesitarás un endpoint tap-escrow-subgraph para consultar actualizaciones de TAP. Puedes utilizar The Graph Network para hacer consultas o alojarlo en tu propio graph-node. 
- - Subgrafo Graph TAP Arbitrum Sepolia (para la testnet de The Graph). - Subgrafo Graph TAP Arbitrum One (para la mainnet de The Graph). - - Nota: Actualmente, indexer-agent no gestiona la indexación de este subgrafo como lo hace con la implementación del subgrafo de la red. Por lo tanto, debes indexarlo manualmente. - - Guía de Migración - Versiones de Software - La versión requerida del software se puede encontrar aquí. - - Pasos - 1. Indexer Agent - Sigue el mismo proceso de configuración. - Agrega el nuevo argumento --tap-subgraph-endpoint para activar las rutas de código de TAP y habilitar el canje de RAVs de TAP. - 2. Indexer Service - Reemplaza completamente tu configuración actual con la nueva versión de Indexer Service rs. Se recomienda usar la imagen del contenedor. - Como en la versión anterior, puedes escalar Indexer Service horizontalmente con facilidad. Sigue siendo stateless. - 3. TAP Agent - Ejecuta una única instancia de TAP Agent en todo momento. Se recomienda usar la imagen del contenedor. - 4. Configura Indexer Service y TAP Agent mediante un archivo TOML compartido, suministrado con el argumento --config /path/to/config.toml. - Consulta la configuración completa y los valores predeterminados. 
- Para una configuración mínima, usa la siguiente plantilla: - - toml - Copy - Edit - [indexer] - indexer_address = "0x1111111111111111111111111111111111111111" - operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" - - [database] - postgres_url = "postgres://postgres@postgres:5432/postgres" - - [graph_node] - query_url = "http://graph-node:8000" - status_url = "http://graph-node:8000/graphql" - - [subgraphs.network] - query_url = "http://example.com/network-subgraph" - deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" - - [subgraphs.escrow] - query_url = "http://example.com/network-subgraph" - deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" - - [blockchain] - chain_id = 1337 - receipts_verifier_address = "0x2222222222222222222222222222222222222222" - - [tap] - max_amount_willing_to_lose_grt = 20 - - [tap.sender_aggregator_endpoints] - 0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" - Notas Importantes - Los valores de tap.sender_aggregator_endpoints se encuentran en la sección de gateway. - El valor de blockchain.receipts_verifier_address debe coincidir con la sección de direcciones de Blockchain según el chain ID apropiado. - Nivel de Registro (Log Level) - Puedes establecer el nivel de registro con la variable de entorno RUST_LOG. Se recomienda: - - bash - Copy - Edit - RUST_LOG=indexer_tap_agent=debug,info - Monitoreo - Métricas - Todos los componentes exponen el puerto 7300, que puede ser consultado por Prometheus. - - Grafana Dashboard - Puedes descargar el Dashboard de Grafana e importarlo. - - Launchpad - Actualmente, hay una versión en desarrollo de indexer-rs y tap-agent, que puedes encontrar aquí. - +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. 
+Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Descripción -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver as **Receipts**, which are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value.
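The receipt-to-RAV flow described above can be sketched as follows. This is a hypothetical, unsigned model for illustration only; the real receipts and RAVs are signed messages produced by the gateway, `tap-agent`, and the aggregator, and the `Receipt`, `RAV`, and `aggregate` names here are invented, not the `tap_core` API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Receipt:
    """One per-query micropayment (in reality, signed by the sender)."""
    allocation_id: str
    value_grt: int

@dataclass(frozen=True)
class RAV:
    """Receipt Aggregate Voucher: cumulative value of all aggregated receipts."""
    allocation_id: str
    value_grt: int

def aggregate(receipts: List[Receipt], previous: Optional[RAV] = None) -> RAV:
    """Fold pending receipts (plus any previous RAV) into a new RAV.

    The RAV value is monotonically increasing: a new RAV covers everything
    the previous RAV covered plus the newly aggregated receipts.
    """
    base = previous.value_grt if previous else 0
    allocation = previous.allocation_id if previous else receipts[0].allocation_id
    # All receipts in one aggregation request belong to the same allocation.
    assert all(r.allocation_id == allocation for r in receipts)
    return RAV(allocation, base + sum(r.value_grt for r in receipts))

# One aggregation request, then an update with newer receipts:
rav1 = aggregate([Receipt("alloc-1", 10), Receipt("alloc-1", 5)])
rav2 = aggregate([Receipt("alloc-1", 7)], previous=rav1)
print(rav2.value_grt)  # 22 -- the updated RAV's value only ever grows
```

Each RAV can only be redeemed onchain once, which is why the final RAV for an allocation is only sent after the allocation is closed.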
@@ -178,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. +> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -198,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). 
- - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -247,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/es/indexing/tooling/graph-node.mdx b/website/src/pages/es/indexing/tooling/graph-node.mdx index 7fadb2a27660..4563bd8444bb 100644 --- a/website/src/pages/es/indexing/tooling/graph-node.mdx +++ b/website/src/pages/es/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node es el componente que indexa los subgrafos, y hace que los datos resultantes estén disponibles para su consulta a través de una API GraphQL. Como tal, es fundamental para el stack del Indexador, y el correcto funcionamiento de Graph Node es crucial para ejecutar un Indexador con éxito. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. 
As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### Base de datos PostgreSQL -El almacén principal para Graph Node, aquí es donde se almacenan los datos de los subgrafos, así como los metadatos de los subgrafos, y los datos de una red subgrafo-agnóstica como el caché de bloques, y el caché eth_call. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Clientes de red Para indexar una red, Graph Node necesita acceso a un cliente de red a través de una API JSON-RPC compatible con EVM. Esta RPC puede conectarse a un solo cliente o puede ser una configuración más compleja que equilibre la carga entre varios clientes. 
-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). **Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### Nodos IPFS -Los metadatos de deploy del subgrafo se almacenan en la red IPFS. El Graph Node accede principalmente al nodo IPFS durante el deploy del subgrafo para obtener el manifiesto del subgrafo y todos los archivos vinculados. Los Indexadores de red no necesitan alojar su propio nodo IPFS. En https://ipfs.network.thegraph.com se aloja un nodo IPFS para la red. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. 
Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Servidor de métricas Prometheus @@ -79,8 +79,8 @@ Cuando está funcionando, Graph Node muestra los siguientes puertos: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ Cuando está funcionando, Graph Node muestra los siguientes puertos: ## Configuración avanzada de Graph Node -En su forma más simple, Graph Node puede funcionar con una única instancia de Graph Node, una única base de datos PostgreSQL, un nodo IPFS y los clientes de red que requieran los subgrafos a indexar. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Graph Nodes múltiples -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g.
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Ten en cuenta que varios Graph Nodes pueden configurarse para utilizar la misma base de datos, que a su vez puede escalarse horizontalmente mediante sharding. #### Reglas de deploy -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Ejemplo de configuración de reglas de deploy: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Cualquier nodo cuyo --node-id coincida con la expresión regular se configurará Para la mayoría de los casos de uso, una única base de datos Postgres es suficiente para soportar una instancia de graph-node. 
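The deployment-rule selection sketched below follows the semantics suggested by the example config above: rules are evaluated in order, every criterion in a rule's `match` table must hold, and a rule without a `match` table catches everything. This is an illustrative model under those assumptions, not graph-node's actual Rust implementation, and the node names are taken from the example.

```python
import re
from typing import Optional

# Rules mirroring the example `config.toml` deployment rules above.
rules = [
    {"match": {"network": ["xdai", "poa-core"]},
     "indexers": ["index_node_other_0"]},
    # No 'match', so any Subgraph matches -- the catch-all rule.
    {"shards": ["sharda", "shardb"],
     "indexers": ["index_node_community_0"]},
]

def pick_rule(name: str, network: str) -> Optional[dict]:
    """Return the first rule whose match criteria all hold for this deployment."""
    for rule in rules:
        match = rule.get("match", {})
        if "name" in match and not re.search(match["name"], name):
            continue  # name regex did not match
        if "network" in match and network not in match["network"]:
            continue  # deployment indexes a different network
        return rule  # first matching rule wins
    return None

print(pick_rule("some/subgraph", "xdai")["indexers"])     # ['index_node_other_0']
print(pick_rule("some/subgraph", "mainnet")["indexers"])  # ['index_node_community_0']
```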
Cuando una instancia de graph-node supera una única base de datos Postgres, es posible dividir el almacenamiento de los datos de graph-node en varias bases de datos Postgres. Todas las bases de datos juntas forman el almacén de la instancia graph-node. Cada base de datos individual se denomina shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and replicas can be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. El Sharding resulta útil cuando la base de datos existente no puede soportar la carga que le impone Graph Node y cuando ya no es posible aumentar el tamaño de la base de datos. -> En general, es mejor hacer una única base de datos lo más grande posible, antes de empezar con los shards. Una excepción es cuando el tráfico de consultas se divide de forma muy desigual entre los subgrafos; en esas situaciones puede ayudar dramáticamente si los subgrafos de alto volumen se mantienen en un shard y todo lo demás en otro, porque esa configuración hace que sea más probable que los datos de los subgrafos de alto volumen permanezcan en la caché interna de la base de datos y no sean reemplazados por datos que no se necesitan tanto de los subgrafos de bajo volumen. +> It is generally better to make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. En términos de configuración de las conexiones, comienza con max_connections en postgresql.conf establecido en 400 (o tal vez incluso 200) y mira las métricas de Prometheus store_connection_wait_time_ms y store_connection_checkout_count. Tiempos de espera notables (cualquier cosa por encima de 5ms) es una indicación de que hay muy pocas conexiones disponibles; altos tiempos de espera allí también serán causados por la base de datos que está muy ocupada (como alta carga de CPU). Sin embargo, si la base de datos parece estable, los tiempos de espera elevados indican la necesidad de aumentar el número de conexiones. En la configuración, el número de conexiones que puede utilizar cada instancia de Graph Node es un límite superior, y Graph Node no mantendrá conexiones abiertas si no las necesita. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Soporte de múltiples redes -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. 
The `config.toml` file allows for expressive and flexible configuration of: - Redes múltiples - Múltiples proveedores por red (esto puede permitir dividir la carga entre los proveedores, y también puede permitir la configuración de nodos completos, así como nodos de archivo, con Graph Node prefiriendo proveedores más baratos si una carga de trabajo dada lo permite). @@ -225,11 +225,11 @@ Los usuarios que están operando una configuración de indexación escalada con ### Operar Graph Node -Dado un Graph Node en funcionamiento (¡o Graph Nodes!), el reto consiste en gestionar los subgrafos deployados en esos nodos. Graph Node ofrece una serie de herramientas para ayudar a gestionar los subgrafos. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. 
See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Trabajar con subgrafos +### Working with Subgraphs #### API de estado de indexación -Disponible por defecto en el puerto 8030/graphql, la API de estado de indexación expone una serie de métodos para comprobar el estado de indexación de diferentes subgrafos, comprobar pruebas de indexación, inspeccionar características de subgrafos y mucho más. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ El proceso de indexación consta de tres partes diferenciadas: - Procesar los eventos en orden con los handlers apropiados (esto puede implicar llamar a la cadena para obtener el estado y obtener datos del store) - Escribir los datos resultantes en el store -"Estas etapas están en serie (es decir, se pueden ejecutar en paralelo), pero dependen una de la otra. Cuando los subgrafos son lentos en indexarse, la causa subyacente dependerá del subgrafo específico. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Causas habituales de la lentitud de indexación: @@ -276,24 +276,24 @@ Causas habituales de la lentitud de indexación: - El proveedor en sí mismo se está quedando rezagado con respecto a la cabeza de la cadena - Lentitud en la obtención de nuevos recibos en la cabeza de la cadena desde el proveedor -Las métricas de indexación de subgrafos pueden ayudar a diagnosticar la causa raíz de la lentitud de la indexación. 
En algunos casos, el problema reside en el propio subgrafo, pero en otros, la mejora de los proveedores de red, la reducción de la contención de la base de datos y otras mejoras de configuración pueden mejorar notablemente el rendimiento de la indexación. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Subgrafos fallidos +#### Failed Subgraphs -Durante la indexación, los subgrafos pueden fallar si encuentran datos inesperados, si algún componente no funciona como se esperaba o si hay algún error en los event handlers o en la configuración. Hay dos tipos generales de fallo: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Fallos deterministas: son fallos que no se resolverán con reintentos - Fallos no deterministas: pueden deberse a problemas con el proveedor o a algún error inesperado de Graph Node. Cuando se produce un fallo no determinista, Graph Node reintentará los handlers que han fallado, retrocediendo en el tiempo. -En algunos casos, un fallo puede ser resuelto por el Indexador (por ejemplo, si el error es resultado de no tener el tipo correcto de proveedor, añadir el proveedor necesario permitirá continuar con la indexación). Sin embargo, en otros, se requiere un cambio en el código del subgrafo. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. 
-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Caché de bloques y llamadas -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. 
In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. Si se sospecha de una inconsistencia en el caché de bloques, como un evento de falta de recepción tx: @@ -304,7 +304,7 @@ Si se sospecha de una inconsistencia en el caché de bloques, como un evento de #### Consulta de problemas y errores -Una vez que un subgrafo ha sido indexado, los Indexadores pueden esperar servir consultas a través del endpoint de consulta dedicado del subgrafo. Si el Indexador espera servir un volumen de consultas significativo, se recomienda un nodo de consulta dedicado, y en caso de volúmenes de consulta muy altos, los Indexadores pueden querer configurar shards de réplica para que las consultas no impacten en el proceso de indexación. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. Sin embargo, incluso con un nodo de consulta dedicado y réplicas, ciertas consultas pueden llevar mucho tiempo para ejecutarse y, en algunos casos, aumentar el uso de memoria y afectar negativamente el tiempo de consulta de otros usuarios. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Análisis de consultas -Las consultas problemáticas suelen surgir de dos maneras. En algunos casos, los propios usuarios informan de que una consulta determinada es lenta. En ese caso, el reto consiste en diagnosticar el motivo de la lentitud, ya sea un problema general o específico de ese subgrafo o consulta. Y, por supuesto, resolverlo, si es posible. +Problematic queries most often surface in one of two ways. 
In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. En otros casos, el desencadenante puede ser un uso elevado de memoria en un nodo de consulta, en cuyo caso el reto consiste primero en identificar la consulta causante del problema. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Eliminar subgrafos +#### Removing Subgraphs > Se trata de una nueva funcionalidad, que estará disponible en Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/es/indexing/tooling/graphcast.mdx b/website/src/pages/es/indexing/tooling/graphcast.mdx index 3da74365af91..3fef530ae421 100644 --- a/website/src/pages/es/indexing/tooling/graphcast.mdx +++ b/website/src/pages/es/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ En la actualidad, el costo de transmitir información a otros participantes de l El Graphcast SDK (Kit de Desarrollo de Software) permite a los desarrolladores construir Radios, que son aplicaciones impulsadas por gossip que los Indexadores pueden utilizar con una finalidad específica. También queremos crear algunas Radios (o dar soporte a otros desarrolladores/equipos que deseen construir Radios) para los siguientes casos de uso: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Llevar a cabo subastas y coordinar warp-syncing de datos de subgrafos, substreams y Firehose de otros Indexadores. -- Autoinforme sobre análisis de consultas activas, incluidos volúmenes de consultas de subgrafos, volúmenes de tarifas, etc. -- Generar informes propios sobre análisis del proceso de indexación, que incluyan período de indexación de subgrafos, costos de gas handler, indexación de errores encontrados, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Generar informes propios sobre información de stack que incluyan versión del graph-node, la versión de Postgres, la versión del cliente de Ethereum, etc. 
### Aprende más diff --git a/website/src/pages/es/resources/benefits.mdx b/website/src/pages/es/resources/benefits.mdx index e50969112dde..764dabb8ba50 100644 --- a/website/src/pages/es/resources/benefits.mdx +++ b/website/src/pages/es/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -La señal de curación en un subgrafo es una acción opcional de única vez y no tiene costo neto (por ejemplo, se pueden curar $1k en señales en un subgrafo y luego retirarlas, con el potencial de obtener retornos en el proceso). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/es/resources/glossary.mdx b/website/src/pages/es/resources/glossary.mdx index a3614062a63a..dfbe07decedf 100644 --- a/website/src/pages/es/resources/glossary.mdx +++ b/website/src/pages/es/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glosario - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: Glosario - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/es/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/es/resources/migration-guides/assemblyscript-migration-guide.mdx index 354d8c68a3e8..42a4b35e7677 100644 --- a/website/src/pages/es/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/es/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,49 +2,49 @@ title: Guía de Migración de AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Esto permitirá a los desarrolladores de subgrafos utilizar las nuevas características del lenguaje AS y la librería estándar. +That will enable Subgraph developers to use newer features of the AS language and standard library. -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +Esta guía es aplicable para cualquiera que use `graph-cli`/`graph-ts` bajo la versión `0.22.0`. 
Si ya estás en una versión superior (o igual) a esa, has estado usando la versión `0.19.10` de AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Características ### Nueva Funcionalidad -- `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) -- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) -- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) -- Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) -- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) -- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) -- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -- Added support for template literal strings 
([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) -- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) -- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) -- Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) -- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) +- `TypedArray`s ahora puede construirse desde `ArrayBuffer`s usando el [nuevo `wrap` método estático](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) +- Nuevas funciones de la biblioteca estándar: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Se agregó soporte para x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Se agregó `StaticArray`, una más eficiente variante de array ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) +- Se agregó `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Se implementó el argumento `radix` en `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- Se agregó soporte para los separadores en los literales de punto flotante ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) +- Se agregó soporte para las funciones de primera clase ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- Se agregaron builtins: `i32/i64/f32/f64.add/sub/mul` 
([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- Se implementó `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Se agregó soporte para las plantillas de strings literales ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) +- Se agregó `encodeURI(Component)` y `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- Se agregó `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- Se agregó `toUTCString` para `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- Se agregó `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) ### Optimizaciones -- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) -- Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) -- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Funciones `Math` como `exp`, `exp2`, `log`, `log2` y `pow` fueron reemplazadas por variantes más rápidas ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Optimizar ligeramente `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- Caché de más accesos a campos en std Map y Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Optimizar para potencias de dos en `ipow32/64` 
([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) ### Otros -- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- El tipo de un de array literal ahora puede inferirse a partir de su contenido ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Actualizado stdlib a Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) ## ¿Cómo actualizar? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,11 +52,11 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` -2. Update the `graph-cli` you're using to the `latest` version by running: +2. Actualiza la `graph-cli` que usas a la `última` versión: ```bash # si lo tiene instalada de forma global @@ -66,14 +66,14 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: +3. Haz lo mismo con `graph-ts`, pero en lugar de instalarlo globalmente, guárdalo en tus dependencias principales: ```bash npm install --save @graphprotocol/graph-ts@latest ``` 4. Sigue el resto de la guía para arreglar los cambios que rompen el lenguaje. -5. Run `codegen` and `deploy` again. +5. Ejecuta `codegen` y `deploy` nuevamente. ## Rompiendo los esquemas @@ -106,11 +106,11 @@ let maybeValue = load()! // rompiendo el runtime si el valor es nulo maybeValue.aMethod() ``` -Si no estás seguro de cuál elegir, te recomendamos que utilices siempre la versión segura. 
Si el valor no existe, es posible que quieras hacer una declaración if temprana con un retorno en tu handler de subgrafo. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Variable Shadowing -Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: +Antes podías hacer [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) y un código como este funcionaría: ```typescript let a = 10 @@ -132,7 +132,7 @@ Tendrás que cambiar el nombre de las variables duplicadas si tienes una variabl ### Comparaciones Nulas -Al hacer la actualización en un subgrafo, a veces pueden aparecer errores como estos: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -141,7 +141,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i in src/mappings/file.ts(41,21) ``` -To solve you can simply change the `if` statement to something like this: +Para solucionarlo puedes simplemente cambiar la declaración `if` por algo así: ```typescript if (!decimals) { @@ -155,7 +155,7 @@ Lo mismo ocurre si haces != en lugar de ==.
### Casting -The common way to do casting before was to just use the `as` keyword, like this: +La forma común de hacer el casting antes era simplemente usar la palabra clave `as`, de la siguiente forma: ```typescript let byteArray = new ByteArray(10) @@ -164,7 +164,7 @@ let uint8Array = byteArray as Uint8Array // equivalent to: byteArray Sin embargo, esto solo funciona en dos casos: -- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Casting de primitivas (entre tipos como `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); - Upcasting en la herencia de clases (subclase → superclase) Ejemplos: @@ -184,7 +184,7 @@ let bytes = new Bytes(2) // bytes // same as: bytes as Uint8Array ``` -There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: +Hay dos escenarios en los que puede querer cast, pero usando `as`/`var` **no es seguro**: - Downcasting en la herencia de clases (superclase → subclase) - Entre dos tipos que comparten una superclase @@ -206,7 +206,7 @@ let bytes = new Bytes(2) // bytes // breaks in runtime :( ``` -For those cases, you can use the `changetype` function: +Para esos casos, puedes usar la función `changetype`: ```typescript // downcasting on class inheritance @@ -217,7 +217,7 @@ changetype(uint8Array) // works :) ``` ```typescript -// between two types that share a superclass +// entre dos tipos que comparten un superclass class Bytes extends Uint8Array {} class ByteArray extends Uint8Array {} @@ -225,7 +225,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. +Si solo quieres eliminar la anulabilidad, puedes seguir usando el `as` operador (o `variable`), pero asegúrate de que el valor no puede ser nulo, de lo contrario se romperá. 
```typescript // eliminar anulabilidad @@ -238,7 +238,7 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 +Para el caso de la anulabilidad se recomienda echar un vistazo al [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), hará que tu código sea más limpio 🙂 También hemos añadido algunos métodos estáticos en algunos tipos para facilitar el casting, son: @@ -249,7 +249,7 @@ También hemos añadido algunos métodos estáticos en algunos tipos para facili ### Comprobación de anulabilidad con acceso a la propiedad -To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: +Para usar el [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) puedes usar la declaración `if` o el operador ternario (`?` and `:`) asi: ```typescript let something: string | null = 'data' @@ -267,7 +267,7 @@ if (something) { } ``` -However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: +Sin embargo eso solo funciona cuando estás haciendo el `if` / ternario en una variable, no en un acceso a una propiedad, como este: ```typescript class Container { @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -Hemos abierto un tema en el compilador de AssemblyScript para esto, pero por ahora si haces este tipo de operaciones en tus mapeos de subgrafos, deberías cambiarlos para hacer una comprobación de nulos antes de ello. 
+We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Compilará pero se romperá en tiempo de ejecución, eso ocurre porque el valor no ha sido inicializado, así que asegúrate de que tu subgrafo ha inicializado sus valores, así: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized @@ -381,7 +381,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: +Tendrás que asegurarte de inicializar el valor `total.amount`, porque si intentas acceder como en la última línea para la suma, se bloqueará.
Así que o bien la inicializas primero: ```typescript let total = Total.load('latest') @@ -394,7 +394,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 +O simplemente puedes cambiar tu esquema GraphQL para no usar un tipo anulable para esta propiedad, entonces la inicializaremos como cero en el paso `codegen` 😉 ```graphql type Total @entity { @@ -425,7 +425,7 @@ export class Something { } ``` -The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: +El compilador dará un error porque tienes que añadir un inicializador para las propiedades que son clases, o añadir el operador `!`: ```typescript export class Something { @@ -451,7 +451,7 @@ export class Something { ### Inicialización de Array -The `Array` class still accepts a number to initialize the length of the list, however you should take care because operations like `.push` will actually increase the size instead of adding to the beginning, for example: +La clase `Array` sigue aceptando un número para inicializar la longitud de la lista, sin embargo hay que tener cuidado porque operaciones como `.push` en realidad aumentarán el tamaño en lugar de añadirlo al principio, por ejemplo: ```typescript let arr = new Array(5) // ["", "", "", "", ""] @@ -465,7 +465,7 @@ Dependiendo de los tipos que estés utilizando, por ejemplo los anulables, y de ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type ``` -To actually push at the 
beginning you should either, initialize the `Array` with size zero, like this: +Para realmente hacer push al principio deberías o bien inicializar el `Array` con tamaño cero, así: ```typescript let arr = new Array(0) // [] @@ -483,7 +483,7 @@ arr[0] = 'something' // ["something", "", "", "", ""] ### Esquema GraphQL -This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. +Esto no es un cambio directo de AssemblyScript, pero es posible que tengas que actualizar tu archivo `schema.graphql`. Ahora ya no puedes definir campos en tus tipos que sean Listas No Anulables. Si tienes un esquema como este: @@ -498,7 +498,7 @@ type MyEntity @entity { } ``` -You'll have to add an `!` to the member of the List type, like this: +Tendrás que añadir un `!` al miembro del tipo Lista, así: ```graphql type Something @entity { @@ -511,14 +511,14 @@ type MyEntity @entity { } ``` -This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). +Esto cambió debido a las diferencias de anulabilidad entre las versiones de AssemblyScript, y está relacionado con el archivo `src/generated/schema.ts` (ruta por defecto, puede que lo hayas cambiado). ### Otros -- Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers.
Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) -- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) -- Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Alineado `Map#set` y `Set#add` con el spec, devolviendo `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Las arrays ya no heredan de ArrayBufferView, sino que son distintas ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Las clases inicializadas a partir de objetos literales ya no pueden definir un constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- El resultado de una operación binaria `**` es ahora el entero denominador común si ambos operandos son enteros. 
Anteriormente, el resultado era un flotante como si se llamara a `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- Se coerciona `NaN` a `false` al hacer casting a `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- Al desplazar un valor entero pequeño de tipo `i8`/`u8` o `i16`/`u16`, sólo los 3 o 4 bits menos significativos del valor RHS afectan al resultado, de forma análoga al resultado de un `i32.shl` que sólo se ve afectado por los 5 bits menos significativos del valor RHS. Ejemplo: `someI8 << 8` previamente producía el valor `0`, pero ahora produce `someI8` debido a enmascarar el RHS como `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- Corrección de errores en las comparaciones relacionales de strings cuando los tamaños difieren ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) diff --git a/website/src/pages/es/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/es/resources/migration-guides/graphql-validations-migration-guide.mdx index 55801738ddca..163b186ba828 100644 --- a/website/src/pages/es/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/es/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Guía de migración de Validaciones GraphQL +title: GraphQL Validations Migration Guide --- Pronto `graph-node` admitirá una cobertura del 100% de la [especificación de validaciones GraphQL](https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ Para ser compatible con esas validaciones, por favor sigue la guía de migración Puedes utilizar la herramienta de migración CLI para encontrar cualquier problema en tus operaciones GraphQL y solucionarlo.
Alternativamente, puedes actualizar el endpoint de tu cliente GraphQL para usar el endpoint `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Probar tus consultas contra este endpoint te ayudará a encontrar los problemas en tus consultas. -> No todos los subgrafos deberán migrarse, si estás utilizando [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) o [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), ya se aseguran de que tus consultas sean válidas. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Herramienta de migración de la línea de comandos diff --git a/website/src/pages/es/resources/roles/curating.mdx b/website/src/pages/es/resources/roles/curating.mdx index da189f62bf69..a3ec7ae0ce5e 100644 --- a/website/src/pages/es/resources/roles/curating.mdx +++ b/website/src/pages/es/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curación --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions.
In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. 
Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. 
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Cómo señalar -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Un curador puede optar por señalar una versión especifica de un subgrafo, o puede optar por que su señal migre automáticamente a la versión de producción mas reciente de ese subgrafo. Ambas son estrategias válidas y tienen sus pros y sus contras. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. 
Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Hacer que tu señal migre automáticamente a la más reciente compilación de producción puede ser valioso para asegurarse de seguir acumulando tarifas de consulta. Cada vez que curas, se incurre en un impuesto de curación del 1%. También pagarás un impuesto de curación del 0,5% en cada migración. Se desaconseja a los desarrolladores de Subgrafos que publiquen con frecuencia nuevas versiones - tienen que pagar un impuesto de curación del 0,5% en todas las acciones de curación auto-migradas. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. 
Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Riesgos 1. El mercado de consultas es inherentemente joven en The Graph y existe el riesgo de que su APY (Rentabilidad anualizada) sea más bajo de lo esperado debido a la dinámica del mercado que recién está empezando. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Un subgrafo puede fallar debido a un error. Un subgrafo fallido no acumula tarifas de consulta. Como resultado, tendrás que esperar hasta que el desarrollador corrija el error e implemente una nueva versión. 
- - Si estás suscrito a la versión más reciente de un subgrafo, tus acciones se migrarán automáticamente a esa nueva versión. Esto incurrirá un impuesto de curación del 0.5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Preguntas frecuentes sobre Curación ### 1. ¿Qué porcentaje de las tasas de consulta ganan los curadores? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 
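As a back-of-the-envelope sketch of the pro-rata split stated above (10% of query fees go to Curators in proportion to their curation shares; this is not the protocol's actual accounting, and all concrete numbers below are hypothetical):

```typescript
// Illustrative only: split the 10% curator cut of query fees pro-rata
// by curation shares (GCS). The 10% figure is the one stated in the
// text and is subject to governance; everything else is made up.
function curatorFeeShare(
  queryFeesGrt: number, // total query fees the Subgraph generated
  myShares: number, // this curator's GCS
  totalShares: number // all outstanding GCS for the Subgraph
): number {
  const curatorPool = queryFeesGrt * 0.1 // the 10% curator cut
  return curatorPool * (myShares / totalShares)
}

// Holding a quarter of the shares on 1,000 GRT of fees:
console.log(curatorFeeShare(1_000, 25, 100)) // 25 GRT
```

The same shape explains why signal on low-quality Subgraphs earns little: with few queries, `queryFeesGrt` (and hence the curator pool) stays small regardless of share count.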
10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. ¿Cómo decido qué subgrafos son de alta calidad para señalar? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. 
Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. ¿Puedo vender mis acciones de curación? diff --git a/website/src/pages/es/resources/subgraph-studio-faq.mdx b/website/src/pages/es/resources/subgraph-studio-faq.mdx index 14174cc468bf..1d2ebbae57a6 100644 --- a/website/src/pages/es/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/es/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Preguntas Frecuentes sobre Subgraph Studio ## 1. ¿Qué es Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. ¿Cómo creo una clave API? @@ -12,20 +12,20 @@ To create an API, navigate to Subgraph Studio and connect your wallet. 
You will ## 3. ¿Puedo crear múltiples claves de API? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +¡Sí! Puedes crear varias claves de API para usar en diferentes proyectos. Consulta el enlace [aquí](https://thegraph.com/studio/apikeys/). ## 4. ¿Cómo restrinjo un dominio para una clave API? Después de crear una clave de API, en la sección Seguridad, puedes definir los dominios que pueden consultar una clave de API específica. -## 5. ¿Puedo transferir mi subgrafo a otro propietario? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Ten en cuenta que ya no podrás ver o editar el subgrafo en Studio una vez que haya sido transferido. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. ¿Cómo encuentro URLs de consulta para subgrafos si no soy el desarrollador del subgrafo que quiero usar? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. 
When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Recuerda que puedes crear una clave API y consultar cualquier subgrafo publicado en la red, incluso si tú mismo construyes un subgrafo. Estas consultas a través de la nueva clave API, son consultas pagadas como cualquier otra en la red. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key, are paid queries as any other on the network. diff --git a/website/src/pages/es/resources/tokenomics.mdx b/website/src/pages/es/resources/tokenomics.mdx index cd30274637ea..a15d15155fd5 100644 --- a/website/src/pages/es/resources/tokenomics.mdx +++ b/website/src/pages/es/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Descripción -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curadores - Encuentran los mejores subgrafos para los Indexadores +2. 
Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexadores: Son la columna vertebral de los datos de la blockchain @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. 
While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creación de un subgrafo +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
+Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Consulta de un subgrafo existente +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. 
Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. 
+Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. 
These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/es/sps/introduction.mdx b/website/src/pages/es/sps/introduction.mdx index 344648a4c8a4..4340733cfc84 100644 --- a/website/src/pages/es/sps/introduction.mdx +++ b/website/src/pages/es/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introducción a los Subgrafos Impulsados por Substreams sidebarTitle: Introducción --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Descripción -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics Existen dos métodos para habilitar esta tecnología: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2.
**Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Recursos Adicionales @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/es/sps/sps-faq.mdx b/website/src/pages/es/sps/sps-faq.mdx index 592bdff3db63..dd7685e1a4be 100644 --- a/website/src/pages/es/sps/sps-faq.mdx +++ b/website/src/pages/es/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). 
Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## ¿Qué son los subgrafos impulsados por Substreams? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## ¿Cómo se diferencian los subgrafos impulsados por Substreams de los subgrafos tradicionales? +## How are Substreams-powered Subgraphs different from Subgraphs? Los subgrafos están compuestos por fuentes de datos que especifican eventos en la cadena de bloques, y cómo esos eventos deben ser transformados mediante controladores escritos en AssemblyScript.
Estos eventos se procesan de manera secuencial, según el orden en el que ocurren los eventos onchain. -En cambio, los subgrafos potenciados por Substreams tienen una única fuente de datos que hace referencia a un paquete de Substreams, que es procesado por Graph Node. Los Substreams tienen acceso a datos más granulares onchain en comparación con los subgrafos convencionales, y también pueden beneficiarse de un procesamiento masivamente paralelizado, lo que puede significar tiempos de procesamiento mucho más rápidos. +By contrast, Substreams-powered Subgraphs have a single data source that references a Substreams package, which is processed by Graph Node. Substreams have access to more granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times. -## ¿Cuáles son los beneficios de usar subgrafos potenciados por Substreams? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph.
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## ¿Cuáles son los beneficios de Substreams? @@ -35,7 +35,7 @@ Hay muchos beneficios al usar Substreams, incluyendo: - Indexación de alto rendimiento: Indexación mucho más rápida mediante grandes clústeres de operaciones en paralelo (piensa en BigQuery). -- Almacenamiento en cualquier lugar: Envía tus datos a donde quieras: PostgreSQL, MongoDB, Kafka, subgrafos, archivos planos, Google Sheets. +- Sink anywhere: Sink your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programable: Usa código para personalizar la extracción, realizar agregaciones en tiempo de transformación y modelar tu salida para múltiples destinos. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## ¿Dónde pueden los desarrolladores acceder a más información sobre los subgrafos potenciados por Substreams y Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? La [documentación de Substreams] (/substreams/introduction/) te enseñará cómo construir módulos de Substreams. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
La [última herramienta de Substreams Codegen] (https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) te permitirá iniciar un proyecto de Substreams sin necesidad de escribir código. ## ¿Cuál es el papel de los módulos de Rust en Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst Cuando se usa Substreams, la composición ocurre en la capa de transformación, lo que permite que los módulos en caché sean reutilizados. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. 
A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## ¿Dónde puedo encontrar ejemplos de Substreams y Subgrafos potenciados por Substreams? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -Puedes visitar [este repositorio de Github] (https://github.com/pinax-network/awesome-substreams) para encontrar ejemplos de Substreams y Subgrafos potenciados por Substreams. +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## ¿Qué significan los Substreams y los subgrafos impulsados por Substreams para The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/es/sps/triggers.mdx b/website/src/pages/es/sps/triggers.mdx index a0b15ced3b13..16db4057a732 100644 --- a/website/src/pages/es/sps/triggers.mdx +++ b/website/src/pages/es/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Descripción -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
-By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -El siguiente código demuestra cómo definir una función 'handleTransactions' en un controlador de subgraph. Esta función recibe bytes sin procesar de Substreams como parámetro y los decodifica en un objeto 'Transactions'. Para cada transacción, se crea una nueva entidad en el subgrafo. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. Los bytes que contienen los datos de Substreams se decodifican en el objeto 'Transactions' generado, y este objeto se utiliza como cualquier otro objeto de AssemblyScript. 2. Iterando sobre las transacciones -3. Crear una nueva entidad de subgrafo para cada transacción. +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). 
### Recursos Adicionales diff --git a/website/src/pages/es/sps/tutorial.mdx b/website/src/pages/es/sps/tutorial.mdx index 0c289f179d4b..52ebe46d1753 100644 --- a/website/src/pages/es/sps/tutorial.mdx +++ b/website/src/pages/es/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Configurar un Subgrafo Potenciado por Substreams en Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Comenzar @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Paso 2: Generar el Manifiesto del Subgrafo -Una vez que el proyecto esté inicializado, genera un manifiesto de subgraph ejecutando el siguiente comando en el Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgrafo @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Paso 3: Definir Entidades en schema.graphql -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ Este esquema define una entidad 'MyTransfer' con campos como 'id', 'amount', 'so With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID to Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ Para generar objetos Protobuf en AssemblyScript, ejecuta el siguiente comando: npm run protogen ``` -Este comando convierte las definiciones de Protobuf en AssemblyScript, lo que te permite usarlas en el controlador del subgrafo. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/es/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/es/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/es/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/es/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events.
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/es/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/es/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/es/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/es/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
diff --git a/website/src/pages/es/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/es/subgraphs/best-practices/grafting-hotfix.mdx index 0f85dfc8acf6..39750e51189d 100644 --- a/website/src/pages/es/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/es/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Descripción -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Recursos Adicionales - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/es/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/es/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..c2bd2e50b23c 100644 --- a/website/src/pages/es/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/es/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -22,7 +22,7 @@ type Transfer @entity(immutable: true) { By making the `Transfer` entity immutable, graph-node is able to process the entity more efficiently, improving indexing speeds and query responsiveness. -Immutable Entities structures will not change in the future. 
An ideal entity to become an Immutable Entity would be an entity that is directly logging onchain event data, such as a `Transfer` event being logged as a `Transfer` entity. +Immutable Entities structures will not change in the future. An ideal entity to become an Immutable Entity would be an entity that is directly logging on-chain event data, such as a `Transfer` event being logged as a `Transfer` entity. ### Under the hood @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. 
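To illustrate the ID shape outside of graph-ts, here is a standalone TypeScript sketch of building a Bytes-style ID by appending a 32-bit log index to a transaction hash. The helper name mirrors graph-ts's `concatI32`, but this is an illustrative reimplementation, not the library code, and the little-endian byte order is an assumption for the sketch:

```typescript
// Illustrative reimplementation of a Bytes-style ID builder.
// graph-ts provides `Bytes.concatI32`; this standalone version only
// demonstrates the shape of the resulting ID.
function concatI32(hash: Uint8Array, index: number): Uint8Array {
  const out = new Uint8Array(hash.length + 4);
  out.set(hash, 0);
  // Append the index as 4 little-endian bytes (assumed byte order).
  out[hash.length] = index & 0xff;
  out[hash.length + 1] = (index >>> 8) & 0xff;
  out[hash.length + 2] = (index >>> 16) & 0xff;
  out[hash.length + 3] = (index >>> 24) & 0xff;
  return out;
}

// A 4-byte stand-in for a 32-byte transaction hash.
const txHash = new Uint8Array([0xde, 0xad, 0xbe, 0xef]);
const id = concatI32(txHash, 7);
console.log(id.length); // 8: hash bytes plus 4-byte index
```

The resulting fixed-width binary ID is what lets the database index and compare keys faster than the equivalent `hash.toHex() + "-" + logIndex.toString()` string.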
Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). diff --git a/website/src/pages/es/subgraphs/best-practices/pruning.mdx b/website/src/pages/es/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/es/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/es/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. 
`indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <number>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements.
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/es/subgraphs/best-practices/timeseries.mdx b/website/src/pages/es/subgraphs/best-practices/timeseries.mdx index 991ac69c38b7..bfda432f7555 100644 --- a/website/src/pages/es/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/es/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Descripción @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `specVersion` 1.1.0 or higher for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: @@ -51,7 +55,7 @@ Ejemplo: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Ejemplo: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum.
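Assuming the `Stats` aggregation defined above, hourly buckets could then be queried roughly like this (aggregations take a mandatory `interval` argument per the graph-node aggregations docs; the timestamp value is a placeholder):

```graphql
{
  stats(interval: "hour", where: { timestamp_gt: 1704067200 }) {
    id
    timestamp
    sum
  }
}
```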
### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \*, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/es/subgraphs/billing.mdx b/website/src/pages/es/subgraphs/billing.mdx index b2210285e434..d8535da9fcb7 100644 --- a/website/src/pages/es/subgraphs/billing.mdx +++ b/website/src/pages/es/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Facturación ## Planes de consultas -Existen dos planes para usar al consultar subgrafos en The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network.
- **Plan Gratuito**: El Plan Gratuito incluye 100.000 consultas mensuales gratuitas con acceso completo al entorno de pruebas de Subgraph Studio. Este plan está diseñado para aficionados, participantes de hackatones y aquellos con proyectos paralelos que deseen probar The Graph antes de escalar su dapp. - Plan de Expansión: El Plan de Expansión incluye todo lo que ofrece el Plan Gratuito, pero todas las consultas que excedan las 100.000 consultas mensuales requieren pagos con GRT o tarjeta de crédito. El Plan de Expansión es lo suficientemente flexible como para cubrir las necesidades de equipos con dapps consolidadas en una variedad de casos de uso. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Pagos de consultas con tarjeta de crédito @@ -59,7 +61,7 @@ Una vez que transfieras GRT, puedes agregarlo a tu saldo de facturación. 5. Selecciona "Cripto". Actualmente, GRT es la única criptomoneda aceptada en The Graph Network. 6. Selecciona la cantidad de meses que deseas pagar por adelantado. - Pagar por adelantado no te compromete a un uso futuro. Solo se te cobrará por lo que utilices, y puedes retirar tu saldo en cualquier momento. -7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. +7. Elige la red desde la cual vas a depositar tu GRT. GRT en Arbitrum o Ethereum ambas opciones son aceptables. 8. Haz clic en "Permitir acceso a GRT" y luego especifica la cantidad de GRT que se puede tomar de tu wallet. - Si estás pagando por adelantado varios meses, debes permitirle acceso a la cantidad que corresponde con ese monto. Esta interacción no tendrá costo de gas. 9. Por último, haz clic en "Agregar GRT al saldo de facturación". Esta transacción requerirá ETH en Arbitrum para cubrir los costos de gas. @@ -103,70 +105,70 @@ Esta será una guía paso a paso para comprar GRT en Coinbase. 2. 
Una vez que hayas creado una cuenta, necesitarás verificar tu identidad a través de un proceso conocido como KYC (o Conoce a tu Cliente). Este es un procedimiento estándar para todos los intercambios de criptomonedas centralizados o con custodia de activos. 3. Una vez que hayas verificado tu identidad, puedes comprar GRT. Para hacerlo, haz clic en el botón "Comprar/Vender" en la parte superior derecha de la página. 4. Selecciona la moneda que deseas comprar. Selecciona GRT. -5. Select the payment method. Select your preferred payment method. -6. Select the amount of GRT you want to purchase. -7. Review your purchase. Review your purchase and click "Buy GRT". -8. Confirm your purchase. Confirm your purchase and you will have successfully purchased GRT. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - To transfer the GRT to your wallet, click on the "Accounts" button on the top right of the page. - - Click on the "Send" button next to the GRT account. - - Enter the amount of GRT you want to send and the wallet address you want to send it to. - - Click "Continue" and confirm your transaction. -Please note that for larger purchase amounts, Coinbase may require you to wait 7-10 days before transferring the full amount to a wallet. - -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +5. Selecciona el método de pago. Elige tu método de pago preferido. +6. Selecciona la cantidad de GRT que deseas comprar. +7. Revisa tu compra. Revisa los detalles de tu compra y haz clic en "Comprar GRT". +8. Confirma tu compra. Confirma tu compra y habrás adquirido GRT con éxito. +9. Puedes transferir el GRT desde tu cuenta a tu billetera, como [MetaMask](https://metamask.io/). + - Para transferir el GRT a tu billetera, haz clic en el botón "Cuentas" en la parte superior derecha de la página. 
+ - Haz clic en el botón "Enviar" junto a la cuenta de GRT. + - Ingresa la cantidad de GRT que deseas enviar y la dirección de la wallet a la que quieres enviarlo. + - Haz clic en "Continuar" y confirma tu transacción. Ten en cuenta que, para montos de compra más grandes, Coinbase puede requerir que esperes de 7 a 10 días antes de transferir la cantidad completa a una wallet. + +Puedes obtener más información sobre cómo obtener GRT en Coinbase [aquí](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance -This will be a step by step guide for purchasing GRT on Binance. +Esta será una guía paso a paso para comprar GRT en Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Ve a [Binance](https://www.binance.com/en) y crea una cuenta. 2. Una vez que hayas creado una cuenta, necesitarás verificar tu identidad a través de un proceso conocido como KYC (o Conoce a tu Cliente). Este es un procedimiento estándar para todos los intercambios de criptomonedas centralizados o con custodia de activos. -3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy Now" button on the homepage banner. -4. You will be taken to a page where you can select the currency you want to purchase. Select GRT. -5. Select your preferred payment method. You'll be able to pay with different fiat currencies such as Euros, US Dollars, and more. -6. Select the amount of GRT you want to purchase. -7. Review your purchase and click "Buy GRT". -8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist.
- - Click on the "wallet" button, click withdraw, and select GRT. - - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - - Click "Continue" and confirm your transaction. - -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +3. Una vez que hayas verificado tu identidad, puedes comprar GRT. Para hacerlo, haz clic en el botón "Comprar ahora" en el banner de la página de inicio. +4. Serás redirigido a una página donde podrás seleccionar la moneda que deseas comprar. Selecciona GRT. +5. Selecciona tu método de pago preferido. Podrás pagar con diferentes monedas fiduciarias, como euros, dólares estadounidenses y más. +6. Selecciona la cantidad de GRT que deseas comprar. +7. Revisa tu compra y haz clic en "Comprar GRT". +8. Confirma tu compra y podrás ver tu GRT en tu wallet Spot de Binance. +9. Puedes retirar el GRT de tu cuenta a tu wallet, como [MetaMask](https://metamask.io/). + - Para [retirar](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) el GRT a tu wallet, añade la dirección de tu wallet a la lista de retiros autorizados. + - Haz clic en el botón "wallet", haz clic en retirar y selecciona GRT. + - Ingresa la cantidad de GRT que deseas enviar y la dirección de wallet autorizada a la que quieres enviarlo. + - Haz clic en "Continuar" y confirma tu transacción. + +Puedes obtener más información sobre cómo obtener GRT en Binance [aquí](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ### Uniswap -This is how you can purchase GRT on Uniswap. +Así es como puedes comprar GRT en Uniswap. -1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. -2. Select the token you want to swap from. Select ETH. -3.
Select the token you want to swap to. Select GRT. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -4. Enter the amount of ETH you want to swap. -5. Click "Swap". -6. Confirm the transaction in your wallet and you wait for the transaction to process. +1. Ve a [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) y conecta tu wallet. +2. Selecciona el token del que deseas intercambiar. Selecciona ETH. +3. Selecciona el token al que deseas intercambiar. Selecciona GRT. + - Asegúrate de que estás intercambiando por el token correcto. La dirección del contrato inteligente de GRT en Arbitrum One es: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +4. Ingresa la cantidad de ETH que deseas intercambiar. +5. Haz clic en "Intercambiar". +6. Confirma la transacción en tu wallet y espera a que la transacción se procese. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Puedes obtener más información sobre cómo obtener GRT en Uniswap [aquí](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). -## Getting Ether +## Obtener Ether -This section will show you how to get Ether (ETH) to pay for transaction fees or gas costs. ETH is necessary to execute operations on the Ethereum network such as transferring tokens or interacting with contracts. +Esta sección te mostrará cómo obtener Ether (ETH) para pagar las tarifas de transacción o los costos de gas. ETH es necesario para ejecutar operaciones en la red de Ethereum, como transferir tokens o interactuar con contratos. ### Coinbase -This will be a step by step guide for purchasing ETH on Coinbase. +Esta será una guía paso a paso para comprar ETH en Coinbase. 1. 
Ve a [Coinbase](https://www.coinbase.com/) y crea una cuenta. 2. Una vez que hayas creado una cuenta, verifica tu identidad a través de un proceso conocido como KYC (o Conoce a tu Cliente). Este es un procedimiento estándar para todos los exchanges centralizados o que mantienen custodia de criptomonedas. -3. Once you have verified your identity, purchase ETH by clicking on the "Buy/Sell" button on the top right of the page. +3. Una vez que hayas verificado tu identidad, compra ETH haciendo clic en el botón "Comprar/Vender" en la esquina superior derecha de la página. 4. Selecciona la moneda que deseas comprar. Elige ETH. 5. Selecciona tu método de pago preferido. 6. Ingresa la cantidad de ETH que deseas comprar. 7. Revisa tu compra y haz clic en "Comprar ETH". -8. Confirm your purchase and you will have successfully purchased ETH. -9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/). - - To transfer the ETH to your wallet, click on the "Accounts" button on the top right of the page. +8. Confirma tu compra y habrás adquirido ETH con éxito. +9. Puedes transferir el ETH desde tu cuenta de Coinbase a tu billetera, como [MetaMask](https://metamask.io/). + - Para transferir el ETH a tu billetera, haz clic en el botón "Cuentas" en la esquina superior derecha de la página. - Haz clic en el botón "Enviar" junto a la cuenta de ETH. - Ingresa la cantidad de ETH que deseas enviar y la dirección de la wallet a la que quieres enviarlo. - Asegúrate de que estás enviando a la dirección de tu wallet de Ethereum en Arbitrum One. @@ -178,18 +180,18 @@ Puedes obtener más información sobre cómo adquirir ETH en Coinbase [aquí](ht Esta será una guía paso a paso para comprar ETH en Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Ve a [Binance](https://www.binance.com/en) y crea una cuenta. 2. 
Una vez que hayas creado una cuenta, verifica tu identidad a través de un proceso conocido como KYC (o Conoce a tu Cliente). Este es un procedimiento estándar para todos los exchanges centralizados o que mantienen custodia de criptomonedas. 3. Una vez que hayas verificado tu identidad, compra ETH haciendo clic en el botón "Comprar ahora" en el banner de la página de inicio. 4. Selecciona la moneda que deseas comprar. Elige ETH. -5. Selecciona tu método de pago preferido. +5. Selecciona tu método de pago de preferencia. 6. Ingresa la cantidad de ETH que deseas comprar. 7. Revisa tu compra y haz clic en "Comprar ETH". -8. Confirm your purchase and you will see your ETH in your Binance Spot Wallet. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). +8. Confirma tu compra y verás tu ETH en tu Wallet Spot de Binance. +9. Puedes retirar el ETH de tu cuenta a tu wallet, como [MetaMask](https://metamask.io/). - Para retirar el ETH a tu wallet, añade la dirección de tu wallet a la lista de direcciones autorizadas para retiros. - - Click on the "wallet" button, click withdraw, and select ETH. - - Enter the amount of ETH you want to send and the whitelisted wallet address you want to send it to. + - Haz clic en el botón "wallet", luego en "retirar" y selecciona ETH. + - Ingresa la cantidad de ETH que deseas enviar y la dirección de wallet autorizada a la que quieres enviarlo. - Asegúrate de que estás enviando a la dirección de tu wallet de Ethereum en Arbitrum One. - Haz clic en "Continuar" y confirma tu transacción. 
diff --git a/website/src/pages/es/subgraphs/developing/_meta-titles.json b/website/src/pages/es/subgraphs/developing/_meta-titles.json index 01a91b09ed77..ba2fe22a0c4d 100644 --- a/website/src/pages/es/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/es/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { "creating": "Creating", - "deploying": "Deploying", - "publishing": "Publishing", - "managing": "Managing" + "deploying": "Desplegando", + "publishing": "Publicando", + "managing": "Administrando" } diff --git a/website/src/pages/es/subgraphs/developing/creating/advanced.mdx b/website/src/pages/es/subgraphs/developing/creating/advanced.mdx index 63cf8f312906..eec792c562e4 100644 --- a/website/src/pages/es/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Descripción -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Errores no fatales -Los errores de indexación en subgrafos ya sincronizados provocarán, por defecto, que el subgrafo falle y deje de sincronizarse. Los subgrafos pueden ser configurados de manera alternativa para continuar la sincronización en presencia de errores, ignorando los cambios realizados por el handler que provocó el error. Esto da a los autores de los subgrafos tiempo para corregir sus subgrafos mientras las consultas continúan siendo servidas contra el último bloque, aunque los resultados serán posiblemente inconsistentes debido al bug que provocó el error. Nótese que algunos errores siguen siendo siempre fatales, para que el error no sea fatal debe saberse que es deterministico. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Para activar los errores no fatales es necesario establecer el siguiente indicador en el manifiesto del subgrafo: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. 
File data sources support fetching files from IPFS and from Arweave. > Esto también establece las bases para la indexación determinista de datos off-chain, así como la posible introducción de datos arbitrarios procedentes de HTTP. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Ejemplo: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file ¡Felicitaciones, estás utilizando fuentes de datos de archivos! -#### Deploy de tus subgrafos +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitaciones -Los handlers y entidades de fuentes de datos de archivos están aislados de otras entidades del subgrafo, asegurando que son deterministas cuando se ejecutan, y asegurando que no se contaminan las fuentes de datos basadas en cadenas. 
En concreto: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Las entidades creadas por File Data Sources son inmutables y no pueden actualizarse - Los handlers de File Data Source no pueden acceder a entidades de otras fuentes de datos de archivos - Los handlers basados en cadenas no pueden acceder a las entidades asociadas a File Data Sources -> Aunque esta restricción no debería ser problemática para la mayoría de los casos de uso, puede introducir complejidad para algunos. Si tienes problemas para modelar tus datos basados en archivos en un subgrafo, ponte en contacto con nosotros a través de Discord! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Además, no es posible crear fuentes de datos a partir de una File Data Source, ya sea una fuente de datos on-chain u otra File Data Source. Es posible que esta restricción se elimine en el futuro. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
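As an illustrative sketch of how an address filter value becomes a topic (plain TypeScript, not part of the official tooling): an indexed `address` argument is compared against the 20-byte address left-padded with zeros to a full 32-byte topic word, so a value such as the `0xAddressA` placeholder above corresponds to a padded topic like this:

```typescript
// Hypothetical helper: turn an indexed address argument into the
// 32-byte topic value that is actually compared against the log's topics.
function addressToTopic(address: string): string {
  const hex = address.toLowerCase().replace(/^0x/, "");
  if (hex.length !== 40) throw new Error("expected a 20-byte address");
  // Left-pad the 40 hex chars (20 bytes) to 64 hex chars (32 bytes).
  return "0x" + hex.padStart(64, "0");
}

const topic1 = addressToTopic("0x1111111111111111111111111111111111111111");
console.log(topic1);
// 0x0000000000000000000000001111111111111111111111111111111111111111
```

This is only to build intuition for what the node matches under the hood; in the manifest you list the plain addresses, and Graph Node handles the encoding.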
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
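The timing benefit described above is ordinary concurrent scheduling: sequential calls cost the sum of their durations, parallel calls cost only the slowest one. A rough sketch with made-up call durations (not real Graph Node code):

```typescript
// Simulated eth_calls with hypothetical durations, in "seconds".
const calls = [
  { name: "getTransactions", cost: 3 },
  { name: "getBalance", cost: 2 },
  { name: "getTokenHoldings", cost: 4 },
];

// Sequential execution: total time is the sum of all call durations.
const sequentialTime = calls.reduce((total, c) => total + c.cost, 0); // 3 + 2 + 4 = 9

// Declared (parallel) execution: total time is only the slowest call.
const parallelTime = Math.max(...calls.map((c) => c.cost)); // max(3, 2, 4) = 4

console.log(`sequential: ${sequentialTime}s, parallel: ${parallelTime}s`);
```

The numbers mirror the worked example in this section: 9 seconds sequentially versus 4 seconds when the calls are declared and run in parallel.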
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Debido a que el grafting copia en lugar de indexar los datos base, es mucho más rápido llevar el subgrafo al bloque deseado que indexar desde cero, aunque la copia inicial de los datos aún puede llevar varias horas para subgrafos muy grandes. Mientras se inicializa el subgrafo grafted, Graph Node registrará información sobre los tipos de entidad que ya han sido copiados. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -El subgrafo grafteado puede utilizar un esquema GraphQL que no es idéntico al del subgrafo base, sino simplemente compatible con él. 
Tiene que ser un esquema de subgrafo válido por sí mismo, pero puede diferir del esquema del subgrafo base de las siguientes maneras: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Agrega o elimina tipos de entidades - Elimina los atributos de los tipos de entidad @@ -560,4 +560,4 @@ El subgrafo grafteado puede utilizar un esquema GraphQL que no es idéntico al d - Agrega o elimina interfaces - Cambia para qué tipos de entidades se implementa una interfaz -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/es/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/es/subgraphs/developing/creating/assemblyscript-mappings.mdx index 792a6521f82d..520914f913f6 100644 --- a/website/src/pages/es/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Generación de código -Para que trabajar con contratos inteligentes, eventos y entidades sea fácil y seguro desde el punto de vista de los tipos, Graph CLI puede generar tipos AssemblyScript a partir del esquema GraphQL del subgrafo y de las ABIs de los contratos incluidas en las fuentes de datos. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. Esto se hace con @@ -80,7 +80,7 @@ Esto se hace con graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx index 67ec89027c6b..7673a925ad21 100644 --- a/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versiones -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Version | Notas del lanzamiento | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creacion de entidades @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ La API de Ethereum proporciona acceso a los contratos inteligentes, a las variab #### Compatibilidad con los tipos de Ethereum -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. 
Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -El siguiente ejemplo lo ilustra. Dado un esquema de subgrafos como +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Acceso al Estado del Contrato Inteligente -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. Un patrón común es acceder al contrato desde el que se origina un evento. Esto se consigue con el siguiente código: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Cualquier otro contrato que forme parte del subgrafo puede ser importado desde el código generado y puede ser vinculado a una dirección válida. 
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Tratamiento de las Llamadas Revertidas @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line.
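The `json` flag's one-value-per-line format can be modeled in plain TypeScript. This is an illustrative sketch of the semantics only, not the actual `graph-ts` `ipfs.map` API; the `mapJsonLines` function and its callback shape are hypothetical names for illustration:

```typescript
// Illustrative model of line-by-line JSON processing (NOT graph-ts):
// each non-empty line of the file is parsed as one JSON value and
// handed to a callback, with an extra user-data value threaded through,
// mirroring the `processItem` / `parentId` usage shown above.
type JSONValue = unknown

function mapJsonLines(
  fileContents: string,
  callback: (value: JSONValue, userData: string) => void,
  userData: string,
): void {
  for (const line of fileContents.split('\n')) {
    if (line.trim().length === 0) continue // skip blank/trailing lines
    callback(JSON.parse(line), userData) // one JSON value per line
  }
}

// Example: collect each parsed value together with its parent ID.
const seen: Array<{ parent: string; value: JSONValue }> = []
mapJsonLines(
  '{"id":1}\n{"id":2}\n',
  (value, parent) => seen.push({ parent, value }),
  'parentId',
)
```

In the real API the callback would use entity operations instead of pushing to an array, and failures abort the calling handler as described below.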
The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### API Cripto @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
diff --git a/website/src/pages/es/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/es/subgraphs/developing/creating/graph-ts/common-issues.mdx index 9b540b6d07d4..6d2a39b9e67b 100644 --- a/website/src/pages/es/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Problemas comunes de AssemblyScript --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are commonly encountered during Subgraph development. They vary in debugging difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
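The closure limitation above can be made concrete with a small sketch. The code below is plain TypeScript (where closures do work), with comments noting which variant also compiles under AssemblyScript's restrictions; `sumWithClosure` and `sumWithoutClosure` are hypothetical names for illustration:

```typescript
// In AssemblyScript, a closure cannot read variables declared outside it,
// so accumulating into an outer variable from a callback does not compile.
function sumWithClosure(values: number[]): number {
  let total = 0
  values.forEach((v) => {
    total += v // fine in TypeScript, NOT in AssemblyScript (captured variable)
  })
  return total
}

// A portable workaround: keep the state in the same scope as the loop,
// so nothing needs to be captured.
function sumWithoutClosure(values: number[]): number {
  let total = 0
  for (let i = 0; i < values.length; i++) {
    total += values[i] // no capture; this pattern compiles in AssemblyScript too
  }
  return total
}
```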
diff --git a/website/src/pages/es/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/es/subgraphs/developing/creating/install-the-cli.mdx index 5a0e73fd0bbd..d968a59b17ff 100644 --- a/website/src/pages/es/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Instalar The Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Descripción -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Empezando @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Crear un Subgrafo ### Desde un Contrato Existente -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### De un Subgrafo de Ejemplo -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is Los archivos ABI deben coincidir con tu(s) contrato(s). Hay varias formas de obtener archivos ABI: - Si estás construyendo tu propio proyecto, es probable que tengas acceso a tus ABIs más actuales. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Notas del lanzamiento | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/es/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/es/subgraphs/developing/creating/ql-schema.mdx index 09924401ce11..2f2b8c25e231 100644 --- a/website/src/pages/es/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Descripción -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -En el caso de las relaciones one-to-many, la relación debe almacenarse siempre en el lado "one", y el lado "many" debe derivarse siempre. Almacenar la relación de esta manera, en lugar de almacenar una array de entidades en el lado "many", resultará en un rendimiento dramáticamente mejor tanto para la indexación como para la consulta del subgrafo. En general, debe evitarse, en la medida de lo posible, el almacenamiento de arrays de entidades. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
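The "store one side, derive the other" principle can be sketched in plain TypeScript using the `TokenBalance` type from the surrounding example. This models the idea only; it is not graph-ts, and the sample data and `balancesOf` helper are hypothetical:

```typescript
// Each TokenBalance stores the ID of its token (the 'one' side).
// The token's list of balances (the 'many' side) is *derived* by
// filtering at query time, like a @derivedFrom field, instead of being
// an array that every mapping would have to keep up to date.
interface TokenBalance {
  id: string
  token: string // ID of the Token this balance belongs to
  amount: number
}

const balances: TokenBalance[] = [
  { id: 'b1', token: 'DAI', amount: 10 },
  { id: 'b2', token: 'DAI', amount: 5 },
  { id: 'b3', token: 'USDC', amount: 7 },
]

// The derived side: computed on demand, never stored.
function balancesOf(tokenId: string): TokenBalance[] {
  return balances.filter((b) => b.token === tokenId)
}
```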
#### Ejemplo @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Esta forma más elaborada de almacenar las relaciones many-to-many se traducirá en menos datos almacenados para el subgrafo y, por tanto, en un subgrafo que suele ser mucho más rápido de indexar y consultar. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore in a Subgraph that is often dramatically faster to index and to query. ### Agregar comentarios al esquema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Idiomas admitidos diff --git a/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx index 76ff7db16bba..aad5349fb149 100644 --- a/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Descripción -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| Version | Notas del lanzamiento | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/es/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/es/subgraphs/developing/creating/subgraph-manifest.mdx index c825906fef29..c2a3e8156927 100644 --- a/website/src/pages/es/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Descripción -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.
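Conceptually, the manifest wires onchain events to handlers that write entities into the store. A minimal plain-TypeScript model of that flow, using the `NewGravatar` example from these docs (the `store` map below stands in for Graph Node's store and is not a real API):

```typescript
// A hypothetical event shape, modeled on the Gravity example's NewGravatar.
interface NewGravatarEvent {
  id: string
  displayName: string
}

// Stand-in for the Graph Node entity store keyed by entity ID.
const store = new Map<string, { id: string; displayName: string }>()

// The manifest's eventHandlers section is, conceptually, this registration:
// "when NewGravatar fires, run handleNewGravatar".
function handleNewGravatar(event: NewGravatarEvent): void {
  store.set(event.id, { id: event.id, displayName: event.displayName })
}

handleNewGravatar({ id: '0x1', displayName: 'Alice' })
```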
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
Las entradas importantes a actualizar para el manifiesto son: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Los call handlers solo se activarán en uno de estos dos casos: cuando la función especificada sea llamada por una cuenta distinta del propio contrato o cuando esté marcada como externa en Solidity y sea llamada como parte de otra función en el mismo contrato. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Definición de un Call Handler @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Función mapeo -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Handlers de bloques -Además de suscribirse a eventos del contracto o calls de funciones, un subgrafo puede querer actualizar sus datos a medida que se añaden nuevos bloques a la cadena. Para ello, un subgrafo puede ejecutar una función después de cada bloque o después de los bloques que coincidan con un filtro predefinido. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Filtros admitidos @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. La ausencia de un filtro para un handler de bloque asegurará que el handler sea llamado en cada bloque. Una fuente de datos solo puede contener un handler de bloque para cada tipo de filtro.
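The trigger rules for block handlers in this section (no filter, `call`, `polling`, `once`) can be summarized in a small decision function. This is an illustrative sketch only — the `BlockFilter` type and `shouldTrigger` helper are hypothetical and not part of graph-ts or graph-node, and the exact `polling` anchor block is an assumption:

```typescript
// Illustrative model of block-handler filter semantics (hypothetical types,
// not graph-node's actual implementation).
type BlockFilter =
  | { kind: 'none' } // no filter: handler runs on every block
  | { kind: 'call' } // only blocks containing a call to the data source contract
  | { kind: 'polling'; every: number } // once every `every` blocks
  | { kind: 'once' } // a single initialization run

function shouldTrigger(
  filter: BlockFilter,
  blockNumber: number,
  startBlock: number,
  blockCallsDataSource: boolean,
): boolean {
  switch (filter.kind) {
    case 'none':
      return true
    case 'call':
      return blockCallsDataSource
    case 'polling':
      // Assumed semantics: fires at startBlock and then every `every` blocks.
      return (blockNumber - startBlock) % filter.every === 0
    case 'once':
      return blockNumber === startBlock
  }
}
```

A data source can declare at most one block handler per filter type, so at most one instance of each variant above applies to a given data source.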
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Función mapeo -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Bloques iniciales -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Notas del lanzamiento | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/es/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/es/subgraphs/developing/creating/unit-testing-framework.mdx index a9ab2a9ef384..7be3dfb08f89 100644 --- a/website/src/pages/es/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Marco de Unit Testing --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Empezando @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
### Opciones CLI @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Subgrafo de demostración +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Tutoriales en vídeo -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im Ahí vamos: ¡hemos creado nuestra primera prueba! 
👏 -Ahora, para ejecutar nuestras pruebas, simplemente necesitas ejecutar lo siguiente en la carpeta raíz de tu subgrafo: +Now, in order to run our tests, you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Cobertura de prueba -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Recursos Adicionales -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Comentario diff --git a/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx index c206beeb8fb3..a96efc430a61 100644 --- a/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Desplegando el subgráfo en múltiples redes +## Deploying the Subgraph to multiple networks -En algunos casos, querrás desplegar el mismo subgrafo en múltiples redes sin duplicar todo su código. El principal reto que conlleva esto es que las direcciones de los contratos en estas redes son diferentes. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
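A minimal sketch of this templating step in TypeScript, using a naive `{{key}}` substitution in place of the real Mustache library (the manifest fragment, network configs, and addresses are hypothetical placeholders):

```typescript
// Naive {{key}} substitution standing in for Mustache/Handlebars.
// Unknown keys are left untouched so missing config values are visible.
function renderTemplate(template: string, config: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) =>
    key in config ? config[key] : match,
  )
}

// Hypothetical manifest fragment with per-network placeholders.
const manifestTemplate = [
  'dataSources:',
  '  - kind: ethereum/contract',
  '    name: Gravity',
  '    network: {{network}}',
  '    source:',
  "      address: '{{address}}'",
  '      abi: Gravity',
].join('\n')

// One config object per target network (addresses made up for illustration).
const mainnet = { network: 'mainnet', address: '0x0000000000000000000000000000000000000001' }
const sepolia = { network: 'sepolia', address: '0x0000000000000000000000000000000000000002' }

console.log(renderTemplate(manifestTemplate, mainnet))
console.log(renderTemplate(manifestTemplate, sepolia))
```

Rendering the template once per config file yields one `subgraph.yaml` per network, which is exactly what the two `prepare:` scripts in the section above automate.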
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Política de archivo de subgrafos en Subgraph Studio +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago --The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Cada subgrafo afectado por esta política tiene una opción para recuperar la versión en cuestión. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Comprobando la salud del subgrafo +## Checking Subgraph health -Si un subgrafo se sincroniza con éxito, es una buena señal de que seguirá funcionando bien para siempre.
Sin embargo, los nuevos activadores en la red pueden hacer que tu subgrafo alcance una condición de error no probada o puede comenzar a retrasarse debido a problemas de rendimiento o problemas con los operadores de nodos. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
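A client-side sketch of interpreting the status response above (the `IndexingStatus` shape is simplified — in the real schema `chainHeadBlock` and `latestBlock` are objects with a block number field — and the helper names and lag threshold are hypothetical):

```typescript
// Simplified view of the index-node status fields queried above.
interface IndexingStatus {
  synced: boolean
  health: 'healthy' | 'unhealthy' | 'failed'
  chainHeadBlock: number // chain head seen by the node (flattened for brevity)
  latestBlock: number // latest block the Subgraph has indexed
}

// How far indexing lags behind the chain head; 0 means fully caught up.
function blocksBehind(status: IndexingStatus): number {
  return Math.max(0, status.chainHeadBlock - status.latestBlock)
}

// Hypothetical health check: healthy and within an acceptable lag window.
function isRunningWell(status: IndexingStatus, maxLag: number = 50): boolean {
  return status.health === 'healthy' && blocksBehind(status) <= maxLag
}
```

When `health` is `failed`, the `fatalError` field in the same query carries the details of the error that halted indexing.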
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx index 11e4e4c22495..29eed7358005 100644 --- a/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Crear y gestionar sus claves API para subgrafos específicos +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Compatibilidad de los Subgrafos con The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- No debe utilizar ninguna de las siguientes funciones: - - ipfs.cat & ipfs.map - - Errores no fatales - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
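Taken together, the authentication and deployment steps above amount to a short shell session; the deploy key and slug are placeholders you copy from your Subgraph details page in Subgraph Studio:

```
# Authenticate once with the deploy key from Subgraph Studio,
# then deploy. The CLI will ask for a version label afterwards.
graph auth <DEPLOY_KEY>
graph deploy <SUBGRAPH_SLUG>
```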
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Archivado Automático de Versiones de Subgrafos -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/es/subgraphs/developing/developer-faq.mdx b/website/src/pages/es/subgraphs/developing/developer-faq.mdx index 0a3bad37fd09..6bf2d3eb2199 100644 --- a/website/src/pages/es/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/es/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. ¿Qué es un subgrafo? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. ¿Puedo cambiar la cuenta de GitHub asociada con mi subgrafo? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Tienes que volver a realizar el deploy del subgrafo, pero si el ID del subgrafo (hash IPFS) no cambia, no tendrá que sincronizarse desde el principio. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Dentro de un subgrafo, los eventos se procesan siempre en el orden en que aparecen en los bloques, independientemente de que sea a través de múltiples contratos o no. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. 
Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? ¡Sí es posible! Prueba el siguiente comando, sustituyendo "organization/subgraphName" por la organización bajo la que se publica y el nombre de tu subgrafo: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_, and to a specific Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
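Returning to question 21 above: a minimal sketch of such a query, using the standard `_meta` field that Graph Node serves on every Subgraph's GraphQL endpoint, is:

```graphql
{
  _meta {
    block {
      number
    }
  }
}
```

In the response, `_meta.block.number` is the most recent block the Subgraph has processed; `_meta` also exposes `hasIndexingErrors`.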
## Miscellaneous diff --git a/website/src/pages/es/subgraphs/developing/introduction.mdx b/website/src/pages/es/subgraphs/developing/introduction.mdx index 7d4760cb4c35..facd793fde33 100644 --- a/website/src/pages/es/subgraphs/developing/introduction.mdx +++ b/website/src/pages/es/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. 
It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/es/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/es/subgraphs/developing/managing/deleting-a-subgraph.mdx index 972a4f552c25..b8c2330ca49d 100644 --- a/website/src/pages/es/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/es/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. 
- - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Los Curadores ya no podrán señalar en el subgrafo. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. 
diff --git a/website/src/pages/es/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/es/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/es/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/es/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. 
Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx index d37d8bf2ed62..67c076d0a156 100644 --- a/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publicación de un subgrafo en la Red Descentralizada +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). 
-All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Actualización de los metadatos de un subgrafo publicado +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
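The CLI publishing steps above can be sketched as a single shell session:

```
# Run from the Subgraph project root. `graph publish` opens a window
# to connect your wallet, add metadata, and pick a network.
graph codegen && graph build
graph publish
```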
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/es/subgraphs/developing/subgraphs.mdx b/website/src/pages/es/subgraphs/developing/subgraphs.mdx index f7046bd367c7..97429af0208d 100644 --- a/website/src/pages/es/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/es/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgrafos ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
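To make "Index & Query" concrete, here is a hypothetical query against a published Subgraph — the `tokens` entity and its fields are invented for illustration, and the real names come from the Subgraph's own schema:

```graphql
# Hypothetical entity and fields; real names come from the
# Subgraph's schema.graphql.
{
  tokens(first: 5) {
    id
    owner
  }
}
```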
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Ciclo de vida de un Subgrafo -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/es/subgraphs/explorer.mdx b/website/src/pages/es/subgraphs/explorer.mdx index a64b3d4188ae..e7d1980ac05d 100644 --- a/website/src/pages/es/subgraphs/explorer.mdx +++ b/website/src/pages/es/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Descripción -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Señalar/dejar de señalar un subgrafo +- Signal/Un-signal on Subgraphs - Ver más detalles como gráficos, ID de implementación actual y otros metadatos -- Cambiar de versión para explorar iteraciones pasadas del subgrafo -- Consultar subgrafos a través de GraphQL -- Probar subgrafos en el playground -- Ver los Indexadores que están indexando en un subgrafo determinado +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Estadísticas de subgrafo (asignaciones, Curadores, etc.) -- Ver la entidad que publicó el subgrafo +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
 **Specifics**
@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s
 - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
 - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
 - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
 - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
 - Max Delegation Capacity: la cantidad máxima de participación delegada que el Indexador puede aceptar de forma productiva. Un exceso de participación delegada no puede utilizarse para asignaciones o cálculos de recompensas.
 - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici
 #### 2. Curadores
-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve.
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Pestaña de subgrafos -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Pestaña de indexación -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Esta sección también incluirá detalles sobre las recompensas netas que obtienes como Indexador y las tarifas netas que recibes por cada consulta. 
 Verás las siguientes métricas:
@@ -223,13 +223,13 @@ Con los botones situados al lado derecho de la tabla, puedes administrar tu dele
 ### Pestaña de curación
-En la pestaña Curación, encontrarás todos los subgrafos a los que estás señalando (lo que te permite recibir tarifas de consulta). La señalización permite a los Curadores destacar un subgrafo importante y fiable a los Indexadores, dándoles a entender que debe ser indexado.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, and therefore should be indexed.
 Dentro de esta pestaña, encontrarás una descripción general de:
-- Todos los subgrafos que estás curando con detalles de la señalización actual
-- Participaciones totales en cada subgrafo
-- Recompensas de consulta por cada subgrafo
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
 - Actualizaciones de los subgrafos
 ![Explorer Image 14](/img/Curation-Stats.png)
diff --git a/website/src/pages/es/subgraphs/guides/_meta.js b/website/src/pages/es/subgraphs/guides/_meta.js
index 37e18bc51651..a1bb04fb6d3f 100644
--- a/website/src/pages/es/subgraphs/guides/_meta.js
+++ b/website/src/pages/es/subgraphs/guides/_meta.js
@@ -1,4 +1,5 @@
 export default {
+  'subgraph-composition': '',
   'subgraph-debug-forking': '',
   near: '',
   arweave: '',
diff --git a/website/src/pages/es/subgraphs/guides/arweave.mdx b/website/src/pages/es/subgraphs/guides/arweave.mdx
index 08e6c4257268..71c58f8afabd 100644
--- a/website/src/pages/es/subgraphs/guides/arweave.mdx
+++ b/website/src/pages/es/subgraphs/guides/arweave.mdx
@@ -1,50 +1,50 @@
 ---
-title: Building Subgraphs on Arweave
+title: Construyendo Subgrafos en Arweave
 ---
 > Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol)
with any questions about building Arweave Subgraphs! -In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +En esta guía, aprenderás a construir y deployar subgrafos para indexar la blockchain de Arweave. -## What is Arweave? +## ¿Qué es Arweave? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +El protocolo Arweave permite a los developers almacenar datos de forma permanente y esa es la principal diferencia entre Arweave e IPFS, donde IPFS carece de la característica; permanencia, y los archivos almacenados en Arweave no pueden ser modificados o eliminados. -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check: +Arweave ya ha construido numerosas bibliotecas para integrar el protocolo en varios lenguajes de programación. Para más información puede consultar: - [Arwiki](https://arwiki.wiki/#/en/main) - [Arweave Resources](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## ¿Qué son los subgrafos Arweave? The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). [Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. -## Building an Arweave Subgraph +## Construcción de un subgrafo Arweave -To be able to build and deploy Arweave Subgraphs, you need two packages: +Para poder construir y deployar subgrafos Arweave, necesita dos paquetes: 1. 
`@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. 2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. -## Subgraph's components +## Componentes del subgrafo There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +Define las fuentes de datos de interés y cómo deben ser procesadas. Arweave es un nuevo tipo de fuente de datos. ### 2. Schema - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +Aquí defines qué datos quieres poder consultar después de indexar tu Subgrafo usando GraphQL. Esto es en realidad similar a un modelo para una API, donde el modelo define la estructura de un cuerpo de solicitud. The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +Esta es la lógica que determina cómo los datos deben ser recuperados y almacenados cuando alguien interactúa con las fuentes de datos que estás escuchando. Los datos se traducen y se almacenan basándose en el esquema que has listado. 
During Subgraph development there are two key commands: @@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## Definición de manifiesto del subgrafo The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: @@ -84,24 +84,24 @@ dataSources: - Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Las fuentes de datos de Arweave introducen un campo opcional "source.owner", que es la clave pública de una billetera Arweave -Arweave data sources support two types of handlers: +Las fuentes de datos de Arweave admiten dos tipos de handlers: - `blockHandlers` - Run on every new Arweave block. No source.owner is required. - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` -> The source.owner can be the owner's address, or their Public Key. - -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> El source.owner puede ser la dirección del propietario o su clave pública. +> +> Las transacciones son los bloques de construcción de la permaweb de Arweave y son objetos creados por los usuarios finales. 
+> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. -## Schema Definition +## Definición de esquema Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -## AssemblyScript Mappings +## Asignaciones de AssemblyScript The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -158,11 +158,11 @@ Once your Subgraph has been created on your Subgraph Studio dashboard, you can d graph deploy --access-token ``` -## Querying an Arweave Subgraph +## Consultando un subgrafo de Arweave The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## Subgrafos de ejemplo Here is an example Subgraph for reference: @@ -174,19 +174,19 @@ Here is an example Subgraph for reference: No, a Subgraph can only support data sources from one chain/network. -### Can I index the stored files on Arweave? +### ¿Puedo indexar los archivos almacenados en Arweave? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +Actualmente, The Graph sólo indexa Arweave como blockchain (sus bloques y transacciones). ### Can I identify Bundlr bundles in my Subgraph? -This is not currently supported. +Actualmente no se admite. -### How can I filter transactions to a specific account? +### ¿Cómo puedo filtrar las transacciones a una cuenta específica? -The source.owner can be the user's public key or account address. +El source.owner puede ser la clave pública del usuario o la dirección de la cuenta. -### What is the current encryption format? 
+### ¿Cuál es el formato actual de encriptación? Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). diff --git a/website/src/pages/es/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/es/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..d0a60dc8ee83 100644 --- a/website/src/pages/es/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/es/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Descripción -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. 
Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +o ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. 
Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/es/subgraphs/guides/enums.mdx b/website/src/pages/es/subgraphs/guides/enums.mdx index 9f55ae07c54b..8a3da763d6e2 100644 --- a/website/src/pages/es/subgraphs/guides/enums.mdx +++ b/website/src/pages/es/subgraphs/guides/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## Recursos Adicionales For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
diff --git a/website/src/pages/es/subgraphs/guides/grafting.mdx b/website/src/pages/es/subgraphs/guides/grafting.mdx index d9abe0e70d2a..3717e35b3d8a 100644 --- a/website/src/pages/es/subgraphs/guides/grafting.mdx +++ b/website/src/pages/es/subgraphs/guides/grafting.mdx @@ -1,24 +1,24 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: Reemplazar un contrato y mantener su historia con el grafting --- In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. -## What is Grafting? +## ¿Qué es el Grafting? Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. 
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Agrega o elimina tipos de entidades +- Elimina los atributos de los tipos de entidad +- Agrega atributos anulables a los tipos de entidad +- Convierte los atributos no anulables en atributos anulables +- Añade valores a los enums +- Agrega o elimina interfaces +- Cambia para qué tipos de entidades se implementa una interfaz -For more information, you can check: +Para más información, puedes consultar: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) @@ -40,7 +40,7 @@ Grafting is a powerful feature that allows you to "graft" one Subgraph onto anot By adhering to these guidelines, you minimize risks and ensure a smoother migration process. -## Building an Existing Subgraph +## Construcción de un subgrafo existente Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: @@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h > Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). -## Subgraph Manifest Definition +## Definición de manifiesto del subgrafo The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: @@ -83,7 +83,7 @@ dataSources: - The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. -## Grafting Manifest Definition +## Definición del manifiesto de grafting Grafting requires adding two new items to the original Subgraph manifest: @@ -101,7 +101,7 @@ graft: The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting -## Deploying the Base Subgraph +## Deploy del subgrafo base 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` 2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +Devuelve algo como esto: ``` { @@ -140,9 +140,9 @@ It returns something like this: Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. -## Deploying the Grafting Subgraph +## Deploy del subgrafo grafting -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +El subgraph.yaml de sustitución del graft tendrá una nueva dirección de contrato. Esto podría ocurrir cuando actualices tu dApp, vuelvas a deployar un contrato, etc. 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` 2. Create a new manifest. 
The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +Debería devolver lo siguiente: ``` { @@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph- Congrats! You have successfully grafted a Subgraph onto another Subgraph. -## Additional Resources +## Recursos Adicionales If you want more experience with grafting, here are a few examples for popular contracts: diff --git a/website/src/pages/es/subgraphs/guides/near.mdx b/website/src/pages/es/subgraphs/guides/near.mdx index e78a69eb7fa2..f22a497db7e1 100644 --- a/website/src/pages/es/subgraphs/guides/near.mdx +++ b/website/src/pages/es/subgraphs/guides/near.mdx @@ -1,10 +1,10 @@ --- -title: Building Subgraphs on NEAR +title: Construcción de subgrafos en NEAR --- This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). -## What is NEAR? +## ¿Qué es NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. @@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul Subgraphs are event-based, which means that they listen for and then process onchain events. 
There are currently two types of handlers supported for NEAR Subgraphs: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- Handlers de bloques: se ejecutan en cada nuevo bloque +- Handlers de recibos: se realizan cada vez que se ejecuta un mensaje en una cuenta específica [From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> Un recibo es el único objeto procesable del sistema. Cuando hablamos de "procesar una transacción" en la plataforma NEAR, esto significa eventualmente "aplicar recibos" en algún momento. -## Building a NEAR Subgraph +## Construcción de un subgrafo NEAR `@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. @@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -### Subgraph Manifest Definition +### Definición de manifiesto del subgrafo The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: @@ -85,16 +85,16 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +Las fuentes de datos NEAR admiten dos tipos de handlers: - `blockHandlers`: run on every new NEAR block. No `source.account` is required. - `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. 
Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). -### Schema Definition +### Definición de esquema Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -### AssemblyScript Mappings +### Asignaciones de AssemblyScript The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -169,7 +169,7 @@ Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/g This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. -## Deploying a NEAR Subgraph +## Deployando un subgrafo NEAR Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). @@ -198,7 +198,7 @@ graph auth graph deploy ``` -### Local Graph Node (based on default configuration) +### Graph Node Local (basado en la configuración predeterminada) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 @@ -216,21 +216,21 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can } ``` -### Indexing NEAR with a Local Graph Node +### Indexación NEAR con un Graph Node local -Running a Graph Node that indexes NEAR has the following operational requirements: +Ejecutar un Graph Node que indexa NEAR tiene los siguientes requisitos operativos: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- NEAR Indexer Framework con instrumentación Firehose +- Componente(s) NEAR Firehose +- Graph Node con endpoint de Firehose configurado -We will provide more information on running the above components soon. +Pronto proporcionaremos más información sobre cómo ejecutar los componentes anteriores. -## Querying a NEAR Subgraph +## Consultando un subgrafo NEAR The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## Subgrafos de ejemplo Here are some example Subgraphs for reference: @@ -240,7 +240,7 @@ Here are some example Subgraphs for reference: ## FAQ -### How does the beta work? +### ¿Cómo funciona la beta? NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! @@ -250,9 +250,9 @@ No, a Subgraph can only support data sources from one chain/network. ### Can Subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +Actualmente, solo se admiten los activadores de Bloque y Recibo. Estamos investigando activadores para llamadas a funciones a una cuenta específica. 
También estamos interesados en admitir activadores de eventos, una vez que NEAR tenga soporte nativo para eventos. -### Will receipt handlers trigger for accounts and their sub-accounts? +### ¿Se activarán los handlers de recibos para las cuentas y sus subcuentas? If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: @@ -264,11 +264,11 @@ accounts: ### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? -This is not supported. We are evaluating whether this functionality is required for indexing. +Esto no es compatible. Estamos evaluando si esta funcionalidad es necesaria para la indexación. ### Can I use data source templates in my NEAR Subgraph? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +Esto no es compatible actualmente. Estamos evaluando si esta funcionalidad es necesaria para la indexación. ### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? @@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. In the interim, y If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. 
-## References +## Referencias - [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/es/subgraphs/guides/polymarket.mdx b/website/src/pages/es/subgraphs/guides/polymarket.mdx index 74efe387b0d7..2edab84a377b 100644 --- a/website/src/pages/es/subgraphs/guides/polymarket.mdx +++ b/website/src/pages/es/subgraphs/guides/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your Subgraph. +The visual query editor helps you test sample queries from your subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. 
@@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). 
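The API-key querying flow described above can be sketched end-to-end. This is a minimal sketch, assuming the standard Graph gateway URL pattern (`/api/<key>/subgraphs/id/<id>`); `"demo-key"` is a placeholder for an API key created in Subgraph Studio, and the `_meta` query is used because it exists on every subgraph — swap in fields from Polymarket's schema for real queries:

```typescript
// Hedged sketch: build (but don't send) a request against The Graph gateway.
// The subgraph ID below is Polymarket's, taken from the Graph Explorer URL.
const POLYMARKET_ID = "Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp";

function buildGatewayRequest(apiKey: string, query: string) {
  return {
    url: `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${POLYMARKET_ID}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
    },
  };
}

// `_meta` is available on every subgraph and returns indexing status.
const req = buildGatewayRequest("demo-key", "{ _meta { block { number } } }");
// Send with: fetch(req.url, req.options).then((r) => r.json())
```

Keeping the request construction in one small helper makes it easy to swap the API key source later (for example, reading it server-side as the next guide recommends).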
diff --git a/website/src/pages/es/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/es/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..772a00ad317d 100644 --- a/website/src/pages/es/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/es/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## Descripción We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). @@ -120,4 +120,4 @@ Start our Next.js application using `npm run dev`. Verify that the server compon ### Conclusion -By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. +By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/cookbook/upgrading-a-subgraph/#securing-your-api-key) to increase your API key security even further. 
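The server-side pattern the conclusion describes can be sketched roughly as follows. The env var name `GRAPH_API_KEY`, the route path, and the helper name are illustrative assumptions, not the guide's actual code — the point is only that the key is read on the server and never shipped to the browser:

```typescript
// Pure helper: the API key appears only in the server-built URL, never in
// client-bundled code. Names here are illustrative assumptions.
function upstreamUrl(apiKey: string, subgraphId: string): string {
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`;
}

// In a Next.js App Router route handler (e.g. app/api/graph/route.ts),
// the key would be read from the server environment, roughly:
//
//   export async function POST(req: Request) {
//     const { query } = await req.json();
//     const res = await fetch(
//       upstreamUrl(process.env.GRAPH_API_KEY!, SUBGRAPH_ID),
//       {
//         method: "POST",
//         headers: { "Content-Type": "application/json" },
//         body: JSON.stringify({ query }),
//       },
//     );
//     return Response.json(await res.json());
//   }

const example = upstreamUrl("server-only-key", "SomeSubgraphId");
```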
diff --git a/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..3f5fc5e44cca
--- /dev/null
+++ b/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introducción
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Improve your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. 
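As a rough illustration of what makes the dependent Subgraph different, its manifest declares the source Subgraph itself as a data source (`kind: subgraph`) and triggers handlers on the source's entities. The exact values below — deployment ID, `apiVersion`, entity and handler names — are placeholders for illustration, not taken from the example repository:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # the source Subgraph itself is the data source
    name: SourceSubgraph
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentId' # placeholder deployment ID
      startBlock: 0
    mapping:
      apiVersion: 0.0.9 # assumed; check the graph-node v0.37.0 release notes
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - Block
      handlers:
        - handler: handleBlock # runs when the source emits a `Block` entity
          entity: Block
```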
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See the release notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **data sources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed from them cannot themselves use aggregations directly
+- Developers cannot compose an onchain data source with a Subgraph data source (i.e. you can’t use normal event handlers, call handlers, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Comenzar
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. 
However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Recursos Adicionales + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
diff --git a/website/src/pages/es/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/es/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..e979d752b4b7 100644 --- a/website/src/pages/es/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/es/subgraphs/guides/subgraph-debug-forking.mdx @@ -1,26 +1,26 @@ --- -title: Quick and Easy Subgraph Debugging Using Forks +title: Debugging rápido y sencillo de subgrafos mediante Forks --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! -## Ok, what is it? +## ¿Bien, qué es? -**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). 
-In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. -## What?! How? +## ¡¿Qué?! ¿Cómo? -When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. -In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. -## Please, show me some code! +## ¡Por favor, muéstrame algo de código! -To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. 
+To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. -Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: +Estos son los handlers definidos para indexar `Gravatar`s, sin errores de ningún tipo: ```tsx export function handleNewGravatar(event: NewGravatar): void { @@ -44,43 +44,43 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. -The usual way to attempt a fix is: +La forma habitual de intentar una solución es: -1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). -2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +1. Realiza un cambio en la fuente de mapeos, que crees que resolverá el problema (aunque sé que no lo hará). +2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +3. Espera a que se sincronice. +4. Si se vuelve a romper vuelve a 1, de lo contrario: ¡Hurra! -It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ +De hecho, es bastante familiar para un proceso de depuración ordinario, pero hay un paso que ralentiza terriblemente el proceso: _3. Espera a que se sincronice._ -Using **Subgraph forking** we can essentially eliminate this step. 
Here is how it looks: +Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +1. Realiza un cambio en la fuente de mapeos, que crees que resolverá el problema. +2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +3. Si se vuelve a romper, vuelve a 1, de lo contrario: ¡Hurra! -Now, you may have 2 questions: +Ahora, puedes tener 2 preguntas: -1. fork-base what??? -2. Forking who?! +1. fork-base que??? +2. Bifurcando a quien?! -And I answer: +Y yo respondo: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +2. Bifurcar es fácil, no hay necesidad de sudar: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! -So, here is what I do: +Entonces, esto es lo que hago: -1. 
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -90,12 +90,12 @@ $ cargo run -p graph-node --release -- \ --fork-base https://api.thegraph.com/subgraphs/id/ ``` -2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +2. Después de una inspección cuidadosa, noté que hay una falta de coincidencia en las representaciones de `id` utilizadas al indexar `Gravatar` en mis dos handlers. Mientras que `handleNewGravatar` lo convierte en hexadecimal (`event.params.id.toHex()`), `handleUpdatedGravatar` usa un int32 (`event. params.id.toI32()`) que hace que `handleUpdatedGravatar` entre en pánico con "¡Gravatar no encontrado!". Hago que ambos conviertan el `id` en un hexadecimal. +3. 
After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/es/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/es/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..59b33568a1f2 100644 --- a/website/src/pages/es/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/es/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,29 +1,29 @@ --- -title: Safe Subgraph Code Generator +title: Generador de código de subgrafo seguro --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. -## Why integrate with Subgraph Uncrashable? +## ¿Por qué integrarse con Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. 
+- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. +- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. **Key Features** -- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. 
This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. +- El marco también incluye una forma (a través del archivo de configuración) para crear funciones de establecimiento personalizadas, pero seguras, para grupos de variables de entidad. De esta forma, es imposible que el usuario cargue/utilice una entidad gráfica obsoleta y también es imposible olvidarse de guardar o configurar una variable requerida por la función. -- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. -Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. +Subgraph Uncrashable se puede ejecutar como un indicador opcional mediante el comando codegen Graph CLI. ```sh graph codegen -u [options] [] ``` -Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. +Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. 
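To illustrate the kind of helper Subgraph Uncrashable generates, here is a hypothetical load-or-create accessor with sane defaults. The tool's real generated code differs, but the pattern is the one described above: never return null, always initialize every field, so a "Gravatar not found!"-style crash cannot happen:

```typescript
// Hypothetical sketch of an "uncrashable" accessor. Instead of a load that can
// return null and crash later, the helper always returns a fully initialized
// entity. The entity shape and default values are illustrative only.
interface Gravatar {
  id: string;
  owner: string;
  displayName: string;
}

const store = new Map<string, Gravatar>();

// Safe accessor: never returns null, always initializes every field.
function safeLoadGravatar(id: string): Gravatar {
  const existing = store.get(id);
  if (existing) return existing;
  const fresh: Gravatar = { id, owner: "0x0", displayName: "" }; // sane defaults
  store.set(id, fresh); // saved immediately, so it can never be "not found"
  return fresh;
}
```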
diff --git a/website/src/pages/es/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/es/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..ec6a7079ee75 100644 --- a/website/src/pages/es/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/es/subgraphs/guides/transfer-to-the-graph.mdx @@ -1,20 +1,20 @@ --- -title: Transfer to The Graph +title: Transferir a The Graph --- -Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same Subgraph that your apps already use with zero-downtime migration. +- Use the same subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. 
Set Up Your Studio Environment @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a Subgraph in Studio using the CLI: +Use the following command to create a subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. 
+You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. -#### Example +#### Ejemplo -[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this Subgraph is: +The query URL for this subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### Recursos Adicionales -- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). 
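The gateway query URL described above can be exercised with a plain HTTP POST. A minimal sketch of building the request — the API key and subgraph ID below are placeholders you must replace with your own:

```typescript
// Sketch of querying a published subgraph through its gateway query URL.
// GraphQL queries are sent as a POST with a JSON body of the form { query }.
// apiKey and subgraphId are placeholders, not real credentials.
function buildGatewayRequest(apiKey: string, subgraphId: string, query: string) {
  return {
    url: `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
    },
  };
}

// Usage sketch (not executed here): pass the parts straight to fetch.
// const req = buildGatewayRequest("your-api-key", "HdVdERF...", "{ _meta { block { number } } }");
// const res = await fetch(req.url, req.options);
```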
diff --git a/website/src/pages/es/subgraphs/querying/best-practices.mdx b/website/src/pages/es/subgraphs/querying/best-practices.mdx index c3340b65f4b2..eb9567990435 100644 --- a/website/src/pages/es/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/es/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Mejores Prácticas para Consultas The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Manejo de subgrafos cross-chain: Consulta de varios subgrafos en una sola consulta +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Resultado completamente tipificado @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/es/subgraphs/querying/from-an-application.mdx b/website/src/pages/es/subgraphs/querying/from-an-application.mdx index b36ffabaa3e6..df6f5f381dda 100644 --- a/website/src/pages/es/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/es/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Consultar desde una Aplicación +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. @@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Manejo de subgrafos cross-chain: Consulta de varios subgrafos en una sola consulta +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Resultado completamente tipificado @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Paso 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Paso 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Paso 1 diff --git a/website/src/pages/es/subgraphs/querying/graph-client/README.md b/website/src/pages/es/subgraphs/querying/graph-client/README.md index 416cadc13c6f..b6e6726bbed6 100644 --- a/website/src/pages/es/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/es/subgraphs/querying/graph-client/README.md @@ -14,25 +14,25 @@ This library is intended to simplify the network aspect of data consumption for > The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! 
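Graph Client's feature table lists several fetch strategies (timeout, retry, fallback, race, highestValue). The fallback idea can be sketched in a few lines; synchronous source functions stand in here for the asynchronous GraphQL executors the real library uses, so this is an illustration of the strategy, not its actual implementation:

```typescript
// Minimal sketch of a "fallback" fetch strategy: try each source in order and
// return the first successful result; only throw if every source fails.
type Source<T> = () => T;

function fallback<T>(sources: Source<T>[]): T {
  let lastError: unknown = new Error("no sources configured");
  for (const source of sources) {
    try {
      return source(); // first source that succeeds wins
    } catch (err) {
      lastError = err; // remember the failure and try the next source
    }
  }
  throw lastError; // every source failed
}
```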
-| Status | Feature | Notes | +| Estado | Feature | Notes | | :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | 
[Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## Empezando You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### Ejemplos You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/es/subgraphs/querying/graph-client/live.md b/website/src/pages/es/subgraphs/querying/graph-client/live.md index e6f726cb4352..4ccf6ee7eda1 100644 --- a/website/src/pages/es/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/es/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## Empezando Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/es/subgraphs/querying/graphql-api.mdx b/website/src/pages/es/subgraphs/querying/graphql-api.mdx index 018abd046e72..374291ce0a88 100644 --- a/website/src/pages/es/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/es/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). 
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -Esto puede ser útil si buscas obtener solo las entidades que han cambiado, por ejemplo, desde la última vez que realizaste una encuesta. O, alternativamente, puede ser útil para investigar o depurar cómo cambian las entidades en tu subgrafo (si se combina con un filtro de bloque, puedes aislar solo las entidades que cambiaron en un bloque específico). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Consultas de Búsqueda de Texto Completo -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. 
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Metadatos del subgrafo -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. 
This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -Si se proporciona un bloque, los metadatos corresponden a ese bloque; de lo contrario, se utiliza el bloque indexado más reciente. Si es proporcionado, el bloque debe ser posterior al bloque de inicio del subgrafo y menor o igual que el bloque indexado más reciente. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ Si se proporciona un bloque, los metadatos corresponden a ese bloque; de lo cont - hash: el hash del bloque - número: el número de bloque -- timestamp: la marca de tiempo del bloque, en caso de estar disponible (actualmente solo está disponible para subgrafos que indexan redes EVM) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/es/subgraphs/querying/introduction.mdx b/website/src/pages/es/subgraphs/querying/introduction.mdx index ae3afee41ded..40935a799eed 100644 --- a/website/src/pages/es/subgraphs/querying/introduction.mdx +++ b/website/src/pages/es/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Descripción -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. 
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx index cdbad6cb7c81..50c2fbab7883 100644 --- a/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: Administración de tus claves API +title: Managing API keys --- ## Descripción -API keys are needed to query subgraphs. 
They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Cantidad de GRT gastado 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Ver y administrar los nombres de dominio autorizados a utilizar tu clave API - - Asignar subgrafos que puedan ser consultados con tu clave API + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/es/subgraphs/querying/python.mdx b/website/src/pages/es/subgraphs/querying/python.mdx index d51fd5deb007..4f2ad9280b58 100644 --- a/website/src/pages/es/subgraphs/querying/python.mdx +++ b/website/src/pages/es/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). 
It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out Subgrounds with the following query. It grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). 
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need to update the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
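Since the two ID styles differ only in the gateway URL path, the trade-off described above can be sketched as a pair of helpers. This is a hypothetical sketch: the helper names and API key are made up, and the `deployments/id` path segment is an assumption to verify against the current gateway documentation (only the `subgraphs/id` form appears on this page).

```python
# Hypothetical helpers contrasting the two ID styles; the gateway base
# matches the example endpoint shown on this page.
GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api"

def subgraph_endpoint(api_key: str, subgraph_id: str) -> str:
    """URL keyed by Subgraph ID: resolves to the latest synced version."""
    return f"{GATEWAY}/{api_key}/subgraphs/id/{subgraph_id}"

def deployment_endpoint(api_key: str, deployment_id: str) -> str:
    """URL pinned to one version. The 'deployments/id' segment is an
    assumption -- check the gateway docs for the exact form."""
    return f"{GATEWAY}/{api_key}/deployments/id/{deployment_id}"

# Placeholder key; the Subgraph ID is the example from this page.
print(subgraph_endpoint("<api-key>", "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"))
```

Which helper you call is effectively the choice described above: pinning to a Deployment ID trades automatic upgrades for reproducibility.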
-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/es/subgraphs/quick-start.mdx b/website/src/pages/es/subgraphs/quick-start.mdx index 4ccb601e3948..57d13e479ba2 100644 --- a/website/src/pages/es/subgraphs/quick-start.mdx +++ b/website/src/pages/es/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Comienzo Rapido --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. 
Instala the graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. 
- **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -Ve la siguiente captura para un ejemplo de que debes de esperar cuando inicializes tu subgrafo: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Una vez escrito tu subgrafo, ejecuta los siguientes comandos: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. 
The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. 
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
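Once a Subgraph is published, querying it is a plain GraphQL-over-HTTP POST to its query URL. A minimal offline sketch with only the Python standard library; the endpoint is a placeholder, and the `_meta` block used here is the auto-generated metadata object every Subgraph exposes:

```python
import json
import urllib.request

def make_query_request(query_url: str, query: str) -> urllib.request.Request:
    """Wrap a GraphQL query in the JSON POST body the gateway expects.
    POST is used because GET requests to the query URL can return 405."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        query_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder URL; `_meta` is available on every Subgraph.
req = make_query_request(
    "https://gateway-arbitrum.network.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>",
    "{ _meta { block { number } hasIndexingErrors } }",
)
# urllib.request.urlopen(req) would send it; omitted to keep the sketch offline.
```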
-#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). 
+For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/es/substreams/developing/dev-container.mdx b/website/src/pages/es/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/es/substreams/developing/dev-container.mdx +++ b/website/src/pages/es/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. 
For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/es/substreams/developing/sinks.mdx b/website/src/pages/es/substreams/developing/sinks.mdx index 3900895e2871..44e6368c9c7b 100644 --- a/website/src/pages/es/substreams/developing/sinks.mdx +++ b/website/src/pages/es/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,14 +8,14 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks > Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed. - [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. +- [Subgraph](./sps/introduction.mdx): Configure an API to meet your data needs and host it on The Graph Network. - [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. - [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. 
- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks. @@ -26,7 +26,7 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | +| Nombre | Soporte | Maintainer | Source Code | | --- | --- | --- | --- | | SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | | Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | @@ -40,7 +40,7 @@ Sinks are integrations that allow you to send the extracted data to different de ### Community -| Name | Support | Maintainer | Source Code | +| Nombre | Soporte | Maintainer | Source Code | | --- | --- | --- | --- | | MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | | Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | diff --git a/website/src/pages/es/substreams/developing/solana/account-changes.mdx b/website/src/pages/es/substreams/developing/solana/account-changes.mdx index b7fd1cc260b2..87f3d384f9e2 100644 --- a/website/src/pages/es/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/es/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. 
Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/es/substreams/developing/solana/transactions.mdx b/website/src/pages/es/substreams/developing/solana/transactions.mdx index 17c285b7f53c..9ec56a15e187 100644 --- a/website/src/pages/es/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/es/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgrafo 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2.
Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/es/substreams/introduction.mdx b/website/src/pages/es/substreams/introduction.mdx index 1b9de410b165..a952aeeb594b 100644 --- a/website/src/pages/es/substreams/introduction.mdx +++ b/website/src/pages/es/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/es/substreams/publishing.mdx b/website/src/pages/es/substreams/publishing.mdx index 169f12bff0ef..5787b254df98 100644 --- a/website/src/pages/es/substreams/publishing.mdx +++ b/website/src/pages/es/substreams/publishing.mdx @@ -1,6 +1,6 @@ --- title: Publishing a Substreams Package -sidebarTitle: Publishing +sidebarTitle: Publicando --- Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). 
@@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/es/substreams/quick-start.mdx b/website/src/pages/es/substreams/quick-start.mdx index 897daa5fe502..4e6ec88c0c0e 100644 --- a/website/src/pages/es/substreams/quick-start.mdx +++ b/website/src/pages/es/substreams/quick-start.mdx @@ -1,5 +1,5 @@ --- -title: Substreams Quick Start +title: Introducción rápida a Substreams sidebarTitle: Comienzo Rapido --- diff --git a/website/src/pages/es/supported-networks.mdx b/website/src/pages/es/supported-networks.mdx index 4e106880f4a7..93a003ce8005 100644 --- a/website/src/pages/es/supported-networks.mdx +++ b/website/src/pages/es/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. 
+- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. 
diff --git a/website/src/pages/es/token-api/_meta-titles.json b/website/src/pages/es/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/es/token-api/_meta-titles.json +++ b/website/src/pages/es/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/es/token-api/_meta.js b/website/src/pages/es/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/es/token-api/_meta.js +++ b/website/src/pages/es/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/es/token-api/faq.mdx b/website/src/pages/es/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/es/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
+ +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/es/token-api/mcp/claude.mdx b/website/src/pages/es/token-api/mcp/claude.mdx index 0da8f2be031d..ae0f71aa800b 100644 --- a/website/src/pages/es/token-api/mcp/claude.mdx +++ b/website/src/pages/es/token-api/mcp/claude.mdx @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## Configuración Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/es/token-api/mcp/cline.mdx b/website/src/pages/es/token-api/mcp/cline.mdx index ab54c0c8f6f0..085427f14744 100644 --- a/website/src/pages/es/token-api/mcp/cline.mdx +++ b/website/src/pages/es/token-api/mcp/cline.mdx @@ -10,9 +10,9 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## Configuración Create or edit your `cline_mcp_settings.json` file. diff --git a/website/src/pages/es/token-api/mcp/cursor.mdx b/website/src/pages/es/token-api/mcp/cursor.mdx index 658108d1337b..70e68aaf8d33 100644 --- a/website/src/pages/es/token-api/mcp/cursor.mdx +++ b/website/src/pages/es/token-api/mcp/cursor.mdx @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## Configuración Create or edit your `~/.cursor/mcp.json` file. 
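For reference, a `~/.cursor/mcp.json` sketch would mirror the `claude_desktop_config.json` shape shown earlier in this diff (same `@pinax/mcp` command and SSE URL); the `token-api` server name is arbitrary and `<your-access-token>` is a placeholder for the token generated on The Graph Market:

```json
{
  "mcpServers": {
    "token-api": {
      "command": "npx",
      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
      "env": {
        "ACCESS_TOKEN": "<your-access-token>"
      }
    }
  }
}
```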
diff --git a/website/src/pages/es/token-api/quick-start.mdx b/website/src/pages/es/token-api/quick-start.mdx index 4653c3d41ac6..8488268e1356 100644 --- a/website/src/pages/es/token-api/quick-start.mdx +++ b/website/src/pages/es/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Comienzo Rapido --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/fr/about.mdx b/website/src/pages/fr/about.mdx index 0740a57e71c5..1cce1a4218ea 100644 --- a/website/src/pages/fr/about.mdx +++ b/website/src/pages/fr/about.mdx @@ -30,25 +30,25 @@ Les spécificités de la blockchain, comme la finalité des transactions, les r ## The Graph apporte une solution -The Graph répond à ce défi grâce à un protocole décentralisé qui indexe les données de la blockchain et permet de les interroger de manière efficace et performantes. Ces API (appelées "subgraphs" indexés) peuvent ensuite être interrogées via une API standard GraphQL. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Aujourd'hui, il existe un protocole décentralisé soutenu par l'implémentation open source de [Graph Node](https://github.com/graphprotocol/graph-node) qui permet ce processus. ### Comment fonctionne The Graph⁠ -Indexer les données de la blockchain est une tâche complexe, mais The Graph la simplifie. Il apprend à indexer les données d'Ethereum en utilisant des subgraphs. Les subgraphs sont des API personnalisées construites sur les données de la blockchain qui extraient, traitent et stockent ces données pour qu'elles puissent être interrogées facilement via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Spécificités⁠ -- The Graph utilise des descriptions de subgraph, qui sont connues sous le nom de "manifeste de subgraph" à l'intérieur du subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- Ce manifeste définit les contrats intelligents intéressants pour un subgraph, les événements spécifiques à surveiller au sein de ces contrats, et la manière de mapper les données de ces événements aux données que The Graph stockera dans sa base de données. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- Lors de la création d'un subgraph, vous devez rédiger ce manifeste. +- When creating a Subgraph, you need to write a Subgraph manifest. -- Une fois le `manifeste du subgraph` écrit, vous pouvez utiliser l'outil en ligne de commande Graph CLI pour stocker la définition en IPFS et demander à un Indexeur de commencer à indexer les données pour ce subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -Le schéma ci-dessous illustre plus en détail le flux de données après le déploiement d'un manifeste de subgraph avec des transactions Ethereum. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![Un graphique expliquant comment The Graph utilise Graph Node pour répondre aux requêtes des consommateurs de données](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ La description des étapes du flux : 1.
Une dapp ajoute des données à Ethereum via une transaction sur un contrat intelligent. 2. Le contrat intelligent va alors produire un ou plusieurs événements lors du traitement de la transaction. -3. Parallèlement, Le nœud de The Graph scanne continuellement Ethereum à la recherche de nouveaux blocs et de nouvelles données intéressantes pour votre subgraph. -4. The Graph Node trouve alors les événements Ethereum d'intérêt pour votre subgraph dans ces blocs et vient exécuter les corrélations correspondantes que vous avez fournies. Le gestionnaire de corrélation se définit comme un module WASM qui crée ou met à jour les entités de données que le nœud de The Graph stocke en réponse aux événements Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Le dapp interroge le Graph Node pour des données indexées à partir de la blockchain, à l'aide du [point de terminaison GraphQL](https://graphql.org/learn/) du noeud. À son tour, le Graph Node traduit les requêtes GraphQL en requêtes pour sa base de données sous-jacente afin de récupérer ces données, en exploitant les capacités d'indexation du magasin. Le dapp affiche ces données dans une interface utilisateur riche pour les utilisateurs finaux, qui s'en servent pour émettre de nouvelles transactions sur Ethereum. Le cycle se répète. ## Les Étapes suivantes -Les sections suivantes proposent une exploration plus approfondie des subgraphs, de leur déploiement et de la manière d'interroger les données. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. 
-Avant de créer votre propre subgraph, il est conseillé de visiter Graph Explorer et d'examiner certains des subgraphs déjà déployés. Chaque page de subgraph comprend un playground (un espace de test) GraphQL, vous permettant d'interroger ses données. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/fr/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/fr/archived/arbitrum/arbitrum-faq.mdx index b2f6d7382c61..3aeb3de89d39 100644 --- a/website/src/pages/fr/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/fr/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ Grâce à la mise à l'échelle de The Graph sur la L2, les participants du rés - La sécurité héritée d'Ethereum -La mise à l'échelle des contrats intelligents du protocole sur la L2 permet aux participants du réseau d'interagir plus fréquemment pour un coût réduit en termes de frais de gaz. Par exemple, les Indexeurs peuvent ouvrir et fermer des allocations plus fréquemment pour indexer un plus grand nombre de subgraphs. Les développeurs peuvent déployer et mettre à jour des subgraphs plus facilement, et les Déléguateurs peuvent déléguer des GRT plus fréquemment. Les Curateurs peuvent ajouter ou supprimer des signaux dans un plus grand nombre de subgraphs - des actions auparavant considérées comme trop coûteuses pour être effectuées fréquemment en raison des frais de gaz. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. 
Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. La communauté Graph a décidé d'avancer avec Arbitrum l'année dernière après le résultat de la discussion [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ Pour tirer parti de l'utilisation de The Graph sur L2, utilisez ce sélecteur d [Sélecteur déroulant pour activer Arbitrum](/img/arbitrum-screenshot-toggle.png) -## En tant que développeur de subgraphs, consommateur de données, indexeur, curateur ou délégateur, que dois-je faire maintenant ? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ Tous les contrats intelligents ont été soigneusement [vérifiés](https://gith Tout a été testé minutieusement et un plan d’urgence est en place pour assurer une transition sûre et fluide. Les détails peuvent être trouvés [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and- considérations de sécurité-20). -## Les subgraphs existants sur Ethereum fonctionnent  t-ils? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## GRT a-t-il un nouveau contrat intelligent déployé sur Arbitrum ? 
diff --git a/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-faq.mdx index d4edd391bed6..9b953a6de6b4 100644 --- a/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ Une exception concerne les portefeuilles de smart contracts comme les multisigs Les outils de transfert L2 utilisent le mécanisme natif d’Arbitrum pour envoyer des messages de L1 à L2. Ce mécanisme s’appelle un « billet modifiable » et est utilisé par tous les ponts de jetons natifs, y compris le pont GRT Arbitrum. Vous pouvez en savoir plus sur les billets retryables dans le [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Lorsque vous transférez vos actifs (subgraph, enjeu, délégation ou curation) vers L2, un message est envoyé par le pont GRT Arbitrum qui crée un ticket modifiable en L2. L’outil de transfert inclut une certaine valeur ETH dans la transaction, qui est utilisée pour 1) payer la création du ticket et 2) payer pour le gaz utile à l'exécution du ticket en L2. Cependant, comme le prix du gaz peut varier durant le temps nécessaire à l'exécution du ticket en L2, il est possible que cette tentative d’exécution automatique échoue. Lorsque cela se produit, le pont Arbitrum maintient le billet remboursable en vie pendant 7 jours, et tout le monde peut réessayer de « racheter » le billet (ce qui nécessite un portefeuille avec des ETH liés à Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. 
However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -C'est ce que nous appelons l'étape « Confirmer » dans tous les outils de transfert : elle s'exécute automatiquement dans la plupart des cas et l'exécution automatique réussit le plus souvent. Il est tout de même important de vérifier que le transfert se soit bien déroulé. Si cela échoue et qu'aucune autre tentative n'est confirmé dans les 7 jours, le pont Arbitrum rejettera le ticket et vos actifs (subgraph, participation, délégation ou curation) ne pourront pas être récupérés. Les développeurs principaux de Graph ont mis en place un système de surveillance pour détecter ces situations et essayer d'échanger les billets avant qu'il ne soit trop tard, mais il en reste de votre responsabilité de vous assurer que votre transfert est terminé à temps. Si vous rencontrez des difficultés pour confirmer votre transaction, veuillez nous contacter en utilisant [ce formulaire](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) et les développeurs seront là pour vous aider. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. 
The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### J'ai commencé le transfert de ma délégation/enjeu/curation et je ne suis pas sûr qu'il soit parvenu jusqu'à L2, comment puis-je confirmer qu'il a été transféré correctement ? @@ -36,43 +36,43 @@ Si vous disposez du hachage de transaction L1 (que vous pouvez trouver en consul ## Subgraph transfert -### Comment transférer mon subgraph ? +### How do I transfer my Subgraph? -Pour transférer votre subgraph, suivez les étapes qui suivent : +To transfer your Subgraph, you will need to complete the following steps: 1. Initier le transfert sur le mainnet Ethereum 2. Attendre 20 minutes pour une confirmation -3. Vérifier le transfert de subgraph sur Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Terminer la publication du sous-graphe sur Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Mettre à jour l’URL de requête (recommandé) -\*Notez que vous devez confirmer le transfert dans un délai de 7 jours, faute de quoi votre subgraph pourrait être perdu. Dans la plupart des cas, cette étape s'exécutera automatiquement, mais une confirmation manuelle peut être nécessaire en cas de hausse du prix du gaz sur Arbitrum. En cas de problème au cours de ce processus, des ressources seront disponibles pour vous aider : contactez le service d'assistance à l'adresse support@thegraph.com ou sur [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost.
In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### D’où dois-je initier mon transfert ? -Vous pouvez effectuer votre transfert à partir de la [Subgraph Studio] (https://thegraph.com/studio/), [Explorer,] (https://thegraph.com/explorer) ou de n’importe quelle page de détails de subgraph. Cliquez sur le bouton "Transférer le subgraph" dans la page de détails du subgraph pour démarrer le transfert. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Combien de temps dois-je attendre avant que mon subgraph soit transféré ? +### How long do I need to wait until my Subgraph is transferred? Le temps de transfert prend environ 20 minutes. Le pont Arbitrum fonctionne en arrière-plan pour terminer automatiquement le transfert du pont. Dans certains cas, les coûts du gaz peuvent augmenter et vous devrez confirmer à nouveau la transaction. -### Mon subgraph sera-t-il toujours repérable après le transfert à L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Votre subgraph ne sera détectable que sur le réseau sur lequel il est publié. Par exemple, si votre subgraph est sur Arbitrum One, vous ne pouvez le trouver que dans Explorer sur Arbitrum One et vous ne pourrez pas le trouver sur Ethereum. Assurez-vous que vous avez Arbitrum One sélectionné dans le commutateur de réseau en haut de la page pour vous assurer que vous êtes sur le bon réseau.  Après le transfert, le subgraph L1 apparaîtra comme obsolète. +Your Subgraph will only be discoverable on the network it is published to.
For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Mon subgraph doit-il être publié afin d'être transférer ? +### Does my Subgraph need to be published to transfer it? -Pour profiter de l’outil de transfert de subgraph, votre subgraph doit déjà être publié sur Ethereum mainnet et doit avoir un signal de curation appartenant au portefeuille qui possède le subgraph. Si votre subgraph n’est pas publié, il est recommandé de publier simplement directement sur Arbitrum One - les frais de gaz associés seront considérablement moins élevés. Si vous souhaitez transférer un subgraph publié mais que le compte propriétaire n’a pas sélectionné de signal, vous pouvez signaler un petit montant (par ex. 1 GRT) à partir de ce compte; assurez-vous de choisir le signal de “migration automatique”. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Que se passe-t-il pour la version Ethereum mainnet de mon subgraph après que j'ai transféré sur Arbitrum ? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Après avoir transféré votre subgraph vers Arbitrum, la version du réseau principal Ethereum deviendra obsolète. 
Nous vous recommandons de mettre à jour votre URL de requête dans les 48 heures. Cependant, il existe une période de grâce qui maintient le fonctionnement de votre URL mainnet afin que tout support dapp tiers puisse être mis à jour. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Après le transfert, dois-je également republier sur Arbitrum ? @@ -80,21 +80,21 @@ Après la fenêtre de transfert de 20 minutes, vous devrez confirmer le transfer ### Mon point de terminaison subira-t-il un temps d'arrêt lors de la republication ? -Il est peu probable, mais possible, de subir un bref temps d'arrêt selon les indexeurs qui prennent en charge le subgraph sur L1 et s'ils continuent à l'indexer jusqu'à ce que le subgraph soit entièrement pris en charge sur L2. +It is unlikely, but possible, to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### La publication et la gestion des versions sont-elles les mêmes sur L2 que sur le mainnet Ethereum ? -Oui. Sélectionnez Arbitrum One comme réseau publié lors de la publication dans le Subgraph Studio. Dans le Studio, le dernier point de terminaison sera disponible et vous dirigera vers la dernière version mise à jour du subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### La curation de mon subgraph sera-t-elle déplacée avec mon subgraph? +### Will my Subgraph's curation move with my Subgraph?
-Si vous avez choisi le signal de migration automatique, 100% de votre propre curation se déplacera avec votre subgraph vers Arbitrum One. Tout le signal de curation du subgraph sera converti en GTR au moment du transfert, et le GRT correspondant à votre signal de curation sera utilisé pour frapper le signal sur le subgraph L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -D’autres conservateurs peuvent choisir de retirer leur fraction de GRT ou de la transférer à L2 pour créer un signal neuf sur le même subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Puis-je déplacer mon subgraph vers le mainnet Ethereum après le transfert? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Une fois transféré, votre version mainnet Ethereum de ce subgraph deviendra obsolète. Si vous souhaitez revenir au mainnet, vous devrez redéployer et publier à nouveau sur le mainnet. Cependant, le transfert vers le mainnet Ethereumt est fortement déconseillé car les récompenses d’indexation seront distribuées entièrement sur Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Pourquoi ai-je besoin d’un pont ETH pour finaliser mon transfert ? @@ -206,19 +206,19 @@ Pour transférer votre curation, vous devrez compléter les étapes suivantes : \*Si nécessaire, c'est-à-dire que vous utilisez une adresse contractuelle. 
-### Comment saurai-je si le subgraph que j'ai organisé a été déplacé vers L2 ? +### How will I know if the Subgraph I curated has moved to L2? -Lors de la visualisation de la page de détails du subgraph, une bannière vous informera que ce subgraph a été transféré. Vous pouvez suivre l'invite pour transférer votre curation. Vous pouvez également trouver ces informations sur la page de détails du subgraph de tout subgraph déplacé. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Que se passe-t-il si je ne souhaite pas déplacer ma curation en L2 ? -Lorsqu’un subgraph est déprécié, vous avez la possibilité de retirer votre signal. De même, si un subgraph est passé à L2, vous pouvez choisir de retirer votre signal dans Ethereum mainnet ou d’envoyer le signal à L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Comment puis-je savoir si ma curation a été transférée avec succès? Les détails du signal seront accessibles via Explorer environ 20 minutes après le lancement de l'outil de transfert L2. -### Puis-je transférer ma curation sur plus d’un subgraph à la fois? +### Can I transfer my curation on more than one Subgraph at a time? Il n’existe actuellement aucune option de transfert groupé. @@ -266,7 +266,7 @@ Il faudra environ 20 minutes à l'outil de transfert L2 pour achever le transfer ### Dois-je indexer sur Arbitrum avant de transférer ma mise ? 
-Vous pouvez effectivement transférer votre mise d’abord avant de mettre en place l’indexation, mais vous ne serez pas en mesure de réclamer des récompenses sur L2 jusqu’à ce que vous allouez à des sous-graphes sur L2, les indexer, et présenter des points d’intérêt. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Les délégués peuvent-ils déplacer leur délégation avant que je ne déplace ma participation à l'indexation ? diff --git a/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-guide.mdx index 6d59607442b4..d6014f6d5dac 100644 --- a/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph a facilité le passage à L2 sur Arbitrum One. Pour chaque participant Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Comment transférer votre subgraph vers Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Avantages du transfert de vos subgraphs +## Benefits of transferring your Subgraphs La communauté et les développeurs du Graph se sont préparés (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) à passer à Arbitrum au cours de l'année écoulée. Arbitrum, une blockchain de couche 2 ou "L2", hérite de la sécurité d'Ethereum mais offre des frais de gaz considérablement réduits. -Lorsque vous publiez ou mettez à niveau votre subgraph sur The Graph Network, vous interagissez avec des contrats intelligents sur le protocole, ce qui nécessite de payer le gaz avec ETH. 
En déplaçant vos subgraphs vers Arbitrum, toute mise à jour future de votre subgraph nécessitera des frais de gaz bien inférieurs. Les frais inférieurs et le fait que les courbes de liaison de curation sur L2 soient plates facilitent également la curation pour les autres conservateurs sur votre subgraph, augmentant ainsi les récompenses des indexeurs sur votre subgraph. Cet environnement moins coûteux rend également moins cher pour les indexeurs l'indexation et la diffusion de votre subgraph. Les récompenses d'indexation augmenteront sur Arbitrum et diminueront sur le réseau principal Ethereum au cours des prochains mois, de sorte que de plus en plus d'indexeurs transféreront leur participation et établiront leurs opérations sur L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Comprendre ce qui se passe avec le signal, votre subgraph L1 et les URL de requête +## Understanding what happens with signal, your L1 Subgraph and query URLs -Le transfert d'un subgraph vers Arbitrum utilise le pont GRT sur Arbitrum, qui à son tour utilise le pont natif d'Arbitrum, pour envoyer le subgraph vers L2. Le 'transfert' va déprécier le subgraph sur le mainnet et envoyer les informations pour recréer le subgraph sur L2 en utilisant le pont. 
Il inclura également les GRT signalés par le propriétaire du subgraph, qui doivent être supérieurs à zéro pour que le pont accepte le transfert. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Lorsque vous choisissez de transférer le subgraph, cela convertira tous les signaux de curation du subgraph en GRT. Cela équivaut à "déprécier" le subgraph sur le mainnet. Les GRT correspondant à votre curation seront envoyés à L2 avec le subgraph, où ils seront utilisés pour monnayer des signaux en votre nom. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Les autres curateurs peuvent choisir de retirer leur fraction de GRT ou de la transférer également à L2 pour le signal de monnayage sur le même subgraph. Si un propriétaire de subgraph ne transfère pas son subgraph à L2 et le déprécie manuellement via un appel de contrat, les curateurs en seront informés et pourront retirer leur curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Dès que le subgraph est transféré, puisque toute la curation est convertie en GRT, les indexeurs ne recevront plus de récompenses pour l'indexation du subgraph. 
Cependant, certains indexeurs 1) continueront à servir les subgraphs transférés pendant 24 heures et 2) commenceront immédiatement à indexer le subgraph sur L2. Comme ces indexeurs ont déjà indexé le subgraph, il ne devrait pas être nécessaire d'attendre la synchronisation du subgraph, et il sera possible d'interroger le subgraph L2 presque immédiatement. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Les requêtes vers le subgraph L2 devront être effectuées vers une URL différente (sur `arbitrum-gateway.thegraph.com`), mais l'URL L1 continuera à fonctionner pendant au moins 48 heures. Après cela, la passerelle L1 transmettra les requêtes à la passerelle L2 (pendant un certain temps), mais cela augmentera la latence. Il est donc recommandé de basculer toutes vos requêtes vers la nouvelle URL dès que possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choisir son portefeuille L2 -Lorsque vous avez publié votre subgraph sur le mainnet, vous avez utilisé un portefeuille connecté pour créer le subgraph, et ce portefeuille possède le NFT qui représente ce subgraph et vous permet de publier des mises à jour. 
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Lors du transfert du subgraph vers Arbitrum, vous pouvez choisir un autre portefeuille qui possédera ce subgraph NFT sur L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Si vous utilisez un portefeuille "normal" comme MetaMask (un Externally Owned Account ou EOA, c'est-à-dire un portefeuille qui n'est pas un smart contract), cette étape est facultative et il est recommandé de conserver la même adresse de propriétaire que dans L1. -Si vous utilisez un portefeuille de smart contrat, comme un multisig (par exemple un Safe), alors choisir une adresse de portefeuille L2 différente est obligatoire, car il est très probable que ce compte n'existe que sur le mainnet et vous ne pourrez pas faire de transactions sur Arbitrum en utilisant ce portefeuille. Si vous souhaitez continuer à utiliser un portefeuille de contrat intelligent ou un multisig, créez un nouveau portefeuille sur Arbitrum et utilisez son adresse comme propriétaire L2 de votre subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Il est très important d'utiliser une adresse de portefeuille que vous contrôlez, et qui peut effectuer des transactions sur Arbitrum. Dans le cas contraire, le subgraph sera perdu et ne pourra pas être récupéré** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum.
Otherwise, the Subgraph will be lost and cannot be recovered.** ## Préparer le transfert : faire le pont avec quelques ETH -Le transfert du subgraph implique l'envoi d'une transaction à travers le pont, puis l'exécution d'une autre transaction sur Arbitrum. La première transaction utilise de l'ETH sur le mainnet, et inclut de l'ETH pour payer le gaz lorsque le message est reçu sur L2. Cependant, si ce gaz est insuffisant, vous devrez réessayer la transaction et payer le gaz directement sur L2 (c'est "l'étape 3 : Confirmer le transfert" ci-dessous). Cette étape **doit être exécutée dans les 7 jours suivant le début du transfert**. De plus, la deuxième transaction ("Etape 4 : Terminer le transfert sur L2") se fera directement sur Arbitrum. Pour ces raisons, vous aurez besoin de quelques ETH sur un portefeuille Arbitrum. Si vous utilisez un compte multisig ou smart contract, l'ETH devra être dans le portefeuille régulier (EOA) que vous utilisez pour exécuter les transactions, et non sur le portefeuille multisig lui-même. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself.
Vous pouvez acheter de l'ETH sur certains échanges et le retirer directement sur Arbitrum, ou vous pouvez utiliser le pont Arbitrum pour envoyer de l'ETH d'un portefeuille du mainnet vers L2 : [bridge.arbitrum.io](http://bridge.arbitrum.io). Étant donné que les frais de gaz sur Arbitrum sont moins élevés, vous ne devriez avoir besoin que d'une petite quantité. Il est recommandé de commencer par un seuil bas (par exemple 0,01 ETH) pour que votre transaction soit approuvée. -## Trouver l'outil de transfert de subgraph +## Finding the Subgraph Transfer Tool -Vous pouvez trouver l'outil de transfert L2 lorsque vous consultez la page de votre subgraph dans le Subgraph Studio : +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![outil de transfert](/img/L2-transfer-tool1.png) -Elle est également disponible sur Explorer si vous êtes connecté au portefeuille qui possède un subgraph et sur la page de ce subgraph sur Explorer : +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transfert vers L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ En cliquant sur le bouton Transférer vers L2, vous ouvrirez l'outil de transfer ## Étape 1 : Démarrer le transfert -Avant de commencer le transfert, vous devez décider quelle adresse sera propriétaire du subgraph sur L2 (voir "Choisir votre portefeuille L2" ci-dessus), et il est fortement recommandé d'avoir quelques ETH pour le gaz déjà bridgé sur Arbitrum (voir "Préparer le transfert : brider quelques ETH" ci-dessus). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
-Veuillez également noter que le transfert du subgraph nécessite d'avoir un montant de signal non nul sur le subgraph avec le même compte qui possède le subgraph ; si vous n'avez pas signalé sur le subgraph, vous devrez ajouter un peu de curation (ajouter un petit montant comme 1 GRT suffirait). +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -Après avoir ouvert l'outil de transfert, vous pourrez saisir l'adresse du portefeuille L2 dans le champ "Adresse du portefeuille destinataire" - **assurez-vous que vous avez saisi la bonne adresse ici**. En cliquant sur Transférer le subgraph, vous serez invité à exécuter la transaction sur votre portefeuille (notez qu'une certaine valeur ETH est incluse pour payer le gaz L2) ; cela lancera le transfert et dépréciera votre subgraph L1 (voir "Comprendre ce qui se passe avec le signal, votre subgraph L1 et les URL de requête" ci-dessus pour plus de détails sur ce qui se passe en coulisses). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). 
-Si vous exécutez cette étape, **assurez-vous de continuer jusqu'à terminer l'étape 3 en moins de 7 jours, sinon le subgraph et votre signal GRT seront perdus.** Cela est dû au fonctionnement de la messagerie L1-L2 sur Arbitrum : les messages qui sont envoyés via le pont sont des « tickets réessayables » qui doivent être exécutés dans les 7 jours, et l'exécution initiale peut nécessiter une nouvelle tentative s'il y a des pics dans le prix du gaz sur Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Démarrer le transfert vers la L2](/img/startTransferL2.png) -## Étape 2 : Attendre que le subgraphe atteigne L2 +## Step 2: Waiting for the Subgraph to get to L2 -Après avoir lancé le transfert, le message qui envoie votre subgraph de L1 à L2 doit se propager à travers le pont Arbitrum. Cela prend environ 20 minutes (le pont attend que le bloc du réseau principal contenant la transaction soit "sûr" face aux potentielles réorganisations de la chaîne). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Une fois ce temps d'attente terminé, le réseau Arbitrum tentera d'exécuter automatiquement le transfert sur les contrats L2. 
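The 7-day deadline for completing step 3 can be made concrete with a small sketch. This is an illustrative helper only, assuming you recorded the time of your L1 transfer transaction yourself; it is not part of The Graph's or Arbitrum's tooling:

```python
from datetime import datetime, timedelta, timezone

# Retryable tickets on the Arbitrum bridge expire 7 days after
# the L1 transaction that created them.
RETRY_WINDOW = timedelta(days=7)

def can_still_retry(transfer_started_at: datetime, now: datetime) -> bool:
    """True if the transfer's retryable ticket is still within its window."""
    return now - transfer_started_at < RETRY_WINDOW

started = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(can_still_retry(started, started + timedelta(days=3)))  # still retryable
print(can_still_retry(started, started + timedelta(days=8)))  # expired
```

Checking this early leaves time to retry the ticket manually (step 3) before the Subgraph and signal GRT are lost.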
@@ -80,7 +80,7 @@ Une fois ce temps d'attente terminé, le réseau Arbitrum tentera d'exécuter au ## Étape 3 : Confirmer le transfert -Dans la plupart des cas, cette étape s'exécutera automatiquement car le gaz L2 inclus dans l'étape 1 devrait être suffisant pour exécuter la transaction qui reçoit le subgraph sur les contrats Arbitrum. Cependant, dans certains cas, il est possible qu'une hausse soudaine des prix du gaz sur Arbitrum entraîne l'échec de cette exécution automatique. Dans ce cas, le "ticket" qui envoie votre subgraphe vers L2 sera en attente et nécessitera une nouvelle tentative dans les 7 jours. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. Si c'est le cas, vous devrez vous connecter en utilisant un portefeuille L2 qui contient de l'ETH sur Arbitrum, changer le réseau de votre portefeuille vers Arbitrum, et cliquer sur "Confirmer le transfert" pour retenter la transaction. @@ -88,33 +88,33 @@ Si c'est le cas, vous devrez vous connecter en utilisant un portefeuille L2 qui ## Étape 4 : Terminer le transfert sur L2 -À ce stade, votre subgraph et vos GRT ont été reçus sur Arbitrum, mais le subgraph n'est pas encore publié. Vous devrez vous connecter à l'aide du portefeuille L2 que vous avez choisi comme portefeuille de réception, basculer votre réseau de portefeuille sur Arbitrum et cliquer sur « Publier le subgraph.» +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." 
-![Publier le subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Attendez que le subgraph soit publié](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Cela permettra de publier le subgraph afin que les indexeurs opérant sur Arbitrum puissent commencer à le servir. Il va également modifier le signal de curation en utilisant les GRT qui ont été transférés de L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Étape 5 : Mise à jour de l'URL de la requête -Votre subgraph a été transféré avec succès vers Arbitrum ! Pour interroger le subgraph, la nouvelle URL sera : +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id] -Notez que l'ID du subgraph sur Arbitrum sera différent de celui que vous aviez sur le mainnet, mais vous pouvez toujours le trouver sur Explorer ou Studio. Comme mentionné ci-dessus (voir "Comprendre ce qui se passe avec le signal, votre subgraph L1 et les URL de requête"), l'ancienne URL L1 sera prise en charge pendant une courte période, mais vous devez basculer vos requêtes vers la nouvelle adresse dès que le subgraph aura été synchronisé sur L2. +Note that the Subgraph ID on Arbitrum will be different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.
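To make the URL change above concrete, here is a minimal sketch of building the new L2 gateway endpoint. The helper name and the placeholder values are illustrative only; substitute your real API key and the L2 Subgraph ID shown in Explorer or Studio:

```python
def l2_query_url(api_key: str, l2_subgraph_id: str) -> str:
    """Build the Arbitrum gateway query URL for a transferred Subgraph."""
    return (
        "https://arbitrum-gateway.thegraph.com/api/"
        f"{api_key}/subgraphs/id/{l2_subgraph_id}"
    )

# A GraphQL query is POSTed to this URL as a JSON body, e.g.:
payload = {"query": "{ _meta { block { number } } }"}
url = l2_query_url("<api-key>", "<l2-subgraph-id>")
print(url)
```

Updating this one URL in your dapp's configuration is usually all that step 5 requires.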
## Comment transférer votre curation vers Arbitrum (L2) -## Comprendre ce qui arrive à la curation lors des transferts de subgraphs vers L2 +## Understanding what happens to curation on Subgraph transfers to L2 -Lorsque le propriétaire d'un subgraph transfère un subgraph vers Arbitrum, tout le signal du subgraph est converti en GRT en même temps. Cela s'applique au signal "auto-migré", c'est-à-dire au signal qui n'est pas spécifique à une version de subgraph ou à un déploiement, mais qui suit la dernière version d'un subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Cette conversion du signal en GRT est identique à ce qui se produirait si le propriétaire du subgraph dépréciait le subgraph en L1. Lorsque le subgraph est déprécié ou transféré, tout le signal de curation est "brûlé" simultanément (en utilisant la courbe de liaison de curation) et le GRT résultant est détenu par le contrat intelligent GNS (c'est-à-dire le contrat qui gère les mises à niveau des subgraphs et le signal auto-migré). Chaque Curateur de ce subgraph a donc droit à ce GRT de manière proportionnelle à la quantité de parts qu'il détenait pour le subgraph. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. 
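The proportional claim described above is simple arithmetic. The sketch below is illustrative only (it is not the GNS contract's actual interface, and the figures are made up) and shows how a Curator's share of the burned-signal GRT would be computed:

```python
def curator_claim_grt(curator_shares: int, total_shares: int, gns_grt: float) -> float:
    """GRT one Curator can claim, proportional to their curation shares."""
    return gns_grt * curator_shares / total_shares

# e.g. holding 250 of 1,000 curation shares of a Subgraph whose burned
# signal yielded 10,000 GRT entitles the Curator to 2,500 GRT.
print(curator_claim_grt(250, 1_000, 10_000))  # 2500.0
```

Because the split is purely proportional, the claim does not depend on when each Curator withdraws or transfers.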
-Une fraction de ces GRT correspondant au propriétaire du subgraph est envoyée à L2 avec le subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -À ce stade, le GRT organisé n'accumulera plus de frais de requête, les conservateurs peuvent donc choisir de retirer leur GRT ou de le transférer vers le même subgraph sur L2, où il pourra être utilisé pour créer un nouveau signal de curation. Il n'y a pas d'urgence à le faire car le GRT peut être utile indéfiniment et chacun reçoit un montant proportionnel à ses actions, quel que soit le moment où il le fait. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Choisir son portefeuille L2 @@ -130,9 +130,9 @@ Si vous utilisez un portefeuille de contrat intelligent, comme un multisig (par Avant de commencer le transfert, vous devez décider quelle adresse détiendra la curation sur L2 (voir "Choisir votre portefeuille L2" ci-dessus), et il est recommandé d'avoir des ETH pour le gaz déjà pontés sur Arbitrum au cas où vous auriez besoin de réessayer l'exécution du message sur L2. Vous pouvez acheter de l'ETH sur certaines bourses et le retirer directement sur Arbitrum, ou vous pouvez utiliser le pont Arbitrum pour envoyer de l'ETH depuis un portefeuille du mainnet vers L2 : [bridge.arbitrum.io](http://bridge.arbitrum.io) - étant donné que les frais de gaz sur Arbitrum sont si bas, vous ne devriez avoir besoin que d'un petit montant, par ex. 0,01 ETH sera probablement plus que suffisant. -Si un subgraph que vous organisez a été transféré vers L2, vous verrez un message sur l'Explorateur vous indiquant que vous organisez un subgraph transféré.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -En consultant la page du subgraph, vous pouvez choisir de retirer ou de transférer la curation. En cliquant sur "Transférer le signal vers Arbitrum", vous ouvrirez l'outil de transfert. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Signal de transfert](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Si c'est le cas, vous devrez vous connecter en utilisant un portefeuille L2 qui ## Retrait de la curation sur L1 -Si vous préférez ne pas envoyer votre GRT vers L2, ou si vous préférez combler le GRT manuellement, vous pouvez retirer votre GRT organisé sur L1. Sur la bannière de la page du subgraph, choisissez « Retirer le signal » et confirmez la transaction ; le GRT sera envoyé à votre adresse de conservateur. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/fr/archived/sunrise.mdx b/website/src/pages/fr/archived/sunrise.mdx index 575d138c0f55..dc20e31aee77 100644 --- a/website/src/pages/fr/archived/sunrise.mdx +++ b/website/src/pages/fr/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly.
-This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. 
As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## À propos de l'indexeur de mise à niveau > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Pourquoi Edge & Node exécutent-ils l'indexeur de mise à niveau ? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. 
+Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### Que signifie la mise à niveau de l'indexeur pour les indexeurs existants ? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. 
+No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ L'indexeur de mise à niveau active les chaînes sur le réseau qui n'étaient a The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. 
-Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/src/pages/fr/global.json b/website/src/pages/fr/global.json index 71ccdec34af5..42719abe3b7b 100644 --- a/website/src/pages/fr/global.json +++ b/website/src/pages/fr/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Paramètres de requête", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Description", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this 
endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Description", + "liveResponse": "Live Response", + "example": "Exemple" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/fr/index.json b/website/src/pages/fr/index.json index f8474c21382e..ee19877c78e6 100644 --- a/website/src/pages/fr/index.json +++ b/website/src/pages/fr/index.json @@ -7,7 +7,7 @@ "cta2": "Construisez votre premier subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "Facturation", "description": "Optimize costs and manage billing efficiently." } }, @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." 
+ "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "Qu'est-ce que la délégation ?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/fr/indexing/_meta-titles.json b/website/src/pages/fr/indexing/_meta-titles.json index 42f4de188fd4..29c95ac126cd 100644 --- a/website/src/pages/fr/indexing/_meta-titles.json +++ b/website/src/pages/fr/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Outillage de l'indexeur" } diff --git a/website/src/pages/fr/indexing/chain-integration-overview.mdx b/website/src/pages/fr/indexing/chain-integration-overview.mdx index 4bbb83bdc4a9..48787263c1af 100644 --- a/website/src/pages/fr/indexing/chain-integration-overview.mdx +++ b/website/src/pages/fr/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ Ce processus est lié au service de données Subgraph, applicable uniquement aux ### 2. Que se passe-t-il si la prise en charge de Firehose et Substreams intervient après que le réseau est pris en charge sur le mainnet ? -Cela n’aurait un impact que sur la prise en charge du protocole pour l’indexation des récompenses sur les subgraphs alimentés par Substreams. 
La nouvelle implémentation de Firehose nécessiterait des tests sur testnet, en suivant la méthodologie décrite pour l'étape 2 de ce GIP. De même, en supposant que l'implémentation soit performante et fiable, un PR sur la [Matrice de support des fonctionnalités](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) serait requis ( Fonctionnalité de sous-graphe « Sous-flux de sources de données »), ainsi qu'un nouveau GIP pour la prise en charge du protocole pour l'indexation des récompenses. N'importe qui peut créer le PR et le GIP ; la Fondation aiderait à obtenir l'approbation du Conseil. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. Combien de temps faudra-t-il pour parvenir à la prise en charge complète du protocole ? diff --git a/website/src/pages/fr/indexing/new-chain-integration.mdx b/website/src/pages/fr/indexing/new-chain-integration.mdx index b5b6fa8ccd73..ab70ce6efb3a 100644 --- a/website/src/pages/fr/indexing/new-chain-integration.mdx +++ b/website/src/pages/fr/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: Intégration d'une Nouvelle Chaîne --- -Les chaînes peuvent apporter le support des subgraphs à leur écosystème en démarrant une nouvelle intégration `graph-node`. Les subgraphs sont un outil d'indexation puissant qui ouvre un monde de possibilités pour les développeurs. Graph Node indexe déjà les données des chaînes listées ici. 
Si vous êtes intéressé par une nouvelle intégration, il existe 2 stratégies d'intégration : +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose** : toutes les solutions d'intégration Firehose incluent Substreams, un moteur de streaming à grande échelle basé sur Firehose avec prise en charge native de `graph-node`, permettant des transformations parallélisées. @@ -47,15 +47,15 @@ Pour les chaînes EVM, il existe un niveau de données plus approfondi qui peut ## Considérations sur EVM - Différence entre JSON-RPC et Firehose -Bien que le JSON-RPC et le Firehose soient tous deux adaptés aux subgraphs, un Firehose est toujours nécessaire pour les développeurs qui souhaitent construire avec [Substreams](https://substreams.streamingfast.io). La prise en charge de Substreams permet aux développeurs de construire des [subgraphs alimentés par Substreams](/subgraphs/cookbook/substreams-powered-subgraphs/) pour la nouvelle chaîne, et a le potentiel d'améliorer les performances de vos subgraphs. De plus, Firehose - en tant que remplacement direct de la couche d'extraction JSON-RPC de `graph-node` - réduit de 90% le nombre d'appels RPC requis pour l'indexation générale. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. 
Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- Tous ces appels et allers-retours `getLogs` sont remplacés par un seul flux arrivant au cœur de `graph-node` ; un modèle de bloc unique pour tous les subgraphs qu'il traite. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTEZ: une intégration basée sur Firehose pour les chaînes EVM nécessitera toujours que les indexeurs exécutent le nœud RPC d'archivage de la chaîne pour indexer correctement les subgraphs. Cela est dû à l'incapacité de Firehose à fournir un état de contrat intelligent généralement accessible par la méthode RPC `eth_calls`. (Il convient de rappeler que les `eth_call` ne sont pas une bonne pratique pour les développeurs) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_calls` are not a good practice for developers) ## Configuration Graph Node -La configuration de Graph Node est aussi simple que la préparation de votre environnement local. Une fois votre environnement local défini, vous pouvez tester l'intégration en déployant localement un subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1.
[Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ La configuration de Graph Node est aussi simple que la préparation de votre env ## Subgraphs alimentés par des substreams -Pour les intégrations Firehose/Substreams pilotées par StreamingFast, la prise en charge de base des modules Substreams fondamentaux (par exemple, les transactions décodées, les logs et les événements smart-contract) et les outils codegen Substreams sont inclus. Ces outils permettent d'activer des [subgraphs alimentés par Substreams](/substreams/sps/introduction/). Suivez le [Guide pratique](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) et exécutez `substreams codegen subgraph` pour expérimenter les outils codegen par vous-même. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/fr/indexing/overview.mdx b/website/src/pages/fr/indexing/overview.mdx index aedc3415a442..1c0d0f9c7221 100644 --- a/website/src/pages/fr/indexing/overview.mdx +++ b/website/src/pages/fr/indexing/overview.mdx @@ -7,41 +7,41 @@ Les indexeurs sont des opérateurs de nœuds dans The Graph Network qui mettent Le GRT intégré au protocole est soumis à une période de décongélation et peut être réduit si les indexeurs sont malveillants et fournissent des données incorrectes aux applications ou s'ils indexent de manière incorrecte. Les indexeurs gagnent également des récompenses pour la participation déléguée des délégués, afin de contribuer au réseau.
-Les indexeurs sélectionnent les subgraphs à indexer en fonction du signal de curation du subgraph, où les curateurs misent du GRT afin d'indiquer quels subgraphs sont de haute qualité et doivent être priorisés. Les consommateurs (par exemple les applications) peuvent également définir les paramètres pour lesquels les indexeurs traitent les requêtes pour leurs subgraphs et définir les préférences pour la tarification des frais de requête. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ -### What is the minimum stake required to be an Indexer on the network? +### Quel est le staking minimal requis pour être Indexeur sur le réseau ? -The minimum stake for an Indexer is currently set to 100K GRT. +Le staking minimal pour un Indexeur est actuellement fixé à 100 000 GRT. -### What are the revenue streams for an Indexer? +### Quelles sont les sources de revenus d'un Indexeur ? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**Rabais de frais de requête** - Paiements pour servir les requêtes sur le réseau. Ces paiements sont effectués par l'intermédiaire de canaux d'état entre un Indexeur et une passerelle. Chaque demande de requête provenant d'une passerelle contient un paiement et la réponse correspondante est une preuve de la validité du résultat de la requête. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. -### How are indexing rewards distributed? +### Comment les récompenses d'indexation sont-elles distribuées ? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +De nombreux outils ont été créés par la communauté pour calculer les récompenses ; vous en trouverez une collection organisée dans la [Collection des guides de la communauté](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). 
Vous pouvez également trouver une liste actualisée d'outils dans les canaux #Delegators et #Indexers sur le [serveur Discord](https://discord.gg/graphprotocol). Nous présentons ici un lien vers un [optimiseur d'allocation recommandé](https://github.com/graphprotocol/allocation-optimizer) intégré à la pile logicielle de l'Indexeur. -### What is a proof of indexing (POI)? +### Qu'est-ce qu'une preuve d'indexation (POI) ? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. -### When are indexing rewards distributed? +### Quand les récompenses d'indexation sont-elles distribuées ? -Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +Les allocations accumulent continuellement des récompenses tant qu'elles sont actives et allouées dans un délai de 28 époques. Les récompenses sont collectées par les Indexeurs et distribuées lorsque leurs allocations sont fermées. 
Cela se fait soit manuellement, lorsque l'Indexeur veut forcer la fermeture, soit après 28 époques, un Déléguateur peut fermer l'allocation pour l'Indexeur, mais cela n'entraîne pas de récompenses. 28 époques est la durée de vie maximale d'une allocation (actuellement, une époque dure environ 24 heures). -### Can pending indexing rewards be monitored? +### Les récompenses d'indexation en attente peuvent-elles être surveillées ? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +Le contrat RewardsManager dispose d'une fonction en lecture seule [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) qui peut être utilisée pour vérifier les récompenses en attente pour une allocation spécifique. -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +De nombreux tableaux de bord élaborés par la communauté comprennent des valeurs de récompenses en attente et il est facile de les vérifier manuellement en suivant les étapes suivantes : -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. 
Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Utilisez Etherscan pour appeler `getRewards()` : -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +- Naviguer vers [l'Interface Etherscan pour le contrat de récompenses](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Pour appeler `getRewards()` : + - Développez la liste déroulante **9. getRewards**. + - Saisissez l'**allocationID** dans le champ de saisie. + - Cliquez sur le bouton **Query**. -### What are disputes and where can I view them? +### Que sont les litiges et où puis-je les consulter ? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +Les requêtes et les allocations des Indexeurs peuvent toutes deux être contestées sur The Graph pendant la période de contestation. La période de contestation varie en fonction du type de contestation.
Les requêtes/attestations ont une fenêtre de contestation de 7 époques, tandis que les attributions ont une fenêtre de 56 époques. Une fois ces périodes écoulées, il n'est plus possible d'ouvrir un litige contre une allocation ou une requête. Lorsqu'un litige est ouvert, un dépôt d'un minimum de 10 000 GRT est exigé par les Fisherman, qui sera bloqué jusqu'à ce que le litige soit finalisé et qu'une résolution ait été donnée. Les Fisherman sont tous les participants au réseau qui ouvrent des litiges. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +Les litiges ont **trois** issues possibles, tout comme le dépôt des Fishermen. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- Si le litige est rejeté, les GRT déposées par le Fishermen seront brûlées et l’Indexeur accusé n’est pas « slashed » (aucune pénalité). +- Si le litige se solde par un match nul, la caution du Fisherman sera restituée et l’Indexeur mis en cause n’est pas pénalisé. +- Si le litige est accepté, les GRT déposés par le Fisherman lui seront restitués, l’Indexeur mis en cause sera pénalisé (slashed) et le Fisherman recevra 50 % des GRT ainsi confisqués. -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +Les litiges peuvent être consultés dans l'interface utilisateur sur la page de profil d'un indexeur sous l'onglet `Disputes`. -### What are query fee rebates and when are they distributed? +### Que sont les query fee rebates et quand sont-ils distribués ? 
-Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. +Les frais de requête sont collectés par la passerelle et distribués aux Indexeurs selon la fonction de ristourne exponentielle (exponential rebate function, voir GIP à ce sujet [ici](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). Cette fonction est proposée comme un moyen de garantir que les Indexeurs obtiennent le meilleur résultat en répondant fidèlement aux requêtes. Elle incite les Indexeurs à allouer un montant élevé de staking (qui peut être réduit en cas d'erreur lors du service d'une requête) par rapport au montant des frais de requête qu'ils peuvent percevoir. -Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +Une fois l’allocation clôturée, les ristournes peuvent être réclamées par l’Indexeur. Une fois réclamées, ces ristournes sur les frais de requête sont partagées entre l’Indexeur et ses Délégateurs, conformément au query fee cut et à la fonction de ristourne exponentielle. -### What is query fee cut and indexing reward cut? +### Que sont les query fee cut et l’indexing reward cut ? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. 
See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. +Les valeurs `queryFeeCut` et `indexingRewardCut` sont des paramètres de délégation que l'Indexeur peut définir avec les cooldownBlocks pour contrôler la distribution des GRT entre l'Indexeur et ses Délégateurs. Voir les dernières étapes de [Staking dans le Protocol](/indexing/overview/#stake-in-the-protocol) pour les instructions sur la définition des paramètres de délégation. -- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** - le pourcentage des ristournes sur les frais de requête qui sera distribué à l'Indexeur. Si cette valeur est fixée à 95 %, l'Indexeur recevra 95 % des frais de requête perçus lors de la clôture d'une allocation, les 5 % restants revenant aux Délégateurs. -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. +- **indexingRewardCut** - le pourcentage des récompenses d'indexation qui sera distribué à l'Indexeur. Si cette valeur est fixée à 95 %, l'Indexeur recevra 95 % des récompenses d'indexation lorsqu'une allocation est clôturée et les Délégateurs se partageront les 5 % restants. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index?
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. 
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. -### What are the hardware requirements? +### Quelles sont les exigences en matière de matériel ? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. +- **Standard** - Configuration par défaut, c'est ce qui est utilisé dans les manifestes de déploiement de l'exemple k8s/terraform. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| Configuration | Postgres
(CPUs) | Postgres
(mémoire en Go) | Postgres
(disque en To) | VMs
(CPUs) | VMs
(mémoire en Go) | | --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | +| Petit | 4 | 8 | 1 | 4 | 16 | | Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Moyen | 16 | 64 | 2 | 32 | 64 | +| Grand | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an Indexer should take? +### Quelles sont les précautions de sécurité de base qu'un Indexeur doit prendre ? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions. +- **Portefeuille de l'opérateur** - La mise en place d'un portefeuille de l'opérateur est une précaution importante car elle permet à un Indexeur de maintenir une séparation entre les clés qui contrôlent le staking et celles qui contrôlent les opérations quotidiennes. Voir [Staking dans le Protocol](/indexing/overview/#stake-in-the-protocol) pour les instructions. -- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **Firewall** - Seul le service de l'Indexeur doit être exposé publiquement et une attention particulière doit être portée au verrouillage des ports d'administration et de l'accès à la base de données : l'endpoint JSON-RPC de Graph Node (port par défaut : 8030), l'endpoint de l'API de gestion de l'Indexeur (port par défaut : 18000), et l'endpoint de la base de données Postgres (port par défaut : 5432) ne doivent pas être exposés.
## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. 
This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Service de l'Indexeur** - Gère toutes les communications externes requises avec le réseau. Il partage les modèles de coûts et les états d'indexation, transmet les requêtes des passerelles à un Graph Node et gère les paiements des requêtes via des canaux d'état avec la passerelle. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. 
-- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. +- **Serveur de métriques Prometheus** - Les composants Graph Node et Indexeur enregistrent leurs métriques sur le serveur de métriques. -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +Remarque : pour permettre une mise à l'échelle souple, il est recommandé de séparer les tâches de requête et d'indexation entre différents ensembles de nœuds : les nœuds de requête et les nœuds d'indexation. -### Ports overview +### Vue d'ensemble des ports -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below. +> **Important** : Attention à ne pas exposer les ports publiquement - les **ports d'administration** doivent être verrouillés. Cela inclut les endpoints JSON-RPC de Graph Node et les endpoints de gestion de l'Indexeur détaillés ci-dessous. #### Nœud de The Graph -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Port | Objectif | Routes | Argument CLI | Variable d'environnement | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(pour gérer les déploiements) | / | \--admin-port | - | +| 8030 | API du statut de l'indexation des subgraphs | /graphql | \--index-node-port | - | +| 8040 | Métriques Prometheus | /metrics | \--metrics-port | - | -#### Indexer Service +#### Service d'Indexeur -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Port | Objectif | Routes | Argument CLI | Variable d'environnement | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Métriques Prometheus | /metrics | \--metrics-port | - | #### Indexer Agent -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Port | Objectif | Routes | Argument CLI | Variable d'environnement | +| ---- | ---------------------------- | ------ | -------------------------- | --------------------------------------- | +| 8000 | API de gestion des Indexeurs | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Mise en place d'une infrastructure de serveurs à l'aide de Terraform sur Google Cloud -> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. +> Remarque : les Indexeurs peuvent également utiliser AWS, Microsoft Azure ou Alibaba. -#### Install prerequisites +#### Installer les prérequis -- Google Cloud SDK -- Kubectl command line tool +- SDK Google Cloud -- Outil en ligne de commande Kubectl - Terraform -#### Create a Google Cloud Project +#### Créer un projet Google Cloud -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- Clonez le [dépôt de l'Indexeur](https://github.com/graphprotocol/indexer) ou naviguez jusqu'à celui-ci. -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- Naviguez jusqu'au répertoire `./terraform`, c'est là que toutes les commandes doivent être exécutées. ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Authentifiez-vous auprès de Google Cloud et créez un nouveau projet.
```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Utilisez la page de facturation de la Google Cloud Console pour activer la facturation du nouveau projet. -- Create a Google Cloud configuration. +- Créez une configuration Google Cloud. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Activer les API Google Cloud requises. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Créer un compte de service. ```sh svc_name= @@ -225,7 +225,7 @@ gcloud iam service-accounts create $svc_name \ --description="Service account for Terraform" \ --display-name="$svc_name" gcloud iam service-accounts list -# Get the email of the service account from the list +# Obtenir l'email du compte de service à partir de la liste svc=$(gcloud iam service-accounts list --format='get(email)' --filter="displayName=$svc_name") gcloud iam service-accounts keys create .gcloud-credentials.json \ @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Activer le peering entre la base de données et le cluster Kubernetes qui sera créé à l'étape suivante. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). 
+- Créer un fichier de configuration minimal pour terraform (mettre à jour si nécessaire). ```sh indexer= @@ -260,24 +260,24 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### Utiliser Terraform pour créer une infrastructure -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +Avant de lancer une commande, lisez [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) et créez un fichier `terraform.tfvars` dans ce répertoire (ou modifiez celui que nous avons créé à la dernière étape). Pour chaque variable pour laquelle vous voulez remplacer la valeur par défaut, ou pour laquelle vous avez besoin de définir une valeur, entrez un paramètre dans `terraform.tfvars`. -- Run the following commands to create the infrastructure. +- Exécutez les commandes suivantes pour créer l'infrastructure. ```sh -# Install required plugins +# Installer les plugins nécessaires terraform init -# View plan for resources to be created +# Visualiser le plan des ressources à créer terraform plan -# Create the resources (expect it to take up to 30 minutes) +# Créer les ressources (cela peut prendre jusqu'à 30 minutes) terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +Téléchargez les informations d'identification du nouveau cluster dans `~/.kube/config` et définissez-le comme contexte par défaut. 
```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the Indexer +#### Création des composants Kubernetes pour l'Indexeur -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- Copiez le répertoire `k8s/overlays` dans un nouveau répertoire `$dir,` et ajustez l'entrée `bases` dans `$dir/kustomization.yaml` pour qu'elle pointe vers le répertoire `k8s/base`. -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- Lisez tous les fichiers de `$dir` et ajustez les valeurs indiquées dans les commentaires. -Deploy all resources with `kubectl apply -k $dir`. +Déployer toutes les ressources avec `kubectl apply -k $dir`. ### Nœud de The Graph -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. 
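Une fois un Graph Node démarré, son état peut être vérifié via l'API de statut d'indexation (port 8030, route `/graphql`, listée dans la vue d'ensemble des ports plus haut). Esquisse indicative en shell — les champs `indexingStatuses { subgraph synced health }` de cette API sont ici une hypothèse de travail, à adapter selon la version de Graph Node :

```sh
# Construit le corps JSON d'une requête GraphQL vers l'API de statut d'indexation.
QUERY='{ indexingStatuses { subgraph synced health } }'
BODY=$(printf '{"query": "%s"}' "$QUERY")
echo "$BODY"

# Envoi réel vers un Graph Node local (à décommenter une fois le nœud démarré) :
# curl -s -X POST -H "Content-Type: application/json" \
#   -d "$BODY" http://localhost:8030/graphql
```

Rappel : comme indiqué dans la section sur les ports, cet endpoint de statut est un port d'administration et ne doit pas être exposé publiquement.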
-#### Getting started from source +#### Démarrer à partir des sources -#### Install prerequisites +#### Installer les prérequis - **Rust** @@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Exigences supplémentaires pour les utilisateurs d'Ubuntu** - Pour faire fonctionner un Graph Node sur Ubuntu, quelques packages supplémentaires peuvent être nécessaires. ```sh sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### Configuration -1. Start a PostgreSQL database server +1. Démarrer un serveur de base de données PostgreSQL ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Clonez le dépôt [Graph Node](https://github.com/graphprotocol/graph-node) et compilez les sources en lançant `cargo build` -3. Now that all the dependencies are setup, start the Graph Node: +3. Maintenant que toutes les dépendances sont installées, démarrez Graph Node : ```sh cargo run -p graph-node --release -- \ @@ -334,28 +334,28 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### Commencer à utiliser Docker -#### Prerequisites +#### Prérequis -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Nœud Ethereum** - Par défaut, l'installation de docker compose utilisera mainnet : [http://host.docker.internal:8545](http://host.docker.internal:8545) pour se connecter au nœud Ethereum sur votre machine hôte.
Vous pouvez remplacer ce nom de réseau et cette URL en mettant à jour `docker-compose.yaml`. -#### Setup +#### Configuration -1. Clone Graph Node and navigate to the Docker directory: +1. Clonez Graph Node et accédez au répertoire Docker : ```sh git clone https://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml `using the included script: +2. Pour les utilisateurs Linux uniquement - Utilisez l'adresse IP de l'hôte au lieu de `host.docker.internal` dans le `docker-compose.yaml` en utilisant le script inclus : ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. Démarrez un Graph Node local qui se connectera à votre endpoint Ethereum : ```sh docker-compose up @@ -363,25 +363,25 @@ ### Indexer components -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: +Pour participer avec succès au réseau, il faut une surveillance et une interaction presque constantes. C'est pourquoi nous avons créé une suite d'applications Typescript pour faciliter la participation d'un Indexeur au réseau. Il y a trois Indexer components : -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. -- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. +- **Indexer CLI** - L'interface en ligne de commande pour la gestion de l'agent Indexer. Elle permet aux Indexeurs de gérer les modèles de coûts, les allocations manuelles, la file d'attente des actions et les règles d'indexation. -#### Getting started +#### Pour commencer -The Indexer agent and Indexer service should be co-located with your Graph Node infrastructure. There are many ways to set up virtual execution environments for your Indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://discord.gg/graphprotocol)! Remember to [stake in the protocol](/indexing/overview/#stake-in-the-protocol) before starting up your Indexer components! +L'Indexer agent et l'Indexer service doivent être situés au même endroit que votre infrastructure Graph Node. 
Il existe de nombreuses façons de mettre en place des environnements d'exécution virtuels pour vos Indexer components ; nous expliquerons ici comment les exécuter sur baremetal en utilisant les packages NPM ou les sources, ou via kubernetes et docker sur Google Cloud Kubernetes Engine. Si ces exemples de configuration ne s'appliquent pas à votre infrastructure, il y aura probablement un guide communautaire à consulter, venez nous dire bonjour sur [Discord](https://discord.gg/graphprotocol) ! N'oubliez pas de [staker sur le protocole](/indexing/overview/#stake-in-the-protocol) avant de démarrer vos Indexer components ! -#### From NPM packages +#### À partir des packages NPM ```sh npm install -g @graphprotocol/indexer-service npm install -g @graphprotocol/indexer-agent -# Indexer CLI is a plugin for Graph CLI, so both need to be installed: +# Indexer CLI est un plugin pour Graph CLI, les deux doivent donc être installés : npm install -g @graphprotocol/graph-cli npm install -g @graphprotocol/indexer-cli @@ -392,16 +392,16 @@ graph-indexer-service start ... graph-indexer-agent start ... # Indexer CLI -#Forward the port of your agent pod if using Kubernetes +# Transférez le port de votre pod agent si vous utilisez Kubernetes. kubectl port-forward pod/POD_ID 18000:8000 graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### Depuis la source ```sh -# From Repo root directory +# Depuis le répertoire racine du repo yarn # Indexer Service @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ...
``` -#### Using docker +#### Utilisation de Docker -- Pull images from the registry +- Extraire des images du registre ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +Ou construire des images localement à partir des sources ```sh # Indexer service @@ -442,22 +442,22 @@ docker build \ -t indexer-agent:latest \ ``` -- Run the components +- Exécuter les composants ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). +**NOTE** : Après le démarrage des conteneurs, le service Indexer doit être accessible à l'adresse [http://localhost:7600](http://localhost:7600) et l'agent Indexer doit exposer l'API de gestion de l'Indexeur à l'adresse [http://localhost:18000/](http://localhost:18000/). -#### Using K8s and Terraform +#### En utilisant K8s et Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section +Voir la section [Configuration de l'infrastructure du serveur à l'aide de Terraform sur Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) #### Usage -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). 
+> **NOTE** : Toutes les variables de configuration d'exécution peuvent être appliquées soit en tant que paramètres de la commande au démarrage, soit en utilisant des variables d'environnement du format `COMPONENT_NAME_VARIABLE_NAME` (ex. `INDEXER_AGENT_ETHEREUM`). #### Indexer agent @@ -516,56 +516,56 @@ graph-indexer-service start \ #### Indexer CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +L'Indexer CLI est un plugin pour [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible dans le terminal à `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### Gestion de l'Indexeur à l'aide de l'Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage -The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +L'**Indexer CLI** se connecte à l'Indexer agent, généralement par le biais d'une redirection de port, de sorte que le CLI n'a pas besoin d'être exécuté sur le même serveur ou cluster. Pour vous aider à démarrer, et pour vous donner un peu de contexte, nous allons décrire brièvement le CLI. -- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Se connecter à l'API de gestion de l'Indexeur. Généralement, la connexion au serveur est ouverte via une redirection de port, afin que le CLI puisse être facilement utilisé à distance. (Exemple : `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent. 
+- `graph indexer rules get [options] [ ...]` - Obtenir une ou plusieurs règles d'indexation en utilisant `all` comme `` pour obtenir toutes les règles, ou `global` pour obtenir les valeurs par défaut globales. Un argument supplémentaire `--merged` peut être utilisé pour spécifier que les règles spécifiques au déploiement sont fusionnées avec la règle globale. C'est ainsi qu'elles sont appliquées dans l'Indexer agent. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - Définir une ou plusieurs règles d'indexation. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - Arrête l'indexation d'un déploiement et met sa `decisionBasis` à never, de sorte qu'il ignorera ce déploiement lorsqu'il décidera des déploiements à indexer. -- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — Définit la `decisionBasis` pour un déploiement à `rules`, afin que l'agent d'Indexeur utilise les règles d'indexation pour décider d'indexer ou non ce déploiement. 
-- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Récupère une ou plusieurs actions en utilisant `all` ou laisse `action-id` vide pour obtenir toutes les actions. Un argument supplémentaire `--status` peut être utilisé pour afficher toutes les actions d'un certain statut. -- `graph indexer action queue allocate ` - Queue allocation action +- `graph indexer action queue allocate ` - Mettre en file d'attente une action d'allocation -- `graph indexer action queue reallocate ` - Queue reallocate action +- `graph indexer action queue reallocate ` - Mettre en file d'attente une action de réallocation -- `graph indexer action queue unallocate ` - Queue unallocate action +- `graph indexer action queue unallocate ` - Mettre en file d'attente une action de désallocation -- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator +- `graph indexer actions cancel [ ...]` - Annule toutes les actions de la file d'attente si id n'est pas spécifié, sinon annule un tableau d'id avec un espace comme séparateur -- `graph indexer actions approve [ ...]` - Approve multiple actions for execution +- `graph indexer actions approve [ ...]` - Approuver l'exécution de plusieurs actions -- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately +- `graph indexer actions execute approve` - Force le worker à exécuter immédiatement les actions approuvées -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument.
+Toutes les commandes qui affichent des règles dans la sortie peuvent choisir entre les formats de sortie supportés (`table`, `yaml`, et `json`) en utilisant l'argument `-output`. -#### Indexing rules +#### Règles d'indexation -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
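Pour illustrer la logique de seuils décrite ci-dessus, voici une esquisse minimale en TypeScript (purement hypothétique : ces noms de types et de fonctions ne proviennent pas du code réel de l'Indexer agent), où un déploiement est retenu dès qu'un seuil défini est franchi :

```typescript
// Esquisse hypothétique : les noms de champs reprennent ceux du texte,
// mais cette fonction n'existe pas telle quelle dans l'Indexer agent.
interface IndexingRuleThresholds {
  minStake?: number; // GRT
  minSignal?: number; // GRT
  maxSignal?: number; // GRT
  minAverageQueryFees?: number; // GRT
}

interface DeploymentStats {
  stake: number;
  signal: number;
  averageQueryFees: number;
}

// Un seuil laissé à `undefined` est ignoré ; il suffit qu'un seul seuil
// défini soit franchi pour que le déploiement soit retenu pour l'indexation.
function shouldIndex(rule: IndexingRuleThresholds, stats: DeploymentStats): boolean {
  if (rule.minStake !== undefined && stats.stake > rule.minStake) return true;
  if (rule.minSignal !== undefined && stats.signal > rule.minSignal) return true;
  if (rule.maxSignal !== undefined && stats.signal < rule.maxSignal) return true;
  if (rule.minAverageQueryFees !== undefined && stats.averageQueryFees > rule.minAverageQueryFees) return true;
  return false;
}

// Exemple du texte : une règle globale avec minStake = 5 GRT
console.log(shouldIndex({ minStake: 5 }, { stake: 6, signal: 0, averageQueryFees: 0 })); // true
```

Un champ non renseigné joue ici le rôle d'une valeur de seuil nulle et n'entre simplement pas dans l'évaluation.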
-Data model: +Modèle de données : ```graphql type IndexingRule { @@ -599,7 +599,7 @@ IndexingDecisionBasis { } ``` -Example usage of indexing rule: +Exemple d'utilisation de la règle d'indexation : ``` graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK @@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK ``` -#### Actions queue CLI +#### CLI de la file d'attente d'actions -The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue. +L'indexer-cli fournit un module `actions` pour travailler manuellement avec la file d'attente des actions. Il utilise l'**API Graphql** hébergée par le serveur de gestion de l'Indexeur pour interagir avec la file d'attente des actions. -The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like: +L'agent d'exécution des actions ne récupère les éléments de la file d'attente pour les exécuter que s'ils ont un `ActionStatus = approved`. Dans le chemin recommandé, les actions sont ajoutées à la file d'attente avec ActionStatus = queued, elles doivent donc être approuvées pour être exécutées onchain. Le flux général se présente comme suit : -- Action added to the queue by the 3rd party optimizer tool or indexer-cli user -- Indexer can use the `indexer-cli` to view all queued actions -- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input. -- The execution worker regularly polls the queue for approved actions.
It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`. -- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode. -- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken. +- Action ajoutée à la file d'attente par l'outil d'optimisation tiers ou l'utilisateur d'indexer-cli +- L'Indexeur peut utiliser `indexer-cli` pour visualiser toutes les actions en attente +- L'Indexeur (ou un autre logiciel) peut approuver ou annuler des actions dans la file d'attente en utilisant l'`indexer-cli`. Les commandes approve et cancel prennent en entrée un tableau d'identifiants d'actions. +- L'agent d'exécution interroge régulièrement la file d'attente pour les actions approuvées. Il récupère les actions `approved` de la file d'attente, tente de les exécuter, et met à jour les valeurs dans la base de données en fonction du statut de l'exécution, `success` ou `failed`. +- Si une action est réussie, le worker s'assurera qu'il y a une règle d'indexation présente qui indique à l'agent comment gérer l'allocation à l'avenir, ce qui est utile pour prendre des actions manuelles lorsque l'agent est en mode `auto` ou `oversight`. +- L'Indexeur peut surveiller la file d'attente des actions pour voir l'historique de l'exécution des actions et, si nécessaire, réapprouver et mettre à jour les éléments d'action dont l'exécution a échoué. La file d'attente des actions fournit un historique de toutes les actions mises en attente et exécutées.
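Le flux général ci-dessus peut s'esquisser ainsi (simplification hypothétique, qui ne reflète pas l'implémentation réelle du worker d'exécution ; seuls les statuts sont repris du texte) :

```typescript
// Esquisse hypothétique du worker d'exécution (statuts tirés du texte).
type ActionStatus = 'queued' | 'approved' | 'success' | 'failed';

interface Action {
  id: number;
  status: ActionStatus;
}

// Seules les actions `approved` sont exécutées ; les actions `queued`
// attendent une approbation explicite (par ex. via `graph indexer actions approve`).
function processQueue(queue: Action[], execute: (a: Action) => boolean): void {
  for (const action of queue) {
    if (action.status !== 'approved') continue;
    action.status = execute(action) ? 'success' : 'failed';
  }
}

const queue: Action[] = [
  { id: 1, status: 'queued' },
  { id: 2, status: 'approved' },
];
processQueue(queue, () => true);
// L'action 1 reste `queued`, l'action 2 passe à `success`.
```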
-Data model: +Modèle de données : ```graphql Type ActionInput { @@ -657,7 +657,7 @@ ActionType { } ``` -Example usage from source: +Exemple d'utilisation de la source : ```bash graph indexer actions get all @@ -677,141 +677,141 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +Notez que les types d'action pris en charge pour la gestion de l'allocation ont des exigences différentes en matière de données d'entrée : -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - - required action params: + - paramètres d'action requis : - deploymentID - amount -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `Unallocate` - ferme l'allocation, libérant le staking pour le réallouer ailleurs - - required action params: + - paramètres d'action requis : - allocationID - deploymentID - - optional action params: + - paramètres d'action optionnels : - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (force à utiliser le POI fourni même s'il ne correspond pas à ce que graph-node fournit) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - - required action params: + - paramètres d'action requis : - allocationID - deploymentID - amount - - optional action params: + - paramètres d'action optionnels : - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (force à utiliser le POI fourni même s'il ne correspond pas à ce que graph-node fournit) -#### Cost models +#### Modèles de coûts -Cost models provide dynamic pricing for queries based on market and query attributes. 
The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +Le langage Agora fournit un format flexible pour déclarer des modèles de coûts pour les requêtes. Un modèle de prix Agora est une séquence d'instructions qui s'exécutent dans l'ordre pour chaque requête de niveau supérieur dans une requête GraphQL. Pour chaque requête de niveau supérieur, la première instruction qui lui correspond détermine le prix de cette requête. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +Une déclaration est composée d'un prédicat, qui est utilisé pour faire correspondre les requêtes GraphQL, et d'une expression de coût qui, lorsqu'elle est évaluée, produit un coût en GRT décimal. Les valeurs de l'argument nommé d'une requête peuvent être capturées dans le prédicat et utilisées dans l'expression. 
Les globaux peuvent également être définis et remplacés par des espaces réservés dans une expression. -Example cost model: +Exemple de modèle de coût : ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# Cette instruction capture la valeur du saut, +# utilise une expression booléenne dans le prédicat pour faire correspondre les requêtes spécifiques qui utilisent `skip` +# et une expression de coût pour calculer le coût en fonction de la valeur `skip` et de la valeur globale SYSTEM_LOAD query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression. -# It uses a Global substituted into the expression to calculate cost +# Cette valeur par défaut correspondra à n'importe quelle expression GraphQL. +# Il utilise un Global substitué dans l'expression pour calculer le coût. default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +Exemple de calcul des coûts d'une requête à l'aide du modèle ci-dessus : -| Query | Price | +| Requête | Prix | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### Application du modèle de coût -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +Les modèles de coûts sont appliqués via l'Indexer CLI, qui les transmet à l'API de gestion de l'Indexer agent pour qu'ils soient stockés dans la base de données. 
L'Indexer Service les récupère ensuite et fournit les modèles de coûts aux passerelles chaque fois qu'elles le demandent. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## Interaction avec le réseau -### Stake in the protocol +### Staker dans le protocole -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. +Les premières étapes de la participation au réseau en tant qu'Indexeur consistent à approuver le protocole, à staker des fonds et (éventuellement) à créer une adresse d'opérateur pour les interactions quotidiennes avec le protocole. -> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> Note : Dans le cadre de ces instructions, Remix sera utilisé pour l'interaction avec les contrats, mais vous pouvez utiliser l'outil de votre choix ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), et [MyCrypto](https://www.mycrypto.com/account) sont d'autres outils connus). -Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network. +Une fois qu'un Indexeur a staké des GRT dans le protocole, les [composants de l'Indexeur](/indexing/overview/#indexer-components) peuvent être démarrés et commencer leurs interactions avec le réseau. -#### Approve tokens +#### Approuver les jetons -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Ouvrir l'[application Remix](https://remix.ethereum.org/) dans un navigateur -2.
In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. Dans l'explorateur de fichiers, créez un fichier nommé **GraphToken.abi** avec le [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Avec `GraphToken.abi` sélectionné et ouvert dans l'éditeur, passez à la section `Deploy and run transactions` dans l'interface Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Sous environnement, sélectionnez `Injected Web3` et sous `Account` sélectionnez votre adresse d'Indexeur. -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. Définir l'adresse du contrat GraphToken - Collez l'adresse du contrat GraphToken (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) à côté de `At Address` et cliquez sur le bouton `At Address` pour l'appliquer. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. Appeler la fonction `approve(spender, amount)` pour approuver le contrat de Staking. Remplissez `spender` avec l'adresse du contrat de staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) et `amount` avec les jetons à staker (en wei). -#### Stake tokens +#### Staker les jetons -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Ouvrir l'[application Remix](https://remix.ethereum.org/) dans un navigateur -2. 
In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. Dans l'explorateur de fichiers, créez un fichier nommé **Staking.abi** avec l'ABI de staking. -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Avec `Staking.abi` sélectionné et ouvert dans l'éditeur, passez à la section `Deploy and run transactions` dans l'interface Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Sous environnement, sélectionnez `Injected Web3` et sous `Account` sélectionnez votre adresse d'Indexeur. -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. Définir l'adresse du contrat de staking - Collez l'adresse du contrat de staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) à côté de `At Address` et cliquez sur le bouton `At address` pour l'appliquer. -6. Call `stake()` to stake GRT in the protocol. +6. Appeler `stake()` pour staker les GRT dans le protocole. -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. 
(Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks. ``` setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### Définition des paramètres de délégation -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. 
+La fonction `setDelegationParameters()` du [contrat de staking](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) est essentielle pour les Indexeurs, car elle leur permet de définir les paramètres qui définissent leurs interactions avec les Déléguateurs, en influençant le partage des récompenses et la capacité de délégation. -### How to set delegation parameters +### Comment définir les paramètres de délégation -To set the delegation parameters using Graph Explorer interface, follow these steps: +Pour définir les paramètres de délégation à l'aide de l'interface Graph Explorer, procédez comme suit : -1. Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. Connect the wallet you have as a signer. -4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. Naviguez jusqu'à [Graph Explorer](https://thegraph.com/explorer/). +2. Connectez votre portefeuille. Choisissez multisig (comme Gnosis Safe) et sélectionnez ensuite mainnet. Note : Vous devrez répéter ce processus pour Arbitrum One. +3. Connectez le portefeuille que vous avez en tant que signataire. +4. Accédez à la section 'Paramètres' et sélectionnez 'Paramètres de délégation'. Ces paramètres doivent être configurés de manière à obtenir une commission effective dans la fourchette souhaitée. En saisissant des valeurs dans les champs de saisie prévus à cet effet, l'interface calculera automatiquement la commission effective.
Ajustez ces valeurs si nécessaire pour obtenir le pourcentage de commission effective souhaité. +5. Soumettez la transaction au réseau. -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> Remarque : cette transaction devra être confirmée par les signataires du portefeuille multisig. -### The life of an allocation +### La durée de vie d'une allocation -After being created by an Indexer a healthy allocation goes through two states. +Après avoir été créée par un Indexeur, une allocation saine passe par deux états. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days).
When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). +- **Closed** - Un indexeur est libre de clôturer une allocation une fois qu'une époque s'est écoulée ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) ou son agent d'Indexeur clôturera automatiquement l'allocation après le **maxAllocationEpochs** (actuellement 28 jours). Lorsqu'une allocation est clôturée avec une preuve d'indexation (POI) valide, les récompenses d'indexation sont distribuées à l'Indexeur et à ses délégués ([en savoir plus](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/fr/indexing/supported-network-requirements.mdx b/website/src/pages/fr/indexing/supported-network-requirements.mdx index 799fd25b8136..c08c18d25e01 100644 --- a/website/src/pages/fr/indexing/supported-network-requirements.mdx +++ b/website/src/pages/fr/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Exigences du réseau pris en charge | --- | --- | --- | :-: | | Arbitrum | [Guide Baremetal ](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Guide Docker ](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | CPU 4+ coeurs
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_dernière mise à jour août 2023_ | ✅ | | Avalanche | [Guide Docker](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16Go+ RAM
>= 5 Tio NVMe SSD
_dernière mise à jour août 2023_ | ✅ | -| Base | [Guide Erigon Baremetal ](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Guide GETH Baremetal ](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Guide GETH Docker ](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | CPU 8+ cœurs
Debian 12/Ubuntu 22.04
16 Go RAM
>= 4.5To (NVME recommandé)
_Dernière mise à jour le 14 mai 2024_ | ✅ | +| Base | [Guide Erigon Baremetal ](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Guide GETH Baremetal ](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Guide GETH Docker ](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Guide Erigon Baremetal ](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | CPU 8 cœurs / 16 threads
Ubuntu 22.04
>=32 Go RAM
>= 14 Tio NVMe SSD
_Dernière mise à jour le 22 juin 2024_ | ✅ | | Celo | [Guide Docker](https://docs.infradao.com/archive-nodes-101/celo/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16Go+ RAM
>= 2 Tio NVMe SSD
_Dernière mise à jour en août 2023_ | ✅ | | Ethereum | [Guide Docker](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Vitesse d'horloge supérieure par rapport au nombre de cœurs
Ubuntu 22.04
16 Go+ RAM
>=3 To (NVMe recommandé)
_dernière mise à jour août 2023_ | ✅ | diff --git a/website/src/pages/fr/indexing/tap.mdx b/website/src/pages/fr/indexing/tap.mdx index b378f70212be..68a0b79a2e6f 100644 --- a/website/src/pages/fr/indexing/tap.mdx +++ b/website/src/pages/fr/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: Guide de migration TAP +title: GraphTally Guide --- -Découvrez le nouveau système de paiement de The Graph, le **Timeline Aggregation Protocol, TAP**. Ce système permet des microtransactions rapides et efficaces avec une confiance minimale. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Aperçu -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) est un remplacement direct du système de paiement Scalar actuellement en place. Il offre les fonctionnalités clés suivantes : +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Gère efficacement les micropaiements. - Ajoute une couche de consolidations aux transactions et aux coûts onchain. - Permet aux Indexeurs de contrôler les recettes et les paiements, garantissant ainsi le paiement des requêtes. - Il permet des passerelles décentralisées, sans confiance, et améliore les performances du service d'indexation pour les expéditeurs multiples. -## Spécificités⁠ +### Spécificités⁠ -Le TAP permet à un expéditeur d'effectuer plusieurs paiements à un destinataire, **TAP Receipts**, qui regroupe ces paiements en un seul paiement, un **Receipt Aggregate Voucher**, également connu sous le nom de **RAV**. Ce paiement regroupé peut ensuite être vérifié sur la blockchain, ce qui réduit le nombre de transactions et simplifie le processus de paiement. 
+GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. Pour chaque requête, la passerelle vous enverra un `reçu signé` qui sera stocké dans votre base de données. Ensuite, ces requêtes seront agrégées par un `tap-agent` par le biais d'une demande. Vous recevrez ensuite un RAV. Vous pouvez mettre à jour un RAV en l'envoyant avec des reçus plus récents, ce qui générera un nouveau RAV avec une valeur plus élevée. @@ -59,14 +59,14 @@ Tant que vous exécutez `tap-agent` et `indexer-agent`, tout sera exécuté auto | Signataires | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregateur | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Exigences +### Prérequis -En plus des conditions typiques pour faire fonctionner un Indexeur, vous aurez besoin d'un Endpoint `tap-escrow-subgraph` pour interroger les mises à jour de TAP. Vous pouvez utiliser The Graph Network pour interroger ou vous héberger vous-même sur votre `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. 
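The receipt-to-RAV flow described above can be sketched as follows — a loose illustration only, where `Receipt`, `RAV`, and `aggregate` are made-up names rather than the real `tap-agent` types (integer units are used deliberately, echoing the config's advice to avoid decimal rounding errors):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Receipt:
    """One signed receipt from the gateway for a single query (signature omitted)."""
    sender: str
    value: int  # integer base units, to avoid rounding errors

@dataclass
class RAV:
    """Receipt Aggregate Voucher: one aggregated payment, verifiable onchain."""
    sender: str
    total: int

def aggregate(receipts: list[Receipt], previous: Optional[RAV] = None) -> RAV:
    """Fold pending receipts (plus an optional earlier RAV) into a new RAV
    with a higher value, mirroring how tap-agent requests aggregation."""
    sender = previous.sender if previous else receipts[0].sender
    assert all(r.sender == sender for r in receipts), "one RAV per sender"
    base = previous.total if previous else 0
    return RAV(sender, base + sum(r.value for r in receipts))

# Two receipts become one RAV; newer receipts roll into a higher-valued RAV.
rav = aggregate([Receipt("0xSender", 10), Receipt("0xSender", 20)])
rav = aggregate([Receipt("0xSender", 30)], previous=rav)  # total is now 60
```

The escrow Subgraph endpoints listed below are what the indexer stack queries for the onchain escrow and RAV state backing this flow.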
-- [Subgraph Graph TAP Arbitrum Sepolia (pour le testnet The Graph )](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Subgraph Graph TAP Arbitrum One (Pour le mainnet The Graph )](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note : `indexer-agent` ne gère pas actuellement l'indexation de ce subgraph comme il le fait pour le déploiement du subgraph réseau. Par conséquent, vous devez l'indexer manuellement. +> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Guide De Migration @@ -79,7 +79,7 @@ La version requise du logiciel peut être trouvée [ici](https://github.com/grap 1. **Agent d'indexeur** - Suivez le [même processus](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Donnez le nouvel argument `--tap-subgraph-endpoint` pour activer les nouveaux chemins de code TAP et permettre l'échange de RAVs TAP. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -99,72 +99,72 @@ La version requise du logiciel peut être trouvée [ici](https://github.com/grap Pour une configuration minimale, utilisez le modèle suivant : ```bash -# Vous devrez modifier *toutes* les valeurs ci-dessous pour qu'elles correspondent à votre configuration. +# You will have to change *all* the values below to match your setup. 
# -# Certaines des configurations ci-dessous sont des valeurs globales de graph network, que vous pouvez trouver ici : +# Some of the config below are global graph network values, which you can find here: # # -# Astuce de pro : si vous devez charger certaines valeurs de l'environnement dans cette configuration, vous -# pouvez les écraser avec des variables d'environnement. Par exemple, ce qui suit peut être remplacé -# par [PREFIX]_DATABASE_POSTGRESURL, où PREFIX peut être `INDEXER_SERVICE` ou `TAP_AGENT` : +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: # # [database] # postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" [indexer] -indexer_address = "0x111111111111111111111111111111111111111111" +indexer_address = "0x1111111111111111111111111111111111111111" operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" [database] -# L'URL de la base de données Postgres utilisée pour les composants de l'Indexeur. La même base de données -# qui est utilisée par `indexer-agent`. Il est prévu que `indexer-agent` crée -# les tables nécessaires. +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. postgres_url = "postgres://postgres@postgres:5432/postgres" [graph_node] -# URL vers l'endpoint de requête de votre graph-node +# URL to your graph-node's query endpoint query_url = "" -# URL vers l'endpoint d'état de votre graph-node +# URL to your graph-node's status endpoint status_url = "" [subgraphs.network] -# URL de requête pour le subgraph Graph Network. +# Query URL for the Graph Network Subgraph. 
query_url = "" -# Facultatif, déploiement à rechercher dans le `graph-node` local, s'il est indexé localement. -# L'indexation locale du subgraph est recommandée. -# REMARQUE : utilisez uniquement `query_url` ou `deployment_id` -deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the Subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# URL de requête pour le subgraph Escrow. +# Query URL for the Escrow Subgraph. query_url = "" -# Facultatif, déploiement à rechercher dans le `graph-node` local, s'il est indexé localement. -# Il est recommandé d'indexer localement le subgraph. -# REMARQUE : utilisez uniquement `query_url` ou `deployment_id` -deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the Subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [blockchain] -# Le chain ID du réseau sur lequel The Graph Network s'exécute +# The chain ID of the network that the graph network is running on chain_id = 1337 -# Adresse du contrat du vérificateur de bon de réception agrégé (RAV) de TAP. -receives_verifier_address = "0x222222222222222222222222222222222222222222222" +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. 
+receipts_verifier_address = "0x2222222222222222222222222222222222222222" -############################################ -# Configurations spécifiques à tap-agent # -########################################## +######################################## +# Specific configurations to tap-agent # +######################################## [tap] -# Il s'agit du montant des frais que vous êtes prêt à risquer à un moment donné. Par exemple, -# si l'expéditeur cesse de fournir des RAV pendant suffisamment longtemps et que les frais dépassent ce -# montant, le service d'indexation cessera d'accepter les requêtes de l'expéditeur -# jusqu'à ce que les frais soient agrégés. -# REMARQUE : utilisez des chaînes de caractère pour les valeurs décimales afin d'éviter les erreurs d'arrondi -# p. ex. : +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: # max_amount_willing_to_lose_grt = "0.1" max_amount_willing_to_lose_grt = 20 [tap.sender_aggregator_endpoints] -# Clé-valeur de tous les expéditeurs et de leurs endpoint d'agrégation -# Celle-ci ci-dessous concerne par exemple la passerelle de testnet E&N. +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. 
0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" ``` diff --git a/website/src/pages/fr/indexing/tooling/graph-node.mdx b/website/src/pages/fr/indexing/tooling/graph-node.mdx index 6476aad5aa73..ea35e2fb9680 100644 --- a/website/src/pages/fr/indexing/tooling/graph-node.mdx +++ b/website/src/pages/fr/indexing/tooling/graph-node.mdx @@ -2,39 +2,39 @@ title: Nœud de The Graph --- -Graph Node est le composant qui indexe les subgraphs et rend les données résultantes disponibles pour interrogation via une API GraphQL. En tant que tel, il est au cœur de la pile de l’indexeur, et le bon fonctionnement de Graph Node est crucial pour exécuter un indexeur réussi. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. Ceci fournit un aperçu contextuel de Graph Node et de certaines des options les plus avancées disponibles pour les Indexeurs. Une documentation et des instructions détaillées peuvent être trouvées dans le dépôt [Graph Node ](https://github.com/graphprotocol/graph-node). ## Nœud de The Graph -[Graph Node](https://github.com/graphprotocol/graph-node) est l'implémentation de référence pour l'indexation des subgraphs sur The Graph Network, la connexion aux clients de la blockchain, l'indexation des subgraphs et la mise à disposition des données indexées pour les requêtes. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (et l'ensemble de la pile de l’indexeur) peut être exécuté sur serveur dédié (bare metal) ou dans un environnement cloud. Cette souplesse du composant central d'indexation est essentielle à la solidité du protocole The Graph. 
De même, Graph Node peut être [compilé à partir du code source](https://github.com/graphprotocol/graph-node), ou les Indexeurs peuvent utiliser l'une des [images Docker fournies](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -Le magasin principal du nœud de graph, c'est là que les données des sous-graphes sont stockées, ainsi que les métadonnées sur les subgraphs et les données réseau indépendantes des subgraphs telles que le cache de blocs et le cache eth_call. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Clients réseau Pour indexer un réseau, Graph Node doit avoir accès à un client réseau via une API JSON-RPC compatible avec EVM. Cette RPC peut se connecter à un seul client ou à une configuration plus complexe qui équilibre la charge entre plusieurs clients. -Alors que certains subgraphs peuvent ne nécessiter qu'un nœud complet, d'autres peuvent avoir des caractéristiques d'indexation qui nécessitent des fonctionnalités RPC supplémentaires. En particulier, les subgraphs qui font des `eth_calls` dans le cadre de l'indexation nécessiteront un noeud d'archive qui supporte [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), et les subgraphs avec des `callHandlers`, ou des `blockHandlers` avec un filtre `call`, nécessitent le support de `trace_filter` ([voir la documentation du module trace ici](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. 
Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). \*\*Network Firehoses : un Firehose est un service gRPC fournissant un flux de blocs ordonné, mais compatible avec les fork, développé par les principaux développeurs de The Graph pour mieux prendre en charge une indexation performante à l'échelle. Il ne s'agit pas actuellement d'une exigence de l'Indexeur, mais les Indexeurs sont encouragés à se familiariser avec la technologie, en avance sur la prise en charge complète du réseau. Pour en savoir plus sur le Firehose [ici](https://firehose.streamingfast.io/). ### Nœuds IPFS -Les métadonnées de déploiement de subgraphs sont stockées sur le réseau IPFS. The Graph Node accède principalement au noed IPFS pendant le déploiement du subgraph pour récupérer le manifeste du subgraph et tous les fichiers liés. Les indexeurs de réseau n'ont pas besoin d'héberger leur propre noed IPFS. Un noed IPFS pour le réseau est hébergé sur https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Serveur de métriques Prometheus Pour activer la surveillance et la création de rapports, Graph Node peut éventuellement enregistrer les métriques sur un serveur de métriques Prometheus. 
-### Getting started from source
+### Démarrer à partir des sources

-#### Install prerequisites
+#### Installer les prérequis

- **Rust**

@@ -48,9 +48,9 @@ Pour activer la surveillance et la création de rapports, Graph Node peut évent
sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
```

-#### Setup
+#### Configuration

-1. Start a PostgreSQL database server
+1. Démarrer un serveur de base de données PostgreSQL

```sh
initdb -D .postgres
@@ -60,7 +60,7 @@ createdb graph-node

2. Clonez le repo [Graph Node](https://github.com/graphprotocol/graph-node) et compilez les sources en lançant `cargo build`

-3. Now that all the dependencies are setup, start the Graph Node:
+3. Maintenant que toutes les dépendances sont installées, démarrez Graph Node :

```sh
cargo run -p graph-node --release -- \
@@ -77,19 +77,19 @@ Un exemple complet de configuration Kubernetes se trouve dans le [dépôt d'Inde

Lorsqu'il est en cours d'exécution, Graph Node expose les ports suivants :

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| Port | Objectif | Routes | Argument CLI | Variable d'Environnement |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | GraphQL HTTP server
(pour les requêtes de Subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(pour les abonnements aux Subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(pour gérer les déploiements) | / | \--admin-port | - | +| 8030 | API du statut de l'indexation des subgraphs | /graphql | \--index-node-port | - | +| 8040 | Métriques Prometheus | /metrics | \--metrics-port | - | > **Important** : Soyez prudent lorsque vous exposez des ports publiquement - les **ports d'administration** doivent être verrouillés. Ceci inclut l'endpoint JSON-RPC de Graph Node. ## Configuration avancée du nœud graph -Dans sa forme la plus simple, Graph Node peut être utilisé avec une seule instance de Graph Node, une seule base de données PostgreSQL, un nœud IPFS et les clients réseau selon les besoins des subgraphs à indexer. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. Cette configuration peut être mise à l'échelle horizontalement, en ajoutant plusieurs Graph Nodes, et plusieurs bases de données pour supporter ces Graph Nodes. Les utilisateurs avancés voudront peut-être profiter de certaines des capacités de mise à l'échelle horizontale de Graph Node, ainsi que de certaines des options de configuration les plus avancées, via le fichier `config.toml` et les variables d'environnement de Graph Node. @@ -114,13 +114,13 @@ La documentation complète de `config.toml` peut être trouvée dans la [documen #### Multiple Graph Nodes -L'indexation Graph Node peut être mise à l'échelle horizontalement, en exécutant plusieurs instances de Graph Node pour répartir l'indexation et l'interrogation sur différents nœuds. 
Cela peut être fait simplement en exécutant des Graph Nodes configurés avec un `node_id` différent au démarrage (par exemple dans le fichier Docker Compose), qui peut ensuite être utilisé dans le fichier `config.toml` pour spécifier les [nœuds de requête dédiés](#dedicated-query-nodes), les [ingesteurs de blocs](#dedicated-block-ingestion) et en répartissant les subgraphs sur les nœuds avec des [règles de déploiement](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Notez que plusieurs nœuds de graph peuvent tous être configurés pour utiliser la même base de données, qui elle-même peut être mise à l'échelle horizontalement via le partitionnement. #### Règles de déploiement -Étant donné plusieurs Graph Node, il est nécessaire de gérer le déploiement de nouveaux subgraphs afin que le même subgraph ne soit pas indexé par deux nœuds différents, ce qui entraînerait des collisions. Cela peut être fait en utilisant des règles de déploiement, qui peuvent également spécifier dans quel `shard` les données d'un subgraph doivent être stockées, si le partitionnement de base de données est utilisé. Les règles de déploiement peuvent correspondre au nom du subgraph et au réseau que le déploiement indexe afin de prendre une décision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. 
This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Exemple de configuration de règle de déploiement : @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Tout nœud dont --node-id correspond à l'expression régulière sera configuré Pour la plupart des cas d'utilisation, une seule base de données Postgres suffit pour prendre en charge une instance de nœud graph. Lorsqu'une instance de nœud graph dépasse une seule base de données Postgres, il est possible de diviser le stockage des données de nœud graph sur plusieurs bases de données Postgres. Toutes les bases de données forment ensemble le magasin de l’instance de nœud graph. Chaque base de données individuelle est appelée une partition. -Les fragments peuvent être utilisés pour diviser les déploiements de subgraph sur plusieurs bases de données et peuvent également être utilisés pour faire intervenir des réplicas afin de répartir la charge de requête sur plusieurs bases de données. Cela inclut la configuration du nombre de connexions de base de données disponibles que chaque `graph-node` doit conserver dans son pool de connexions pour chaque base de données, ce qui devient de plus en plus important à mesure que davantage de subgraph sont indexés. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. 
This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Le partage devient utile lorsque votre base de données existante ne peut pas suivre la charge que Graph Node lui impose et lorsqu'il n'est plus possible d'augmenter la taille de la base de données. -> Il est généralement préférable de créer une base de données unique aussi grande que possible avant de commencer avec des fragments. Une exception est lorsque le trafic des requêtes est réparti de manière très inégale entre les subgraphs ; dans ces situations, cela peut être considérablement utile si les subgraphs à volume élevé sont conservés dans une partition et tout le reste dans une autre, car cette configuration rend plus probable que les données des subgraphs à volume élevé restent dans le cache interne de la base de données et ne le font pas. sont remplacés par des données qui ne sont pas autant nécessaires à partir de subgraphs à faible volume. +> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. En termes de configuration des connexions, commencez par max_connections dans postgresql.conf défini sur 400 (ou peut-être même 200) et regardez les métriques store_connection_wait_time_ms et store_connection_checkout_count Prometheus. 
Des temps d'attente notables (tout ce qui dépasse 5 ms) indiquent qu'il y a trop peu de connexions disponibles ; des temps d'attente élevés seront également dus au fait que la base de données est très occupée (comme une charge CPU élevée). Cependant, si la base de données semble par ailleurs stable, des temps d'attente élevés indiquent la nécessité d'augmenter le nombre de connexions. Dans la configuration, le nombre de connexions que chaque instance de nœud graph peut utiliser constitue une limite supérieure, et Graph Node ne maintiendra pas les connexions ouvertes s'il n'en a pas besoin. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Prise en charge de plusieurs réseaux -The Graph Protocol augmente le nombre de réseaux pris en charge pour l'indexation des récompenses, et il existe de nombreux subgraphs indexant des réseaux non pris en charge. Un indexeur peut choisir de les indexer malgré tout. Le fichier `config.toml` permet une configuration riche et flexible : +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Plusieurs réseaux - Plusieurs fournisseurs par réseau (cela peut permettre de répartir la charge entre les fournisseurs, et peut également permettre la configuration de nœuds complets ainsi que de nœuds d'archives, Graph Node préférant les fournisseurs moins chers si une charge de travail donnée le permet). @@ -225,11 +225,11 @@ Les utilisateurs qui utilisent une configuration d'indexation à grande échelle ### Gestion du nœud de graph -Étant donné un nœud de graph en cours d'exécution (ou des nœuds de graph !), le défi consiste alors à gérer les subgraphs déployés sur ces nœuds. Graph Node propose une gamme d'outils pour vous aider à gérer les subgraphs. 
+Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Journal de bord -Les logs de Graph Node peuvent fournir des informations utiles pour le débogage et l'optimisation de Graph Node et de subgraphs spécifiques. Graph Node supporte différents niveaux de logs via la variable d'environnement `GRAPH_LOG`, avec les niveaux suivants : error, warn, info, debug ou trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. De plus, fixer `GRAPH_LOG_QUERY_TIMING` à `gql` fournit plus de détails sur la façon dont les requêtes GraphQL s'exécutent (bien que cela génère un grand volume de logs). @@ -247,11 +247,11 @@ La commande graphman est incluse dans les conteneurs officiels, et vous pouvez d La documentation complète des commandes `graphman` est disponible dans le dépôt Graph Node. Voir [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) dans le dépôt Graph Node `/docs` -### Travailler avec des subgraphs +### Working with Subgraphs #### API d'état d'indexation -Disponible sur le port 8030/graphql par défaut, l'API d'état d'indexation expose une gamme de méthodes pour vérifier l'état d'indexation de différents subgraphs, vérifier les preuves d'indexation, inspecter les fonctionnalités des subgraphs et bien plus encore. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. Le schéma complet est disponible [ici](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). 
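As a minimal sketch (assuming the `indexingStatuses` query with `subgraph`, `synced`, `health`, and `fatalError` fields from the schema linked above), a status check against a local index node could look like:

```python
import json
import urllib.request

# GraphQL query against the indexing status API; field names assumed
# from the index-node schema referenced above.
STATUS_QUERY = """
{
  indexingStatuses {
    subgraph
    synced
    health
    fatalError { message }
  }
}
"""

def build_status_request(endpoint: str = "http://localhost:8030/graphql") -> urllib.request.Request:
    """Build a GraphQL POST against the indexing status API (port 8030 by default)."""
    body = json.dumps({"query": STATUS_QUERY}).encode()
    return urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )

def check_statuses(endpoint: str = "http://localhost:8030/graphql") -> list[dict]:
    """Return one status entry per deployed Subgraph on the index node."""
    with urllib.request.urlopen(build_status_request(endpoint)) as resp:
        return json.loads(resp.read())["data"]["indexingStatuses"]
```

The same endpoint also exposes proof-of-indexing and feature-inspection queries; consult the linked schema for the full surface.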
@@ -263,7 +263,7 @@ Le processus d'indexation comporte trois parties distinctes : - Traiter les événements dans l'ordre avec les gestionnaires appropriés (cela peut impliquer d'appeler la chaîne pour connaître l'état et de récupérer les données du magasin) - Écriture des données résultantes dans le magasin -Ces étapes sont pipeline (c’est-à-dire qu’elles peuvent être exécutées en parallèle), mais elles dépendent les unes des autres. Lorsque les subgraphs sont lents à indexer, la cause sous-jacente dépendra du subgraph spécifique. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Causes courantes de lenteur d’indexation : @@ -276,24 +276,24 @@ Causes courantes de lenteur d’indexation : - Le prestataire lui-même prend du retard sur la tête de la chaîne - Lenteur dans la récupération des nouvelles recettes en tête de chaîne auprès du prestataire -Les métriques d’indexation de subgraphs peuvent aider à diagnostiquer la cause première de la lenteur de l’indexation. Dans certains cas, le problème réside dans le subgraph lui-même, mais dans d'autres, des fournisseurs de réseau améliorés, une réduction des conflits de base de données et d'autres améliorations de configuration peuvent améliorer considérablement les performances d'indexation. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Subgraphs ayant échoué +#### Failed Subgraphs -Lors de l'indexation, les subgraphs peuvent échouer s'ils rencontrent des données inattendues, si certains composants ne fonctionnent pas comme prévu ou s'il y a un bogue dans les gestionnaires d'événements ou la configuration. 
Il existe deux types généraux de pannes : +During indexing, Subgraphs might fail if they encounter unexpected data, a component that is not working as expected, or a bug in the event handlers or configuration. There are two general types of failure: - Échecs déterministes : ce sont des échecs qui ne seront pas résolus par de nouvelles tentatives - Échecs non déterministes : ils peuvent être dus à des problèmes avec le fournisseur ou à une erreur inattendue de Graph Node. Lorsqu'un échec non déterministe se produit, Graph Node réessaiera les gestionnaires défaillants, en reculant au fil du temps. -Dans certains cas, un échec peut être résolu par l'indexeur (par exemple, si l'erreur est due au fait de ne pas disposer du bon type de fournisseur, l'ajout du fournisseur requis permettra de poursuivre l'indexation). Cependant, dans d'autres cas, une modification du code du subgraph est requise. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Les défaillances déterministes sont considérés comme "final" (définitifs), avec une preuve d'indexation générée pour le bloc défaillant, alors que les défaillances non déterministes ne le sont pas, car le subgraph pourait "se rétablir " et poursuivre l'indexation. Dans certains cas, l'étiquette non déterministe est incorrecte et le subgraph ne surmontera jamais l'erreur ; de tels défaillances doivent être signalés en tant que problèmes sur le dépôt de Graph Node. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. 
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Bloquer et appeler le cache -Graph Node met en cache certaines données dans le store afin d'éviter de les récupérer auprès du fournisseur. Les blocs sont mis en cache, ainsi que les résultats des `eth_calls` (ces derniers étant mis en cache à partir d'un bloc spécifique). Cette mise en cache peut augmenter considérablement la vitesse d'indexation lors de la « resynchronisation » d'un subgraph légèrement modifié. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -Cependant, dans certains cas, si un nœud Ethereum a fourni des données incorrectes pendant une certaine période, cela peut se retrouver dans le cache, conduisant à des données incorrectes ou à des subgraphs défaillants. Dans ce cas, les Indexeurs peuvent utiliser `graphman` pour effacer le cache empoisonné, puis rembobiner les subgraph affectés, ce qui permettra de récupérer des données fraîches auprès du fournisseur (que l'on espère sain). +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
Si une incohérence du cache de blocs est suspectée, telle qu'un événement de réception de transmission manquant : @@ -304,7 +304,7 @@ Si une incohérence du cache de blocs est suspectée, telle qu'un événement de #### Interroger les problèmes et les erreurs -Une fois qu'un subgraph a été indexé, les indexeurs peuvent s'attendre à traiter les requêtes via le point de terminaison de requête dédié du subgraph. Si l'indexeur espère traiter un volume de requêtes important, un nœud de requête dédié est recommandé, et en cas de volumes de requêtes très élevés, les indexeurs peuvent souhaiter configurer des fragments de réplique afin que les requêtes n'aient pas d'impact sur le processus d'indexation. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. Cependant, même avec un nœud de requête et des répliques dédiés, certaines requêtes peuvent prendre beaucoup de temps à exécuter et, dans certains cas, augmenter l'utilisation de la mémoire et avoir un impact négatif sur le temps de requête des autres utilisateurs. @@ -316,7 +316,7 @@ Graph Node met en cache les requêtes GraphQL par défaut, ce qui peut réduire ##### Analyser les requêtes -Les requêtes problématiques apparaissent le plus souvent de deux manières. Dans certains cas, les utilisateurs eux-mêmes signalent qu'une requête donnée est lente. Dans ce cas, le défi consiste à diagnostiquer la raison de la lenteur, qu'il s'agisse d'un problème général ou spécifique à ce subgraph ou à cette requête. Et puis bien sûr de le résoudre, si possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. 
In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. Dans d'autres cas, le déclencheur peut être une utilisation élevée de la mémoire sur un nœud de requête, auquel cas le défi consiste d'abord à identifier la requête à l'origine du problème. @@ -336,10 +336,10 @@ En général, les tables où le nombre d'entités distinctes est inférieur à 1 Une fois qu'une table a été déterminée comme étant de type compte, l'exécution de `graphman stats account-like .
` activera l'optimisation de type compte pour les requêtes sur cette table. L'optimisation peut être désactivée à nouveau avec `graphman stats account-like --clear .
` Il faut compter jusqu'à 5 minutes pour que les noeuds de requêtes remarquent que l'optimisation a été activée ou désactivée. Après avoir activé l'optimisation, il est nécessaire de vérifier que le changement ne ralentit pas les requêtes pour cette table. Si vous avez configuré Grafana pour surveiller Postgres, les requêtes lentes apparaîtront dans `pg_stat_activity` en grand nombre, prenant plusieurs secondes. Dans ce cas, l'optimisation doit être désactivée à nouveau. -Pour les subgraphs de type Uniswap, les tables `pair` et `token` sont les meilleurs candidats pour cette optimisation, et peuvent avoir un effet considérable sur la charge de la base de données. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Supprimer des subgraphs +#### Removing Subgraphs > Il s'agit d'une nouvelle fonctionnalité qui sera disponible dans Graph Node 0.29.x -A un moment donné, un Indexeur peut vouloir supprimer un subgraph donné. Cela peut être facilement fait via `graphman drop`, qui supprime un déploiement et toutes ses données indexées. Le déploiement peut être spécifié soit comme un nom de subgraph, soit comme un hash IPFS `Qm..`, ou alors comme le namespace `sgdNN` de la base de données . Une documentation plus détaillée est disponible [ici](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). 
diff --git a/website/src/pages/fr/indexing/tooling/graphcast.mdx b/website/src/pages/fr/indexing/tooling/graphcast.mdx index 5edccfb10588..e24e9904bdd8 100644 --- a/website/src/pages/fr/indexing/tooling/graphcast.mdx +++ b/website/src/pages/fr/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Actuellement, le coût de diffusion d’informations vers d’autres participant Le SDK Graphcast (Software Development Kit) permet aux développeurs de créer des radios, qui sont des applications basées sur les potins que les indexeurs peuvent exécuter dans un but donné. Nous avons également l'intention de créer quelques radios (ou de fournir une assistance à d'autres développeurs/équipes qui souhaitent créer des radios) pour les cas d'utilisation suivants : -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Réalisation d'enchères et coordination pour les subgraphs, les substreams, et les données Firehose de synchronisation de distorsion provenant d'autres indexeurs. -- Auto-rapport sur l'analyse des requêtes actives, y compris les volumes de requêtes de subgraphs, les volumes de frais, etc. -- Auto-rapport sur l'analyse de l'indexation, y compris le temps d'indexation des subgraphs, les coûts des gaz de traitement, les erreurs d'indexation rencontrées, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Auto-déclaration sur les informations de la pile, y compris la version du graph-node, la version Postgres, la version du client Ethereum, etc. 
### En savoir plus diff --git a/website/src/pages/fr/resources/benefits.mdx b/website/src/pages/fr/resources/benefits.mdx index b39ea9fa5ca4..2e389d78427c 100644 --- a/website/src/pages/fr/resources/benefits.mdx +++ b/website/src/pages/fr/resources/benefits.mdx @@ -76,9 +76,9 @@ Les coûts d'interrogation peuvent varier ; le coût indiqué est la moyenne au Reflète le coût pour le consommateur de données. Les frais de requête sont toujours payés aux Indexeurs pour les requêtes du Plan Gratuit. -Les coûts estimés concernent uniquement les subgraphs sur le Mainnet d'Ethereum — les coûts sont encore plus élevés lorsqu'un `graph-node` est auto-hébergé sur d'autres réseaux. Certains utilisateurs peuvent avoir besoin de mettre à jour leur subgraph vers une nouvelle version. En raison des frais de gas sur Ethereum, une mise à jour coûte environ 50 $ au moment de la rédaction. Notez que les frais de gas sur [Arbitrum](/archived/arbitrum/arbitrum-faq/) sont nettement inférieurs à ceux du Mainnet d'Ethereum. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self-hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Émettre un signal sur un subgraph est un cout net, nul optionnel et unique (par exemple, 1 000 $ de signal peuvent être conservés sur un subgraph, puis retirés - avec la possibilité de gagner des revenus au cours du processus). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). 
## Pas de Coûts d’Installation & Plus grande Efficacité Opérationnelle @@ -90,4 +90,4 @@ Le réseau décentralisé de The Graph offre aux utilisateurs une redondance gé En résumé : The Graph Network est moins cher, plus facile à utiliser et produit des résultats supérieurs à ceux obtenus par l'exécution locale d'un `graph-node`. -Commencez à utiliser The Graph Network dès aujourd’hui et découvrez comment [publier votre subgraph sur le réseau décentralisé de The Graph](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/fr/resources/glossary.mdx b/website/src/pages/fr/resources/glossary.mdx index cfaa0beb4c78..f874e54e73cd 100644 --- a/website/src/pages/fr/resources/glossary.mdx +++ b/website/src/pages/fr/resources/glossary.mdx @@ -4,80 +4,80 @@ title: Glossaire - **The Graph** : Un protocole décentralisé pour l'indexation et l'interrogation des données. -- **Query** : Une requête de données. Dans le cas de The Graph, une requête est une demande de données provenant d'un subgraph à laquelle répondra un Indexeur. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL** : Un langage de requête pour les API et un moteur d'exécution pour répondre à ces requêtes avec vos données existantes. The Graph utilise GraphQL pour interroger les subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint** : Une URL qui peut être utilisée pour interroger un subgraph. L'endpoint de test pour Subgraph Studio est `https://api.studio.thegraph.com/query///` et l'endpoint pour Graph Explorer est `https://gateway.thegraph.com/api//subgraphs/id/`. 
L'endpoint Graph Explorer est utilisé pour interroger les subgraphs sur le réseau décentralisé de The Graph. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph** : Une API ouverte qui extrait des données d'une blockchain, les traite et les stocke de manière à ce qu'elles puissent être facilement interrogées via GraphQL. Les développeurs peuvent créer, déployer et publier des subgraphs sur The Graph Network. Une fois indexé, le subgraph peut être interrogé par n'importe qui. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexeur** : Participants au réseau qui gèrent des nœuds d'indexation pour indexer les données des blockchains et répondre aux requêtes GraphQL. - **Flux de revenus pour les Indexeurs** : Les Indexeurs sont récompensés en GRT par deux éléments : les remises sur les frais de requête et les récompenses pour l'indexation. - 1. **Remboursements de frais de requête** : Paiements effectués par les consommateurs de subgraphs pour avoir servi des requêtes sur le réseau. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Récompenses d'indexation** : Les récompenses que les Indexeurs reçoivent pour l'indexation des subgraphs. Les récompenses d'indexation sont générées par une nouvelle émission de 3 % de GRT par an. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. 
- **Indexer's Self-Stake** : Le montant de GRT que les Indexeurs stakent pour participer au réseau décentralisé. Le minimum est de 100 000 GRT, et il n'y a pas de limite supérieure. - **Delegation Capacity** : C'est le montant maximum de GRT qu'un Indexeur peut accepter de la part des Déléguateurs. Les Indexeurs ne peuvent accepter que jusqu'à 16 fois leur propre Indexer Self-Stake, et toute délégation supplémentaire entraîne une dilution des récompenses. Par exemple, si un Indexeur a une Indexer Self-Stake de 1M GRT, sa capacité de délégation est de 16M. Cependant, les indexeurs peuvent augmenter leur capacité de délégation en augmentant leur Indexer Self-Stake. -- **Upgrade Indexer** : Un Indexeur conçu pour servir de solution de repli pour les requêtes de subgraphs qui ne sont pas traitées par d'autres Indexeurs sur le réseau. L'upgrade Indexer n'est pas compétitif par rapport aux autres Indexeurs. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**(Déléguateurs) : Participants au réseau qui possèdent des GRT et les délèguent à des Indexeurs. Cela permet aux Indexeurs d'augmenter leur participation dans les subgraphs du réseau. En retour, les Déléguateurs reçoivent une partie des récompenses d'indexation que les Indexeurs reçoivent pour le traitement des subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Taxe de délégation** : Une taxe de 0,5 % payée par les Déléguateurs lorsqu'ils délèguent des GRT aux Indexeurs. Les GRT utilisés pour payer la taxe sont brûlés. 
-- **Curator**(Curateur) : Participants au réseau qui identifient les subgraphs de haute qualité et signalent les GRT sur ces derniers en échange de parts de curation. Lorsque les Indexeurs réclament des frais de requête pour un subgraph, 10 % sont distribués aux Curateurs de ce subgraph. Il existe une corrélation positive entre la quantité de GRT signalée et le nombre d'Indexeurs indexant un subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Taxe de curation** : Une taxe de 1% payée par les Curateurs lorsqu'ils signalent des GRT sur des subgraphs. Les GRT utiliséa pour payer la taxe sont brûlés. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Consommateur de données** : Toute application ou utilisateur qui interroge un subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Développeur de subgraphs** : Un développeur qui construit et déploie un subgraph sur le réseau décentralisé de The Graph. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Manifeste du subgraph** : Un fichier YAML qui décrit le schéma GraphQL du subgraph, les sources de données et d'autres métadonnées. Vous trouverez [Ici](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) un exemple. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoque** : Unité de temps au sein du réseau. 
Actuellement, une époque correspond à 6 646 blocs, soit environ 1 jour. -- **Allocation** : Un Indexeur peut allouer l'ensemble de son staking de GRT (y compris le staking des Déléguateurs) à des subgraphs qui ont été publiés sur le réseau décentralisé de The Graph. Les allocations peuvent avoir différents statuts : +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Actif** : Une allocation est considérée comme active lorsqu'elle est créée onchain. C'est ce qu'on appelle ouvrir une allocation, et cela indique au réseau que l'Indexeur est en train d'indexer et de servir des requêtes pour un subgraph particulier. Les allocations actives accumulent des récompenses d'indexation proportionnelles au signal sur le subgraph et à la quantité de GRT allouée. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Fermé** : Un Indexeur peut réclamer les récompenses d'indexation accumulées sur un subgraph donné en soumettant une preuve d'indexation (POI) récente et valide. C'est ce qu'on appelle la fermeture d'une allocation. Une allocation doit avoir été ouverte pendant au moins une époque avant de pouvoir être fermée. La période d'allocation maximale est de 28 époques. Si un Indexeur laisse une allocation ouverte au-delà de 28 époques, il s'agit d'une allocation périmée. Lorsqu'une allocation est dans l'état **fermé**, un Fisherman peut encore ouvrir un litige pour contester un Indexeur pour avoir servi de fausses données. + 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio** : Une dapp puissante pour construire, déployer et publier des subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen** : Un rôle au sein de The Graph Network tenu par les participants qui surveillent l'exactitude et l'intégrité des données servies par les Indexeurs. Lorsqu'un Fisherman identifie une réponse à une requête ou un POI qu'il estime incorrect, il peut lancer un litige contre l'indexeur. Si le litige est tranché en faveur du Fisherman, l'indexeur perd 2,5 % de son staking. Sur ce montant, 50 % sont attribués au Fisherman à titre de récompense pour sa vigilance, et les 50 % restants sont retirés de la circulation (brûlés). 
Ce mécanisme est conçu pour encourager les pêcheurs à contribuer au maintien de la fiabilité du réseau en veillant à ce que les Indexeurs soient tenus responsables des données qu'ils fournissent. - **Arbitres** : Les arbitres sont des participants au réseau nommés dans le cadre d'un processus de gouvernance. Le rôle de l'arbitre est de décider de l'issue des litiges relatifs à l'indexation et aux requêtes. Leur objectif est de maximiser l'utilité et la fiabilité de The Graph. - **Slashing**(Taillade) : Les Indexeurs peuvent se voir retirer leur GRT pour avoir fourni un POI incorrect ou pour avoir diffusé des données inexactes. Le pourcentage de réduction est un paramètre protocolaire actuellement fixé à 2,5 % du staking personnel de l'Indexeur. 50 % des GRT réduit est versé au pêcheur qui a contesté les données inexactes ou le point d'intérêt incorrect. Les 50 % restants sont brûlés. -- **Récompenses d'indexation** : Les récompenses que les Indexeurs reçoivent pour l'indexation des subgraphs. Les récompenses d'indexation sont distribuées en GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Récompenses de délégation** : Les récompenses que les Déléguateurs reçoivent pour avoir délégué des GRT aux Indexeurs. Les récompenses de délégation sont distribuées en GRT. -- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. +- **GRT** : Le jeton d'utilité du travail de The Graph. Le GRT fournit des incitations économiques aux participants du réseau pour leur contribution au réseau. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. 
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. +- **The Graph Client** : Une bibliothèque pour construire des dapps basées sur GraphQL de manière décentralisée. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. -- **Graph CLI**: A command line interface tool for building and deploying to The Graph. +- **Graph CLI** : Un outil d'interface de ligne de commande pour construire et déployer sur The Graph. 
-- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. +- **Cooldown Period** : Le temps restant avant qu'un indexeur qui a modifié ses paramètres de délégation puisse le faire à nouveau. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/fr/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/fr/resources/migration-guides/assemblyscript-migration-guide.mdx index afd49ffd3fa8..a8e7baac39db 100644 --- a/website/src/pages/fr/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/fr/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: Guide de migration de l'AssemblyScript --- -Jusqu'à présent, les subgraphs utilisaient l'une des [premières versions d'AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). 
Nous avons enfin ajouté la prise en charge de la [dernière version disponible](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) ! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Cela permettra aux développeurs de subgraph d'utiliser les nouvelles fonctionnalités du langage AS et de la bibliothèque standard. +That will enable Subgraph developers to use newer features of the AS language and standard library. Ce guide s'applique à tous ceux qui utilisent `graph-cli`/`graph-ts` en dessous de la version `0.22.0`. Si vous êtes déjà à une version supérieure (ou égale) à celle-ci, vous avez déjà utilisé la version `0.19.10` d'AssemblyScript 🙂 -> Note : A partir de `0.24.0`, `graph-node` peut supporter les deux versions, en fonction de la `apiVersion` spécifiée dans le manifeste du subgraph. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Fonctionnalités @@ -44,7 +44,7 @@ Ce guide s'applique à tous ceux qui utilisent `graph-cli`/`graph-ts` en dessous ## Comment mettre à niveau ? -1. Changez vos mappages `apiVersion` dans `subgraph.yaml` en `0.0.6` : +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -Si vous ne savez pas lequel choisir, nous vous recommandons de toujours utiliser la version sécurisée. Si la valeur n'existe pas, vous souhaiterez peut-être simplement effectuer une instruction if précoce avec un retour dans votre gestionnaire de subgraph. 
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Ombrage variable @@ -132,7 +132,7 @@ Vous devrez renommer vos variables en double si vous conservez une observation d ### Comparaisons nulles -En effectuant la mise à niveau sur votre subgraph, vous pouvez parfois obtenir des erreurs comme celles-ci : +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -329,7 +329,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // ne donne pas d'erreurs de compilation comme il se doit ``` -Nous avons ouvert un problème sur le compilateur AssemblyScript pour cela, mais pour l'instant, si vous effectuez ce type d'opérations dans vos mappages de subgraph, vous devez les modifier pour effectuer une vérification nulle avant. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check beforehand. 
```typescript let wrapper = new Wrapper(y) @@ -351,7 +351,7 @@ value.x = 10 value.y = 'content' ``` -Il sera compilé mais s'arrêtera au moment de l'exécution, cela se produit parce que la valeur n'a pas été initialisée, alors assurez-vous que votre subgraph a initialisé ses valeurs, comme ceci : +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/fr/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/fr/resources/migration-guides/graphql-validations-migration-guide.mdx index 62e5435c0fc3..ec4bcfe54fee 100644 --- a/website/src/pages/fr/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/fr/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Guide de migration des validations GraphQL +title: GraphQL Validations Migration Guide --- Bientôt, `graph-node` supportera 100% de la couverture de la [Spécification des validations GraphQL] (https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ Pour être conforme à ces validations, veuillez suivre le guide de migration. Vous pouvez utiliser l'outil de migration CLI pour rechercher tous les problèmes dans vos opérations GraphQL et les résoudre. Vous pouvez également mettre à jour le point de terminaison de votre client GraphQL pour utiliser le point de terminaison « https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME ». Tester vos requêtes sur ce point de terminaison vous aidera à trouver les problèmes dans vos requêtes. -> Tous les subgraphs n'auront pas besoin d'être migrés si vous utilisez [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) ou [GraphQL Code Generator](https://the-guild.dev /graphql/codegen), ils garantissent déjà que vos requêtes sont valides.
+> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Outil CLI de migration diff --git a/website/src/pages/fr/resources/roles/curating.mdx b/website/src/pages/fr/resources/roles/curating.mdx index 909aa9f0e848..931afdc98101 100644 --- a/website/src/pages/fr/resources/roles/curating.mdx +++ b/website/src/pages/fr/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curation --- -Les Curateurs jouent un rôle essentiel dans l'économie décentralisée de The Graph. Ils utilisent leur connaissance de l'écosystème web3 pour évaluer et signaler les subgraphs qui devraient être indexés par The Graph Network. à travers Graph Explorer, les Curateurs consultent les données du réseau pour prendre des décisions de signalisation. En retour, The Graph Network récompense les Curateurs qui signalent des subgraphs de bonne qualité en leur reversant une partie des frais de recherche générés par ces subgraphs. La quantité de GRT signalée est l'une des principales considérations des Indexeurs lorsqu'ils déterminent les subgraphs à indexer. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good-quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index. ## Que signifie "le signalement" pour The Graph Network? -Avant que les consommateurs ne puissent interroger un subgraphs, celui-ci doit être indexé. C'est ici que la curation entre en jeu.
Afin que les Indexeurs puissent gagner des frais de requête substantiels sur des subgraphs de qualité, ils doivent savoir quels subgraphs indexer. Lorsque les Curateurs signalent un subgraphs , ils indiquent aux Indexeurs qu'un subgraphs est demandé et de qualité suffisante pour être indexé. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Les Curateurs rendent le réseau The Graph efficace et le [signalement](#how-to-signal) est le processus que les Curateurs utilisent pour informer les Indexeurs qu'un subgraph est bon à indexer. Les Indexeurs peuvent se fier au signal d’un Curateur car, en signalant, les Curateurs mintent une part de curation (curation share) pour le subgraph, leur donnant droit à une partie des futurs frais de requête générés par ce subgraph. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Les signaux des Curateurs sont représentés par des jetons ERC20 appelés Graph Curation Shares (GCS). Ceux qui veulent gagner plus de frais de requête doivent signaler leurs GRT aux subgraphs qui, selon eux, généreront un flux important de frais pour le réseau. Les Curateurs ne peuvent pas être réduits pour mauvais comportement, mais il y a une taxe de dépôt sur les Curateurs pour dissuader les mauvaises décisions pouvant nuire à l'intégrité du réseau. 
Les Curateurs gagneront également moins de frais de requête s'ils sélectionnent un subgraph de mauvaise qualité car il y aura moins de requêtes à traiter ou moins d'Indexeurs pour les traiter. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -L'[Indexer Sunrise Upgrade](/archived/sunrise/#what-is-the-upgrade-indexer) assure l'indexation de tous les subgraphs, toutefois, signaler des GRT sur un subgraph spécifique attirera davantage d'Indexeurs vers ce dernier. Cette incitation supplémentaire a pour but d'améliorer la qualité de service pour les requêtes en réduisant la latence et en améliorant la disponibilité du réseau. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; however, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -Lors du signalement, les Curateurs peuvent décider de signaler une version spécifique du subgraph ou de signaler en utilisant l'auto-migration. S'ils signalent en utilisant l'auto-migration, les parts d'un Curateur seront toujours mises à jour vers la dernière version publiée par le développeur. S'ils décident de signaler une version spécifique, les parts resteront toujours sur cette version spécifique.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Si vous avez besoin d’aide pour la curation afin d’améliorer la qualité de service, envoyez une demande à l’équipe Edge & Node à l’adresse support@thegraph.zendesk.com en précisant les subgraphs pour lesquels vous avez besoin d’assistance. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Les Indexeurs peuvent trouver des subgraphs à indexer en fonction des signaux de curation qu'ils voient dans Graph Explorer (capture d'écran ci-dessous). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Subgraphs de l'Explorer](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Comment signaler -Dans l'onglet Curateur de Graph Explorer, les curateurs pourront signaler et retirer leur signal sur certains subgraphs en fonction des statistiques du réseau. Pour un guide pas à pas expliquant comment procéder dans Graph Explorer, [cliquez ici.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Un curateur peut choisir de signaler une version spécifique d'un sugraph ou de faire migrer automatiquement son signal vers la version de production la plus récente de ce subgraph. Ces deux stratégies sont valables et comportent leurs propres avantages et inconvénients. 
+A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Le signalement sur une version spécifique est particulièrement utile lorsqu'un subgraph est utilisé par plusieurs dapps. Une dapp pourrait avoir besoin de mettre à jour régulièrement le subgraph avec de nouvelles fonctionnalités, tandis qu’une autre dapp pourrait préférer utiliser une version plus ancienne et bien testée du subgraph. Lors de la curation initiale, une taxe standard de 1 % est prélevée. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. La migration automatique de votre signal vers la version de production la plus récente peut s'avérer utile pour vous assurer que vous continuez à accumuler des frais de requête. Chaque fois que vous effectuez une curation, une taxe de curation de 1 % est appliquée. Vous paierez également une taxe de curation de 0,5 % à chaque migration. Les développeurs de subgraphs sont découragés de publier fréquemment de nouvelles versions - ils doivent payer une taxe de curation de 0,5 % sur toutes les parts de curation migrées automatiquement. -> **Remarque**: La première adresse à signaler un subgraph donné est considérée comme le premier curateur et devra effectuer un travail bien plus coûteux en gas que les curateurs suivants, car le premier curateur doit initialiser les tokens de part de curation et transférer les tokens dans le proxy de The Graph. 
+> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than subsequent curators, because the first curator initializes the curation share tokens and also transfers tokens into The Graph proxy. ## Retrait de vos GRT @@ -40,39 +40,39 @@ Les Curateurs ont la possibilité de retirer leur GRT signalé à tout moment. Contrairement au processus de délégation, si vous décidez de retirer vos GRT signalés, vous n'aurez pas un délai d'attente et vous recevrez le montant total (moins la taxe de curation de 1%). -Une fois qu'un Curateur retire ses signaux, les Indexeurs peuvent choisir de continuer à indexer le subgraph, même s'il n'y a actuellement aucun GRT signalé actif. +Once a curator withdraws their signal, Indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -Cependant, il est recommandé que les Curateurs laissent leur GRT signalé en place non seulement pour recevoir une partie des frais de requête, mais aussi pour assurer la fiabilité et la disponibilité du subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risques 1. Le marché des requêtes est intrinsèquement jeune chez The Graph et il y a un risque que votre %APY soit inférieur à vos attentes en raison de la dynamique naissante du marché. -2. Frais de curation - lorsqu'un Curateur signale des GRT sur un subgraph, il doit s'acquitter d'une taxe de curation de 1%. Cette taxe est brûlée. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails.
As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Un subgraph peut échouer à cause d'un bug. Un subgraph qui échoue n'accumule pas de frais de requête. Par conséquent, vous devrez attendre que le développeur corrige le bogue et déploie une nouvelle version. - - Si vous êtes abonné à la version la plus récente d'un subgraph, vos parts migreront automatiquement vers cette nouvelle version. Cela entraînera une taxe de curation de 0,5 %. - - Si vous avez signalé sur une version spécifique d'un subgraph et qu'elle échoue, vous devrez brûler manuellement vos parts de curation. Vous pouvez alors signaler sur la nouvelle version du subgraph, encourant ainsi une taxe de curation de 1%. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. 
You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## FAQs sur la Curation ### 1. Quel pourcentage des frais de requête les Curateurs perçoivent-ils? -En signalant sur un subgraph, vous gagnerez une part de tous les frais de requête générés par le subgraph. 10% de tous les frais de requête vont aux Curateurs au prorata de leurs parts de curation. Ces 10% sont soumis à la gouvernance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Comment décider quels sont les subgraphs de haute qualité sur lesquels on peut émettre un signal ? +### 2. How do I decide which Subgraphs are high quality to signal on? -Identifier des subgraphs de haute qualité est une tâche complexe, mais il existe de multiples approches.. En tant que Curateur, vous souhaitez trouver des subgraphs fiables qui génèrent un volume de requêtes élevé. Un subgraph fiable peut être précieux s’il est complet, précis et s’il répond aux besoins en données d’une dapp. Un subgraph mal conçu pourrait avoir besoin d'être révisé ou republié, et peut aussi finir par échouer. Il est crucial pour les Curateurs d'examiner l'architecture ou le code d'un subgraph afin d'évaluer sa valeur. Ainsi : +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. 
As a result: -- Les Curateurs peuvent utiliser leur compréhension d'un réseau pour essayer de prédire comment un subgraph individuel peut générer un volume de requêtes plus élevé ou plus faible à l'avenir -- Les Curateurs doivent également comprendre les métriques disponibles via Graph Explorer. Des métriques telles que le volume de requêtes passées et l'identité du développeur du subgraph peuvent aider à déterminer si un subgraph mérite ou non d'être signalé. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Quel est le coût de la mise à jour d'un subgraph ? +### 3. What’s the cost of updating a Subgraph? -La migration de vos parts de curation (curation shares) vers une nouvelle version de subgraph entraîne une taxe de curation de 1 %. Les Curateurs peuvent choisir de s'abonner à la dernière version d'un subgraph. Lorsque les parts de Curateurs sont automatiquement migrées vers une nouvelle version, les Curateurs paieront également une demi-taxe de curation, soit 0,5 %, car la mise à niveau (upgrade) des subgraphs est une action onchain qui coûte du gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. À quelle fréquence puis-je mettre à jour mon subgraph ? +### 4. How often can I update my Subgraph? -Il est conseillé de ne pas mettre à jour vos subgraphs trop fréquemment. Voir la question ci-dessus pour plus de détails. 
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Puis-je vendre mes parts de curateurs ? diff --git a/website/src/pages/fr/resources/roles/delegating/delegating.mdx b/website/src/pages/fr/resources/roles/delegating/delegating.mdx index 83dcd5dfc17c..5425b865ba2e 100644 --- a/website/src/pages/fr/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/fr/resources/roles/delegating/delegating.mdx @@ -2,54 +2,54 @@ title: Délégation --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +Pour commencer à déléguer tout de suite, consultez [déléguer sur The Graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). ## Aperçu -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +Les Déléguateurs gagnent des GRT en déléguant des GRT aux indexeurs, ce qui contribue à la sécurité et à la fonctionnalité du réseau. ## Avantages de la délégation -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers. +- Renforcer la sécurité et l'évolutivité du réseau en soutenant les Indexeurs. +- Gagner une partie des récompenses générées par les Indexeurs. ## Comment fonctionne la délégation ? -Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. +Les Déléguateurs reçoivent des récompenses GRT de la part de l'Indexeur ou des Indexeurs auxquels ils choisissent de déléguer leurs GRT. -An Indexer's ability to process queries and earn rewards depends on three key factors: +La capacité d'un Indexeur à traiter les requêtes et à obtenir des récompenses dépend de trois facteurs clés : -1. The Indexer's Self-Stake (GRT staked by the Indexer). -2. The total GRT delegated to them by Delegators. -3. The price the Indexer sets for queries. +1. 
L'Indexer's Self-Stake (GRT stackés par l'Indexeur). +2. Le total des GRT qui leur ont été déléguées par les Déléguateurs. +3. Le prix que l'Indexeur fixe pour les requêtes. -The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer. +Plus le nombre de GRT staké et délégués à un Indexeur est important, plus le nombre de requêtes qu'il peut traiter est élevé, ce qui se traduit par des récompenses potentielles plus importantes tant pour le Déléguateur que pour l'Indexeur. -### What is Delegation Capacity? +### Qu'est-ce que la capacité de délégation ? -Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. +La capacité de délégation fait référence au montant maximum de GRT qu'un Indexeur peut accepter de la part des Déléguateurs, , en fonction de la mise personnelle de l’Indexeur(Self-Stake). -The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. +The Graph Network comprend un ratio de délégation de 16, ce qui signifie qu'un Indexeur peut accepter jusqu'à 16 fois son Self-Stake en GRT délégués. -For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +Par exemple, si un indexeur a un Self-Stake de 1 million de GRT, sa capacité de délégation est de 16 millions de GRT. -### Why Does Delegation Capacity Matter? +### Pourquoi la capacité de délégation est-elle importante ? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +Si un Indexeur dépasse sa capacité de délégation, les récompenses pour tous les Déléguateurs sont diluées parce que l'excédent de GRT délégué ne peut pas être utilisé efficacement dans le protocole. 
-This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. +Il est donc essentiel que les Déléguateurs évaluent la capacité de délégation actuelle d'un Indexeur avant de le sélectionner. -Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +Les Indexeurs peuvent augmenter leur capacité de délégation en augmentant leur Self-Stake, ce qui a pour effet d'augmenter la limite des jetons délégués. -## Delegation on The Graph +## Délégation sur The Graph -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> Veuillez noter que ce guide ne couvre pas les étapes telles que la configuration de MetaMask. La communauté Ethereum propose une [ressource complète sur les portefeuilles](https://ethereum.org/en/wallets/). -There are two sections in this guide: +Ce guide comporte deux sections : - Les risques de la délégation de jetons dans The Graph Network - Comment calculer les rendements escomptés en tant que délégué @@ -70,17 +70,17 @@ En tant que Délégateur, il est important de comprendre ce qui suit : ### La période de retrait de délégation -When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. +Lorsqu'un Délégateur choisit de retirer sa délégation, ses jetons sont soumis à une période de retrait de 28 jours. -This means they cannot transfer their tokens or earn any rewards for 28 days. +Cela signifie qu'ils ne peuvent pas transférer leurs jetons ou gagner des récompenses pendant 28 jours. -After the undelegation period, GRT will return to your crypto wallet. +Après la période de retrait de délégation, les GRT retourneront dans votre portefeuille crypto. ### Pourquoi ceci est-il important ?
-If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. +Si vous choisissez un indexeur qui n'est pas digne de confiance ou qui ne fait pas du bon travail, vous voudrez retirer la délégation. Cela signifie que vous perdrez des occasions de gagner des récompenses. -As a result, it’s recommended that you choose an Indexer wisely. +Il est donc recommandé de bien choisir son Indexeur. ![Delegation unbonding. Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) @@ -96,25 +96,25 @@ Pour comprendre comment choisir un Indexeur fiable, vous devez comprendre les pa - **Query Fee Cut** - C’est la même chose que l’Indexing Reward Cut, mais cela s’applique aux revenus des frais de requête que l’Indexeur perçoit. -- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. +- Il est fortement recommandé d'explorer [Le Discord de The Graph](https://discord.gg/graphprotocol) pour déterminer quels indexeurs ont les meilleures réputations sociales et techniques. -- Many Indexers are active in Discord and will be happy to answer your questions. +- De nombreux Indexeurs sont actifs sur Discord et seront heureux de répondre à vos questions. ## Calcul du rendement attendu par les Délégateurs -> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +> Calculez le retour sur investissement de votre délégation [ici](https://thegraph.com/explorer/delegate?chain=arbitrum-one). -A Delegator must consider a variety of factors to determine a return: +Le Délégateur doit tenir compte de plusieurs facteurs pour déterminer un retour : -An Indexer's ability to use the delegated GRT available to them impacts their rewards. 
+La capacité d'un Indexeur à utiliser les GRT délégués dont il dispose a un impact sur ses récompenses. -If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. +Si un Indexeur n'alloue pas tous les GRT à sa disposition, il risque de ne pas maximiser ses gains potentiels et ceux de ses Déléguateurs. -Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window. However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. +Les Indexeurs peuvent clôturer une allocation et collecter les récompenses à tout moment dans la fenêtre de 1 à 28 jours. Toutefois, si les récompenses ne sont pas perçues rapidement, le montant total des récompenses peut sembler inférieur, même si un pourcentage des récompenses n'est pas réclamé. ### Considérant la réduction des frais d'interrogation et la réduction des frais d'indexation -You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. +Vous devriez choisir un Indexeur qui est transparent quant à la fixation de ses frais de requête et de ses réductions de frais d'indexation. La formule est :
Vous pouvez créer plusieurs clés API à utiliser dans différents projet Après avoir créé une clé API, dans la section Sécurité, vous pouvez définir les domaines qui peuvent interroger une clé API spécifique. -## Puis-je transférer mon subgraph à un autre propriétaire ? +## 5. Can I transfer my Subgraph to another owner? -Oui, les subgraphs qui ont été publiés sur Arbitrum One peuvent être transférés vers un nouveau portefeuille ou un Multisig. Vous pouvez le faire en cliquant sur les trois points à côté du bouton 'Publish' sur la page des détails du subgraph et en sélectionnant 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Notez que vous ne pourrez plus voir ou modifier le subgraph dans Studio une fois qu'il aura été transféré. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## Comment trouver les URL de requête pour les sugraphs si je ne suis pas le développeur du subgraph que je veux utiliser ? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -Vous pouvez trouver l'URL de requête de chaque subgraph dans la section Détails du subgraph de Graph Explorer. Lorsque vous cliquez sur le bouton “Requête”, vous serez redirigé vers un volet dans lequel vous pourrez afficher l'URL de requête du subgraph qui vous intéresse. Vous pouvez ensuite remplacer le placeholder `` par la clé API que vous souhaitez exploiter dans Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. 
You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -N'oubliez pas que vous pouvez créer une clé API et interroger n'importe quel subgraph publié sur le réseau, même si vous créez vous-même un subgraph. Ces requêtes via la nouvelle clé API, sont des requêtes payantes comme n'importe quelle autre sur le réseau. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries made via the new API key are paid queries, just like any other on the network. diff --git a/website/src/pages/fr/resources/tokenomics.mdx b/website/src/pages/fr/resources/tokenomics.mdx index 27bbbee1af4d..7568b69ebd35 100644 --- a/website/src/pages/fr/resources/tokenomics.mdx +++ b/website/src/pages/fr/resources/tokenomics.mdx @@ -1,103 +1,103 @@ --- title: Les tokenomiques du réseau The Graph sidebarTitle: Tokenomics -description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. +description: The Graph Network est encouragé par une puissante tokénomique. Voici comment fonctionne GRT, le jeton d'utilité de travail natif de The Graph. --- ## Aperçu -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
 ## Spécificités⁠

-The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network.
+Le modèle de The Graph s'apparente à un modèle B2B2C, mais il est piloté par un réseau décentralisé où les participants collaborent pour fournir des données aux utilisateurs finaux en échange de récompenses GRT. GRT est le jeton d'utilité de The Graph. Il coordonne et encourage l'interaction entre les fournisseurs de données et les consommateurs au sein du réseau.

-The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/subgraphs/billing/).
+The Graph joue un rôle essentiel en rendant les données de la blockchain plus accessibles et en soutenant une marketplace pour leur échange. Pour en savoir plus sur le modèle de facturation de The Graph, consultez ses [plans gratuits et de croissance](/subgraphs/billing/).

-- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7)
+- Adresse du jeton GRT sur le réseau principal : [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7)

-- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7)
+- Adresse du jeton GRT sur Arbitrum One : [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7)

 ## Les rôles des participants au réseau

-There are four primary network participants:
+Les participants au réseau sont au nombre de quatre :

-1. Delegators - Delegate GRT to Indexers & secure the network
+1. Délégateurs - Délèguent des GRT aux Indexeurs & sécurisent le réseau

-2. Curateurs - Trouver les meilleurs subgraphs pour les indexeurs
+2. Curators - Find the best Subgraphs for Indexers

-3. Developers - Build & query subgraphs
+3. Developers - Build & query Subgraphs

 4. Indexeurs - épine dorsale des données de la blockchain

-Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/).
+Les Fishermen et les arbitres font également partie intégrante du succès du réseau grâce à d'autres contributions, soutenant le travail des autres participants principaux. Pour plus d'informations sur les rôles du réseau, [lire cet article](https://thegraph.com/blog/the-graph-grt-token-economics/).

-![Tokenomics diagram](/img/updated-tokenomics-image.png)
+![Diagramme de la tokenomic](/img/updated-tokenomics-image.png)

-## Delegators (Passively earn GRT)
+## Délégateurs (gagnent passivement des GRT)

-Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.
-For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually.
+Par exemple, si un Délégateur délègue 15 000 GRT à un Indexeur offrant 10 %, le Délégateur recevra environ 1 500 GRT de récompenses par an.

-There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days.
+Une taxe de délégation de 0,5 % est prélevée chaque fois qu'un Délégateur délègue des GRT sur le réseau. Si un Délégateur choisit de retirer les GRT qu'il a délégués, il doit attendre la période de déverrouillage de 28 époques. Chaque époque compte 6 646 blocs, ce qui signifie que 28 époques représentent environ 26 jours.

-If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice.
+Si vous lisez ceci, vous pouvez devenir Délégateur dès maintenant en vous rendant sur la [page des participants au réseau](https://thegraph.com/explorer/participants/indexers), et en déléguant des GRT à un Indexeur de votre choix.

 ## Curateurs (Gagnez des GRT)

-Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed.
+Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed.

-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
+Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.

-Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT.
+Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT.

-## Developers
+## Développeurs

-Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
+Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.

-### Création d'un subgraph
+### Creating a Subgraph

-Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.

-Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.
+Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.

-### Interroger un subgraph existant
+### Querying an existing Subgraph

-Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph.
+Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph.

-Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol.
+Les subgraphs sont [interrogés à l'aide de GraphQL](/subgraphs/querying/introduction/), et les frais d'interrogation sont payés avec des GRT dans [Subgraph Studio](https://thegraph.com/studio/). Les frais d'interrogation sont distribués aux participants au réseau en fonction de leur contribution au protocole.

-1% of the query fees paid to the network are burned.
+1% des frais de requête payés au réseau sont brûlés.

-## Indexers (Earn GRT)
+## Indexeurs (gagner des GRT)

-Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs.
+Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs.

 Les Indexeurs peuvent gagner des récompenses en GRT de deux façons :

-1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
+1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).

-2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.
+2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.

-Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph.
+Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph.

-In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve.
+Pour faire fonctionner un nœud d'indexation, les Indexeurs doivent staker 100 000 GRT ou plus avec le réseau. Les Indexeurs sont incités à s'approprier des GRT proportionnellement au nombre de requêtes qu'ils traitent.
-Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.
+Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.

-The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors.
+Le montant des récompenses reçues par un Indexeur peut varier en fonction du self-stake de l'indexeur, de la délégation acceptée, de la qualité du service et de nombreux autres facteurs.

 ## Token Supply : Incinération & Emission

-The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.

-The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data.
+The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data.

-![Total burned GRT](/img/total-burned-grt.jpeg)
+![Total de GRT brûlés](/img/total-burned-grt.jpeg)

-In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability.
+En plus de ces activités d'incinération régulières, le jeton GRT dispose également d'un mécanisme de réduction (slashing) pour pénaliser les comportements malveillants ou irresponsables des Indexeurs. Lorsqu'un Indexeur est sanctionné, 50 % de ses récompenses d'indexation pour l'époque sont brûlées (l'autre moitié est versée au fisherman), et sa participation personnelle est réduite de 2,5 %, la moitié de ce montant étant brûlée. Les Indexeurs sont ainsi fortement incités à agir dans l'intérêt du réseau et à contribuer à sa sécurité et à sa stabilité.
 ## Amélioration du protocole

-The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/).
+The Graph Network est en constante évolution et des améliorations sont constamment apportées à la conception économique du protocole afin d'offrir la meilleure expérience possible à tous les participants au réseau. The Graph Council supervise les modifications du protocole et les membres de la communauté sont encouragés à y participer. Participez aux améliorations du protocole sur [le Forum The Graph](https://forum.thegraph.com/).
diff --git a/website/src/pages/fr/sps/introduction.mdx b/website/src/pages/fr/sps/introduction.mdx
index 64f5b60d32fe..0454b6f4acee 100644
--- a/website/src/pages/fr/sps/introduction.mdx
+++ b/website/src/pages/fr/sps/introduction.mdx
@@ -3,28 +3,29 @@ title: Introduction aux Subgraphs alimentés par Substreams
 sidebarTitle: Présentation
 ---

-Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Améliorez l'efficacité et l'évolutivité de votre subgraph en utilisant [Substreams](/substreams/introduction/) pour streamer des données blockchain pré-indexées.

 ## Aperçu

-Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+Utilisez un package Substreams (`.spkg`) comme source de données pour donner à votre Subgraph l'accès à un flux de données blockchain pré-indexées. Cela permet un traitement des données plus efficace et évolutif, en particulier avec des réseaux de blockchain complexes ou de grande taille.

 ### Spécificités⁠

 Il existe deux méthodes pour activer cette technologie :

-1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph.
+1. **Utilisation des [déclencheurs](/sps/triggers/) de Substreams** : Consommez à partir de n'importe quel module Substreams en important le modèle Protobuf par le biais d'un gestionnaire de subgraph et déplacez toute votre logique dans un subgraph. Cette méthode crée les entités du subgraph directement dans le subgraph.

-2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities.
+2. **En utilisant [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)** : En écrivant une plus grande partie de la logique dans Substreams, vous pouvez consommer la sortie du module directement dans [graph-node](/indexing/tooling/graph-node/). Dans graph-node, vous pouvez utiliser les données de Substreams pour créer vos entités Subgraph.

-You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
+Vous pouvez choisir où placer votre logique, soit dans le subgraph, soit dans Substreams. Cependant, réfléchissez à ce qui correspond à vos besoins en matière de données, car Substreams a un modèle parallélisé et les déclencheurs sont consommés de manière linéaire dans graph node.

 ### Ressources supplémentaires

-Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly:
+Consultez les liens suivants pour obtenir des tutoriels sur l'utilisation de l'outil de génération de code afin de créer rapidement votre premier projet Substreams de bout en bout :

 - [Solana](/substreams/developing/solana/transactions/)
 - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm)
 - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
 - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
 - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/fr/sps/sps-faq.mdx b/website/src/pages/fr/sps/sps-faq.mdx
index 0924ecb989ca..9519360ba265 100644
--- a/website/src/pages/fr/sps/sps-faq.mdx
+++ b/website/src/pages/fr/sps/sps-faq.mdx
@@ -5,27 +5,27 @@ sidebarTitle: FAQ

 ## Que sont les sous-flux ?

-Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications.
+Substreams est un moteur de traitement exceptionnellement puissant capable de consommer de riches flux de données blockchain. Il vous permet d'affiner et de façonner les données de la blockchain pour une digestion rapide et transparente par les applications des utilisateurs finaux.

 Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.

-Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
+Substreams est développé par [StreamingFast](https://www.streamingfast.io/). Visitez la [Documentation Substreams](/substreams/introduction/) pour en savoir plus sur Substreams.

-## Qu'est-ce qu'un subgraph alimenté par des courants de fond ?
+## Qu'est-ce qu'un subgraph alimenté par Substreams ?

-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities.
+Les [subgraphs alimentés par Substreams](/sps/introduction/) combinent la puissance de Substreams avec la capacité d'interrogation des subgraphs. Lors de la publication d'un subgraph alimenté par Substreams, les données produites par les transformations Substreams peuvent [produire des changements d'entité](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) compatibles avec les entités du subgraph.

-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API.
+Si vous êtes déjà familiarisé avec le développement de subgraphs, notez que les subgraphs alimentés par Substreams peuvent être interrogés comme s'ils avaient été produits par la couche de transformation AssemblyScript. Cela permet de bénéficier de tous les avantages des subgraphs, y compris d'une API GraphQL dynamique et flexible.

-## En quoi les subgraphs alimentés par les courants secondaires sont-ils différents des subgraphs ?
+## En quoi les Subgraphs alimentés par Substreams se distinguent-ils des Subgraphs ?

 Les subgraphs sont constitués de sources de données qui spécifient des événements onchain et comment ces événements doivent être transformés via des gestionnaires écrits en Assemblyscript. Ces événements sont traités de manière séquentielle, en fonction de l'ordre dans lequel ils se produisent onchain.

-En revanche, les subgraphs alimentés par des substreams ont une seule source de données qui référence un package de substreams, qui est traité par Graph Node. Les substreams ont accès à des données onchain supplémentaires granulaires par rapport aux subgraphs conventionnels et peuvent également bénéficier d'un traitement massivement parallélisé, ce qui peut signifier des temps de traitement beaucoup plus rapides.
+En revanche, les subgraphs alimentés par Substreams ont une source de données unique qui fait référence à un package substream, qui est traité par le Graph Node. Les Substreams ont accès à des données granulaires supplémentaires onchain par rapport aux subgraphs conventionnels et peuvent également bénéficier d'un traitement massivement parallélisé, ce qui peut se traduire par des temps de traitement beaucoup plus rapides.

-## Quels sont les avantages de l'utilisation de subgraphs alimentés par des courants descendants ?
+## Quels sont les avantages de l'utilisation des subgraphs alimentés par Substreams ?

-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Les subgraphs alimentés par Substreams combinent tous les avantages de Substreams avec la capacité d'interrogation des subgraphs. Ils apportent à The Graph une plus grande composabilité et une indexation très performante. Ils permettent également de nouveaux cas d'utilisation des données ; par exemple, une fois que vous avez construit votre subgraph alimenté par Substreams, vous pouvez réutiliser vos [modules Substreams](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) pour sortir vers différents [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) tels que PostgreSQL, MongoDB et Kafka.

 ## Quels sont les avantages de Substreams ?

@@ -35,7 +35,7 @@ L'utilisation de Substreams présente de nombreux avantages, notamment:

 - Indexation haute performance : Indexation plus rapide d'un ordre de grandeur grâce à des grappes d'opérations parallèles à grande échelle (comme BigQuery).

-- Sortez vos données n'importe où : Transférez vos données où vous le souhaitez : PostgreSQL, MongoDB, Kafka, subgraphs, fichiers plats, Google Sheets.
+- "Sinkez" n'importe où : "Sinkez" vos données où vous le souhaitez : PostgreSQL, MongoDB, Kafka, Subgraphs, fichiers plats, Google Sheets.

 - Programmable : Utilisez du code pour personnaliser l'extraction, effectuer des agrégations au moment de la transformation et modéliser vos résultats pour plusieurs puits.
@@ -63,19 +63,19 @@ L'utilisation de Firehose présente de nombreux avantages, notamment:

 - Exploite les fichiers plats : Les données de la blockchain sont extraites dans des fichiers plats, la ressource informatique la moins chère et la plus optimisée disponible.

-## Où les développeurs peuvent-ils trouver plus d'informations sur les subgraphs alimentés par Substreams et sur Substreams ?
+## Où les développeurs peuvent-ils trouver plus d'informations sur les Substreams et les Subgraphs alimentés par Substreams ?

 La [documentation Substreams ](/substreams/introduction/) vous explique comment construire des modules Substreams.

-The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+La [documentation sur les subgraphs alimentés par Substreams](/sps/introduction/) vous montrera comment les packager pour les déployer sur The Graph.

 Le [dernier outil Substreams Codegen ](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) vous permettra de lancer un projet Substreams sans aucun code.

 ## Quel est le rôle des modules Rust dans Substreams ?

-Les modules Rust sont l'équivalent des mappeurs AssemblyScript dans les subgraphs. Ils sont compilés dans WASM de la même manière, mais le modèle de programmation permet une exécution parallèle. Ils définissent le type de transformations et d'agrégations que vous souhaitez appliquer aux données brutes de la blockchain.
+Les modules Rust sont l'équivalent des mappeurs AssemblyScript dans Subgraphs. Ils sont compilés dans WASM de la même manière, mais le modèle de programmation permet une exécution parallèle. Ils définissent le type de transformations et d'agrégations que vous souhaitez appliquer aux données brutes de la blockchain.

-See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+Consultez la [documentation des modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) pour plus de détails.

 ## Qu'est-ce qui rend Substreams composable ?

@@ -85,12 +85,12 @@ Par exemple, Alice peut créer un module de prix DEX, Bob peut l'utiliser pour c

 ## Comment pouvez-vous créer et déployer un Subgraph basé sur Substreams ?

-After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
+Après avoir [défini](/sps/introduction/) un subgraph basé sur Substreams, vous pouvez utiliser Graph CLI pour le déployer dans [Subgraph Studio](https://thegraph.com/studio/).

-## Où puis-je trouver des exemples de subgraphs et de subgraphs alimentés par des substreams ?
+## Où puis-je trouver des exemples de Substreams et de Subgraphs alimentés par Substreams ?

-Vous pouvez visiter [ce repo Github] (https://github.com/pinax-network/awesome-substreams) pour trouver des exemples de Substreams et de subgraphs alimentés par Substreams.
+Vous pouvez consulter [ce repo Github](https://github.com/pinax-network/awesome-substreams) pour trouver des exemples de Substreams et de subgraphs alimentés par Substreams.

-## Que signifient les subgraphs et les subgraphs alimentés par des substreams pour le réseau graph ?
+## Que signifient les Substreams et les subgraphs alimentés par Substreams pour The Graph Network ?

 L'intégration promet de nombreux avantages, notamment une indexation extrêmement performante et une plus grande composabilité grâce à l'exploitation des modules de la communauté et à leur développement.
diff --git a/website/src/pages/fr/sps/triggers.mdx b/website/src/pages/fr/sps/triggers.mdx
index 3dea45dc752b..ecd1253f24c7 100644
--- a/website/src/pages/fr/sps/triggers.mdx
+++ b/website/src/pages/fr/sps/triggers.mdx
@@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL.
## Aperçu -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Les déclencheurs personnalisés vous permettent d'envoyer des données directement dans votre fichier de mappage de subgraph et dans vos entités, qui sont similaires aux tables et aux champs. Cela vous permet d'utiliser pleinement la couche GraphQL. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +En important les définitions Protobuf émises par votre module Substreams, vous pouvez recevoir et traiter ces données dans le gestionnaire de votre subgraph. Cela garantit une gestion efficace et rationalisée des données dans le cadre du Subgraph. -### Defining `handleTransactions` +### Définition de `handleTransactions` -Le code suivant montre comment définir une fonction `handleTransactions` dans un gestionnaire de subgraph. Cette fonction reçoit des données brutes (bytes) Substreams en paramètre et les décode en un objet Transactions. Pour chaque transaction, une nouvelle entité de subgraph est créée. +Le code suivant montre comment définir une fonction `handleTransactions` dans un gestionnaire de Subgraph. Cette fonction reçoit comme paramètre de Substreams des Bytes bruts et les décode en un objet `Transactions`. Pour chaque transaction, une nouvelle entité Subgraph est créée. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -34,14 +34,14 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +Voici ce que vous voyez dans le fichier `mappings.ts` : 1. Les bytes contenant les données Substreams sont décodés en un objet `Transactions` généré, qui est utilisé comme n’importe quel autre objet AssemblyScript 2. 
Boucle sur les transactions -3. Création d’une nouvelle entité de subgraph pour chaque transaction +3. Créer une nouvelle entité de subgraph pour chaque transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +Pour découvrir un exemple détaillé de subgraph à déclencheurs, [consultez le tutoriel](/sps/tutorial/). ### Ressources supplémentaires -To scaffold your first project in the Development Container, check out one of the [How-To Guide](/substreams/developing/dev-container/). +Pour élaborer votre premier projet dans le conteneur de développement, consultez l'un des [guides pratiques](/substreams/developing/dev-container/). diff --git a/website/src/pages/fr/sps/tutorial.mdx b/website/src/pages/fr/sps/tutorial.mdx index a923cca0d94e..d4876d6000bd 100644 --- a/website/src/pages/fr/sps/tutorial.mdx +++ b/website/src/pages/fr/sps/tutorial.mdx @@ -1,15 +1,15 @@ --- title: 'Tutoriel : Configurer un Subgraph alimenté par Substreams sur Solana' -sidebarTitle: Tutorial +sidebarTitle: Tutoriel --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Mise en place réussie d'un subgraph alimenté par Substreams basé sur des déclencheurs pour un jeton Solana SPL. 
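À titre d'illustration des trois étapes décrites ci-dessus (décoder les bytes, boucler sur les transactions, créer une entité par transaction), voici un croquis minimal en TypeScript simple. Il s'agit d'une hypothèse d'illustration : dans un vrai subgraph, on utiliserait `Protobuf.decode` d'`as-proto` et les entités graph-ts ; ici, un décodage JSON et une `Map` en mémoire servent de substituts.

```typescript
// Croquis hypothétique : reproduit la forme d'un gestionnaire de déclencheurs Substreams.
// decodeTransactions et entityStore sont des substituts de Protobuf.decode et du store graph-ts.
interface Transaction {
  hash: string
  from: string
  to: string
}
interface Transactions {
  transactions: Transaction[]
}

export const entityStore = new Map<string, Transaction>()

// Substitut de Protobuf.decode<Transactions>(bytes, Transactions.decode)
function decodeTransactions(bytes: Uint8Array): Transactions {
  return JSON.parse(new TextDecoder().decode(bytes)) as Transactions
}

export function handleTransactions(bytes: Uint8Array): void {
  const input = decodeTransactions(bytes) // 1. décoder les bytes bruts
  for (const tx of input.transactions) {
    // 2. boucler sur les transactions
    entityStore.set(tx.hash, tx) // 3. créer une entité par transaction, indexée par hash
  }
}
```

Le contrat reste le même que dans le gestionnaire réel : le handler reçoit des bytes opaques, et toute la structure vient des définitions Protobuf générées.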
## Commencer -For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) +Pour un tutoriel vidéo, consultez [Comment indexer Solana avec un subgraph alimenté par des Substreams](/sps/tutorial/#video-tutorial) -### Prerequisites +### Prérequis Avant de commencer, assurez-vous de : @@ -54,7 +54,7 @@ params: # Modifiez les champs param pour répondre à vos besoins ### Étape 2 : Générer le Manifeste du Subgraph -Une fois le projet initialisé, générez un manifeste de subgraph en exécutant la commande suivante dans le Dev Container: +Une fois le projet initialisé, générez un manifeste de subgraph en exécutant la commande suivante dans le Dev Container : ```bash substreams codegen subgraph @@ -70,10 +70,10 @@ dataSources: network: solana-mainnet-beta source: package: - moduleName: map_spl_transfers # Module défini dans le substreams.yaml + moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,9 +81,9 @@ dataSources: ### Étape 3 : Définir les Entités dans `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Définissez les champs que vous souhaitez enregistrer dans vos entités Subgraph en mettant à jour le fichier `schema.graphql`. -Here is an example: +Voici un exemple : ```graphql type MyTransfer @entity { @@ -99,9 +99,9 @@ Ce schéma définit une entité `MyTransfer` avec des champs tels que `id`, `amo ### Étape 4 : Gérer les Données Substreams dans `mappings.ts` -With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. +Avec les objets Protobuf générés, vous pouvez désormais gérer les données de Substreams décodées dans votre fichier `mappings.ts` trouvé dans le répertoire `./src`. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +L'exemple ci-dessous montre comment extraire vers les entités du subgraph les transferts non dérivés associés à l'Id du compte Orca : ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,13 +140,13 @@ Pour générer les objets Protobuf en AssemblyScript, exécutez la commande suiv npm run protogen ``` -Cette commande convertit les définitions Protobuf en AssemblyScript, vous permettant de les utiliser dans le gestionnaire de votre subgraph. +Cette commande convertit les définitions Protobuf en AssemblyScript, ce qui permet de les utiliser dans le gestionnaire du subgraph. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Félicitations ! Vous avez configuré avec succès un subgraph alimenté par Substreams basé sur des déclencheurs pour un jeton Solana SPL. Vous pouvez passer à l'étape suivante en personnalisant votre schéma, vos mappages et vos modules pour les adapter à votre cas d'utilisation spécifique. 
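L'extraction décrite plus haut (ne garder que les transferts non dérivés associés à un compte donné) peut se schématiser ainsi en TypeScript simple. La forme `Transfer` et la constante `TARGET_ACCOUNT` sont des hypothèses d'illustration, pas l'API graph-ts réelle.

```typescript
// Croquis hypothétique : filtre les transferts non dérivés liés à un compte cible,
// comme le fait le gestionnaire mappings.ts du tutoriel pour le compte Orca.
interface Transfer {
  from: string
  to: string
  amount: string
  derived: boolean // transfert dérivé (à ignorer) ou non
}

export const TARGET_ACCOUNT = 'OrcaAccountIdPlaceholder' // valeur fictive d'illustration

export function filterTransfersForAccount(transfers: Transfer[], account: string = TARGET_ACCOUNT): Transfer[] {
  // Ne conserver que les transferts non dérivés dont la source ou la destination est le compte ciblé
  return transfers.filter((t) => !t.derived && (t.from === account || t.to === account))
}
```

Chaque transfert retenu donnerait ensuite lieu à la création d'une entité, comme dans l'exemple du tutoriel.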
-### Video Tutorial +### Tutoriel Vidéo diff --git a/website/src/pages/fr/subgraphs/_meta-titles.json b/website/src/pages/fr/subgraphs/_meta-titles.json index 3fd405eed29a..e10948c648a1 100644 --- a/website/src/pages/fr/subgraphs/_meta-titles.json +++ b/website/src/pages/fr/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "guides": "How-to Guides", - "best-practices": "Best Practices" + "guides": "Guides pratiques", + "best-practices": "Les meilleures pratiques" } diff --git a/website/src/pages/fr/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/fr/subgraphs/best-practices/avoid-eth-calls.mdx index 2015af316873..33594aca38e1 100644 --- a/website/src/pages/fr/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Meilleure Pratique Subgraph 4 - Améliorer la Vitesse d'Indexation en Évitant les eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Éviter les eth_calls --- ## TLDR -Les `eth_calls` sont des appels qui peuvent être faits depuis un subgraph vers un nœud Ethereum. Ces appels prennent un temps considérable pour renvoyer des données, ralentissant ainsi l'indexation. Si possible, concevez des smart contracts pour émettre toutes les données dont vous avez besoin afin de ne pas avoir à utiliser des `eth_calls`. +Les `eth_calls` sont des appels qui peuvent être effectués depuis un Subgraph vers un nœud Ethereum. Ces appels prennent beaucoup de temps pour renvoyer les données, ce qui ralentit l'indexation. Si possible, concevez des contrats intelligents pour émettre toutes les données dont vous avez besoin afin de ne pas avoir à utiliser les `eth_calls`. ## Pourquoi Éviter les `eth_calls` est une Bonne Pratique -Les subgraphs sont optimisés pour indexer les données des événements émis par les smart contracts. 
Un subgraph peut également indexer les données provenant d'un `eth_call`, cependant, cela peut considérablement ralentir l'indexation du subgraph car les `eth_call` nécessitent de faire des appels externes aux smart contracts. La réactivité de ces appels dépend non pas du subgraph mais de la connectivité et de la réactivité du nœud Ethereum interrogé. En minimisant ou en éliminant les `eth_call` dans nos subgraphs, nous pouvons améliorer considérablement notre vitesse d'indexation. +Les subgraphs sont optimisés pour indexer les données d'événements émises par les contrats intelligents. Un subgraph peut également indexer les données provenant d'un `eth_call`, mais cela peut ralentir considérablement l'indexation du subgraph car les `eth_calls` nécessitent de faire des appels externes aux smart contracts. La réactivité de ces appels ne dépend pas du subgraph mais de la connectivité et de la réactivité du nœud Ethereum interrogé. En minimisant ou en éliminant les eth_calls dans nos subgraphs, nous pouvons améliorer de manière significative notre vitesse d'indexation. ### À quoi ressemble un eth_call ? -Les `eth_calls` sont souvent nécessaires lorsque les données requises pour un subgraph ne sont pas disponibles par le biais d'événements émis. Par exemple, considérons un scénario où un subgraph doit identifier si les tokens ERC20 font partie d'un pool spécifique, mais le contrat n'émet qu'un événement `Transfer` de base et n'émet pas un événement contenant les données dont nous avons besoin : +Les `eth_calls` sont souvent nécessaires lorsque les données requises pour un Subgraph ne sont pas disponibles par le biais des événements émis. 
Par exemple, considérons un scénario dans lequel un Subgraph doit identifier si les tokens ERC20 font partie d'un pool spécifique, mais le contrat n'émet qu'un événement `Transfer` de base et n'émet pas d'événement contenant les données dont nous avons besoin : ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -Cela fonctionne, mais ce n'est pas idéal car cela ralentit l'indexation de notre subgraph. +Cette méthode est fonctionnelle, mais elle n'est pas idéale car elle ralentit l'indexation de notre Subgraph. ## Comment Éliminer les `eth_calls` @@ -54,7 +54,7 @@ Idéalement, le smart contract devrait être mis à jour pour émettre toutes le event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -Avec cette mise à jour, le subgraph peut indexer directement les données requises sans appels externes : +Grâce à cette mise à jour, le Subgraph peut indexer directement les données requises sans appel externe : ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,22 +96,22 @@ La partie mise en évidence en jaune est la déclaration d'appel. La partie avan Le handler lui-même accède au résultat de ce `eth_call` exactement comme dans la section précédente en se liant au contrat et en effectuant l'appel. graph-node met en cache les résultats des `eth_calls` déclarés en mémoire et l'appel depuis le handler récupérera le résultat depuis ce cache en mémoire au lieu d'effectuer un appel RPC réel. -Note : Les eth_calls déclarés ne peuvent être effectués que dans les subgraphs avec specVersion >= 1.2.0. +Remarque : les appels eth_call déclarés ne peuvent être effectués que dans les Subgraphs dont la version specVersion est >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. 
+Vous pouvez améliorer de manière significative les performances d'indexation en minimisant ou en éliminant les `eth_calls` dans vos Subgraphs. -## Subgraph Best Practices 1-6 +## Bonnes pratiques pour les subgraphs 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/fr/subgraphs/best-practices/derivedfrom.mdx index 0f735fd35304..2966865fe02c 100644 --- a/website/src/pages/fr/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Bonne pratique pour les subgraphs 2 - Améliorer la Réactivité de l'Indexation et des Requêtes en Utilisant @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Tableaux avec @derivedFrom --- ## TLDR -Les tableaux dans votre schéma peuvent vraiment ralentir les performances d'un subgraph lorsqu'ils dépassent des milliers d'entrées. Si possible, la directive `@derivedFrom` devrait être utilisée lors de l'utilisation des tableaux car elle empêche la formation de grands tableaux, simplifie les gestionnaires et réduit la taille des entités individuelles, améliorant considérablement la vitesse d'indexation et la performance des requêtes. +Les tableaux dans votre schéma peuvent vraiment ralentir les performances d'un Subgraph lorsqu'ils dépassent des milliers d'entrées. Si possible, la directive `@derivedFrom` devrait être utilisée lors de l'utilisation de tableaux, car elle empêche la formation de grands tableaux, simplifie les gestionnaires et réduit la taille des entités individuelles, ce qui améliore considérablement la vitesse d'indexation et les performances des requêtes. ## Comment Utiliser la Directive `@derivedFrom` @@ -15,7 +15,7 @@ Il vous suffit d'ajouter une directive `@derivedFrom` après votre tableau dans comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` crée des relations efficaces de un à plusieurs, permettant à une entité de s'associer dynamiquement à plusieurs entités liées en fonction d'un champ dans l'entité liée. 
Cette approche élimine la nécessité pour les deux côtés de la relation de stocker des données dupliquées, rendant le subgraph plus efficace. +`@derivedFrom` crée des relations efficaces d'un à plusieurs, permettant à une entité de s'associer dynamiquement à plusieurs entités apparentées sur la base d'un champ de l'entité apparentée. Cette approche évite aux deux parties de la relation de stocker des données en double, ce qui rend le Subgraph plus efficace. ### Exemple de cas d'utilisation de `@derivedFrom` @@ -60,30 +60,30 @@ En ajoutant simplement la directive `@derivedFrom`, ce schéma ne stockera les "Comments" que du côté "Comments" de la relation et non du côté "Post" de la relation. Les tableaux sont stockés sur des lignes individuelles, ce qui leur permet de s'étendre de manière significative. Cela peut entraîner des tailles particulièrement grandes si leur croissance est illimitée. -Cela rendra non seulement notre subgraph plus efficace, mais débloquera également trois fonctionnalités : +Cela ne rendra pas seulement notre Subgraph plus efficace, mais débloquera également trois fonctionnalités : 1. Nous pouvons interroger le `Post` et voir tous ses commentaires. 2. Nous pouvons faire une recherche inverse et interroger n'importe quel Commentaire et voir de quel post il provient. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. Nous pouvons utiliser les [Chargeurs de champs dérivés](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) pour débloquer la possibilité d'accéder directement aux données des relations virtuelles et de les manipuler dans nos mappages de Subgraph. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
+Utilisez la directive `@derivedFrom` dans les Subgraphs pour gérer efficacement les tableaux à croissance dynamique, en améliorant l'efficacité de l'indexation et la récupération des données. -For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +Pour une explication plus détaillée des stratégies permettant d'éviter les tableaux volumineux, consultez le blog de Kevin Jones : [Bonnes pratiques en matière de développement de subgraphs : éviter les tableaux volumineux](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). -## Subgraph Best Practices 1-6 +## Bonnes pratiques pour les subgraphs 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. 
[Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/fr/subgraphs/best-practices/grafting-hotfix.mdx index 3b56e2b7eb6c..e8813f5e8a20 100644 --- a/website/src/pages/fr/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,68 +1,68 @@ --- -title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +title: Meilleure pratique pour les subgraphs 6 - Utiliser le greffage pour un déploiement rapide des correctifs +sidebarTitle: Greffage et réparation en environnement de production --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Le greffage est une fonctionnalité puissante dans le développement de Subgraphs qui vous permet de construire et de déployer de nouveaux Subgraphs tout en réutilisant les données indexées des Subgraphs existants. ### Aperçu -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.  +Cette fonction permet de déployer rapidement des correctifs pour les problèmes critiques, éliminant ainsi la nécessité de réindexer l'ensemble du Subgraph à partir de zéro. En préservant les données historiques, le greffage minimise les temps d'arrêt et assure la continuité des services de données.
-## Benefits of Grafting for Hotfixes +## Avantages du greffage pour les correctifs -1. **Rapid Deployment** +1. **Déploiement rapide** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimiser les temps d'arrêt** : Lorsqu'un Subgraph rencontre une erreur critique et cesse d'être indexé, la greffe vous permet de déployer immédiatement un correctif sans attendre la réindexation. + - **Récupération immédiate** : Le nouveau Subgraph continue à partir du dernier bloc indexé, garantissant que les services de données restent ininterrompus. -2. **Data Preservation** +2. **Préservation des données** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. - - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + - **Réutilisation des données historiques** : Le greffage copie les données existantes du Subgraph de base, de sorte que vous ne perdez pas de précieux enregistrements historiques. + - **Consistance** : Maintient la continuité des données, ce qui est crucial pour les applications qui s'appuient sur des données historiques cohérentes. -3. **Efficiency** - - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. - - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. +3. **Efficacité** - **Économie de temps et de ressources** : Évite le surcoût de calcul lié à la réindexation de grands ensembles de données. 
+ - **Focalisation sur les corrections** : Permet aux développeurs de se concentrer sur la résolution des problèmes plutôt que sur la gestion de la récupération des données. -## Best Practices When Using Grafting for Hotfixes +## Meilleures pratiques lors de l'utilisation du greffage pour les correctifs -1. **Initial Deployment Without Grafting** +1. **Déploiement initial sans greffage** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Démarrez proprement** : Déployez toujours votre Subgraph initial sans greffe pour vous assurer qu'il est stable et qu'il fonctionne comme prévu. + - **Testez minutieusement** : Validez les performances du Subgraph afin de minimiser les besoins en correctifs futurs. -2. **Implementing the Hotfix with Grafting** +2. **Mise en œuvre du correctif par greffage** - - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Identifier le problème** : Lorsqu'une erreur critique se produit, déterminez le numéro de bloc du dernier événement indexé avec succès. + - **Créer un nouveau Subgraph** : Développer un nouveau Subgraph qui inclut le correctif. + - **Configurer la greffe** : Utiliser le greffage pour copier les données jusqu'au numéro de bloc identifié à partir du Subgraph défaillant. + - **Déployer rapidement** : Publier le Subgraph greffé pour rétablir le service dès que possible. -3. **Post-Hotfix Actions** +3. 
**Actions post-correctif** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. - > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Surveillez les performances** : Assurez-vous que le Subgraph greffé est indexé correctement et que le correctif résout le problème. + - **Républier sans greffer** : Une fois stable, déployer une nouvelle version du Subgraph sans greffe pour une maintenance à long terme. + > Remarque : il n'est pas recommandé de s'appuyer indéfiniment sur le greffage, car cela peut compliquer les mises à jour et la maintenance futures. + - **Mettre à jour les références** : Rediriger tous les services ou applications pour qu'ils utilisent le nouveau Subgraph non greffé. -4. **Important Considerations** - - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. +4. **Considérations importantes** + - **Sélection minutieuse des blocs** : Choisissez soigneusement le numéro du bloc de greffage pour éviter toute perte de données. + - **Conseil** : Utilisez le numéro de bloc du dernier événement correctement traité. + - **Utiliser l'ID de déploiement** : Assurez-vous que vous faites référence à l'ID de déploiement du Subgraph de base, et non à l'ID du Subgraph. 
+ - **Note** : L'ID de déploiement est l'identifiant unique d'un déploiement de Subgraph spécifique. + - **Déclaration de fonctionnalité** : N'oubliez pas de déclarer le greffage dans le manifeste du Subgraph sous `features`. -## Example: Deploying a Hotfix with Grafting +## Exemple : Déploiement d'un correctif par greffage -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Supposons que vous ayez un Subgraph qui suit un contrat intelligent qui a cessé d'être indexé en raison d'une erreur critique. Voici comment vous pouvez utiliser le greffage pour déployer un correctif. -1. **Failed Subgraph Manifest (subgraph.yaml)** +1. **Manifeste du subgraph échoué (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -88,9 +88,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing file: ./src/old-lock.ts ``` -2. **New Grafted Subgraph Manifest (subgraph.yaml)** +2. 
**Nouveau manifeste de subgraph greffé (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -100,10 +100,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing source: address: '0xNewContractAddress' abi: Lock - startBlock: 6000001 # Block after the last indexed block + startBlock: 6000001 # Bloc suivant le dernier bloc indexé mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,71 +117,71 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph - block: 6000000 # Last successfully indexed block + base: QmBaseDeploymentID # ID de déploiement du Subgraph défaillant + block: 6000000 # Dernier bloc indexé avec succès ``` -**Explanation:** +**Explication :** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. -- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. -- **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. - - **block**: Block number where grafting should begin. +- **Mise à jour de la source de données** : Le nouveau Subgraph pointe vers 0xNewContractAddress, qui pourrait être une version corrigée du contrat intelligent. +- **Bloc de départ** : Fixé à un bloc après le dernier bloc indexé avec succès afin d'éviter de retraiter l'erreur. +- **Configuration du greffage** : + - **base** : ID de déploiement du Subgraph défaillant. + - **block** : Numéro du bloc où le greffage doit commencer. -3. **Deployment Steps** +3. **Étapes de déploiement** - - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). 
- - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - - **Deploy the Subgraph**: - - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - **Mise à jour du code** : Implémentez le correctif dans vos scripts de mappage (par exemple, handleWithdrawal). + - **Ajuster le manifeste** : Comme indiqué ci-dessus, mettez à jour le fichier `subgraph.yaml` avec les configurations de greffage. + - **Déployer le subgraph** : + - S'authentifier à l'aide de l'interface de Graph CLI. + - Déployer le nouveau Subgraph en utilisant `graph deploy`. -4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. - - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. +4. **Post-Déploiement** + - **Vérifier l'indexation** : Vérifier que le Subgraph est correctement indexé à partir du point de greffage. + - **Surveiller les données** : S'assurer que les nouvelles données sont capturées et que le correctif est efficace. + - **Planifier la republication** : Planifier le déploiement d'une version non greffée pour une stabilité à long terme. -## Warnings and Cautions +## Avertissements et précautions -While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. +Bien que le greffage soit un outil puissant pour déployer rapidement des correctifs, il existe des scénarios spécifiques dans lesquels il doit être évité afin de préserver l'intégrité des données et d'assurer des performances optimales. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. 
Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. -**Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -**Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Modifications de schéma incompatibles** : Si votre correctif nécessite de modifier le type des champs existants ou de supprimer des champs de votre schéma, le greffage n'est pas approprié. Le greffage suppose que le schéma du nouveau subgraph est compatible avec celui du subgraph de base. Des modifications incompatibles peuvent entraîner des incohérences et des erreurs dans les données, car les données existantes ne seront pas alignées sur le nouveau schéma. +- **Révisions importantes de la logique de mappage** : Lorsque le correctif implique des modifications substantielles de votre logique de mappage, telles que la modification du traitement des événements ou des fonctions de gestion, le greffage risque de ne pas fonctionner correctement. La nouvelle logique peut ne pas être compatible avec les données traitées dans le cadre de l'ancienne logique, ce qui entraîne des données incorrectes ou un échec de l'indexation. 
+- **Déploiements sur le réseau The Graph** : Le greffage n'est pas recommandé pour les subgraphs destinés au réseau décentralisé de The Graph (réseau principal). Il peut compliquer l'indexation et peut ne pas être entièrement pris en charge par tous les Indexeurs, ce qui peut entraîner un comportement inattendu ou une augmentation des coûts. Pour les déploiements sur le réseau principal, il est plus sûr de réindexer le subgraph à partir de zéro pour garantir une compatibilité et une fiabilité totales. -### Risk Management +### Gestion des risques -- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. -- **Testing**: Always test grafting in a development environment before deploying to production. +- **Intégrité des données** : Des numéros de blocs incorrects peuvent entraîner la perte ou la duplication de données. +- **Test** : Testez toujours le greffage dans un environnement de développement avant de le déployer en production. ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Le greffage est une stratégie efficace pour déployer des correctifs dans le cadre du développement de Subgraphs, vous permettant de : -- **Quickly Recover** from critical errors without re-indexing. -- **Preserve Historical Data**, maintaining continuity for applications and users. -- **Ensure Service Availability** by minimizing downtime during critical fixes. +- **Récupérer rapidement** après des erreurs critiques, sans réindexation. +- **Préserver les données historiques**, en maintenant la continuité pour les applications et les utilisateurs. +- **Assurer la disponibilité du service** en minimisant les temps d'arrêt lors des corrections critiques. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
+Cependant, il est important d'utiliser le greffage de manière judicieuse et de suivre les meilleures pratiques pour atténuer les risques. Après avoir stabilisé votre Subgraph à l'aide du correctif, prévoyez de déployer une version non greffée afin de garantir la maintenabilité à long terme. ## Ressources supplémentaires -- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting -- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. +- **[Documentation sur le greffage](/subgraphs/cookbook/grafting/)** : Remplacer un contrat et conserver son historique avec le greffage +- **[Comprendre les ID de déploiement](/subgraphs/querying/subgraph-id-vs-deployment-id/)** : Apprenez la différence entre l'ID de déploiement et l'ID de subgraph. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +En incorporant le greffage dans votre flux de développement Subgraph, vous pouvez améliorer votre capacité à répondre rapidement aux problèmes, en veillant à ce que vos services de données restent robustes et fiables. -## Subgraph Best Practices 1-6 +## Bonnes pratiques pour les subgraphs 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. 
[Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/fr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index ae0e39b2564b..e87150855b2e 100644 --- a/website/src/pages/fr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Bonne pratique pour les subgraphs 3 - Améliorer l'Indexation et les Performances de Recherche en Utilisant des Entités Immuables et des Bytes comme IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Entités immuables et Bytes comme IDs --- ## TLDR @@ -22,7 +22,7 @@ type Transfer @entity(immutable: true) { En rendant l'entité `Transfer` immuable, graph-node est capable de traiter l'entité plus efficacement, améliorant la vitesse d'indexation et la réactivité des requêtes. -Immutable Entities structures will not change in the future. An ideal entity to become an Immutable Entity would be an entity that is directly logging onchain event data, such as a `Transfer` event being logged as a `Transfer` entity. 
+Les structures des entités immuables ne changeront pas dans le futur. Une entité idéale pour devenir une Entité immuable serait une entité qui enregistre directement les données d'un événement onchain, comme un événement `Transfer` enregistré en tant qu'entité `Transfer`. ### Sous le capot @@ -50,12 +50,12 @@ Bien que d'autres types d'ID soient possibles, tels que String et Int8, il est r ### Raisons de ne pas utiliser les Bytes comme IDs 1. Si les IDs d'entité doivent être lisibles par les humains, comme les IDs numériques auto-incrémentés ou les chaînes lisibles, les Bytes pour les IDs ne doivent pas être utilisés. -2. Si nous intégrons des données d'un subgraph avec un autre modèle de données qui n'utilise pas les Bytes comme IDs, les Bytes comme IDs ne doivent pas être utilisés. +2. Si vous intégrez les données d'un Subgraph dans un autre modèle de données qui n'utilise pas les Bytes en tant qu'ID, il ne faut pas utiliser les Bytes en tant qu'ID. 3. Les améliorations de performances d'indexation et de recherche ne sont pas souhaitées. ### Concatenation Avec Bytes comme IDs -Il est courant dans de nombreux subgraphs d'utiliser la concaténation de chaînes de caractères pour combiner deux propriétés d'un événement en un seul ID, comme utiliser `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`.. Cependant, comme cela retourne une chaîne de caractères, cela nuit considérablement à la performance d'indexation et de recherche des subgraphs. +Dans de nombreux subgraphs, il est courant d'utiliser la concaténation de chaînes pour combiner deux propriétés d'un événement en un seul identifiant, par exemple en utilisant `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Cependant, comme cette méthode renvoie une chaîne de caractères, elle entrave considérablement les performances d'indexation et d'interrogation du Subgraph. Au lieu de cela, nous devrions utiliser la méthode `concatI32()` pour concaténer les propriétés des événements. 
Cette stratégie donne un ID de type Bytes beaucoup plus performant. @@ -172,20 +172,20 @@ Réponse de la requête: ## Conclusion -L'utilisation à la fois d' Entités immuables et de Bytes en tant qu'IDs a montré une amélioration marquée de l'efficacité des subgraphs. Plus précisément, des tests ont mis en évidence une augmentation de 28% des performances des requêtes et une accélération de 48% des vitesses d'indexation. +L'utilisation d'entités immuables et de Bytes comme IDs a permis d'améliorer sensiblement l'efficacité de Subgraph. Plus précisément, les tests ont mis en évidence une augmentation de 28 % des performances des requêtes et une accélération de 48 % des vitesses d'indexation. En savoir plus sur l'utilisation des Entités immuables et des Bytes en tant qu'IDs dans cet article de blog de David Lutterkort, un ingénieur logiciel chez Edge & Node : [Deux améliorations simples des performances des subgraphs](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). -## Subgraph Best Practices 1-6 +## Bonnes pratiques pour les subgraphs 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. 
[Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/best-practices/pruning.mdx b/website/src/pages/fr/subgraphs/best-practices/pruning.mdx index 82db761dcdac..ea2ff4855676 100644 --- a/website/src/pages/fr/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Meilleure Pratique Subgraph 1 - Améliorer la Vitesse des Requêtes avec le Pruning de Subgraph -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Élagage avec indexerHints --- ## TLDR -[Le pruning](/developing/creating-a-subgraph/#prune) (élagage) retire les entités archivées de la base de données des subgraphs jusqu'à un bloc donné, et retirer les entités inutilisées de la base de données d'un subgraph améliorera souvent de manière spectaculaire les performances de requête d'un subgraph. L'utilisation de `indexerHints` est un moyen simple de réaliser le pruning d'un subgraph. +[L'élagage](/developing/creating-a-subgraph/#prune) supprime les entités archivées de la base de données du Subgraph jusqu'à un bloc donné, et la suppression des entités inutilisées de la base de données d'un Subgraph améliore les performances d'interrogation d'un Subgraph, souvent de façon spectaculaire. L'utilisation de `indexerHints` est un moyen facile d'élaguer un Subgraph. 
## Comment effectuer le Pruning d'un subgraph avec `indexerHints` @@ -13,14 +13,14 @@ Ajoutez une section appelée `indexerHints` dans le manifest. `indexerHints` dispose de trois options de `prune` : -- `prune: auto`: Conserve l'historique minimum nécessaire tel que défini par l'Indexeur, optimisant ainsi les performances des requêtes. C'est le paramètre généralement recommandé et celui par défaut pour tous les subgraphs créés par `graph-cli` >= 0.66.0. +- `prune: auto` : Conserve le minimum d'historique nécessaire tel que défini par l'Indexeur, optimisant ainsi les performances des requêtes. C'est le réglage généralement recommandé et c'est le réglage par défaut pour tous les Subgraphs créés par `graph-cli` >= 0.66.0. - `prune: `: Définit une limite personnalisée sur le nombre de blocs historiques à conserver. -- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. +- `prune: never` : Pas d'élagage des données historiques ; conserve l'historique complet et est la valeur par défaut s'il n'y a pas de section `indexerHints`. `prune: never` devrait être sélectionné si les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries) sont souhaitées. -Nous pouvons ajouter `indexerHints` à nos subgraphs en mettant à jour notre `subgraph.yaml`: +Nous pouvons ajouter des `indexerHints` à nos Subgraphs en mettant à jour notre `subgraph.yaml` : ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -33,24 +33,24 @@ dataSources: ## Points Importants -- If [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality.
Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries. Instead, prune using `indexerHints: prune: ` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data. +- Si les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries) sont souhaitées en plus de l'élagage, l'élagage doit être effectué avec précision pour conserver la fonctionnalité des requêtes chronologiques. Pour cette raison, il n'est généralement pas recommandé d'utiliser `indexerHints: prune: auto` avec les requêtes chronologiques. Au lieu de cela, élaguez en utilisant `indexerHints: prune: ` pour élaguer précisément à une hauteur de bloc qui préserve les données historiques requises par les requêtes chronologiques, ou utilisez `prune: never` pour conserver toutes les données. -It is not possible to [graft](/subgraphs/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: ` that will accurately retain a set number of blocks (e.g., enough for six months). +- Il n'est pas possible de [greffer](/subgraphs/cookbook/grafting/) à une hauteur de bloc qui a été élaguée. Si le greffage est effectué de manière routinière et que l'élagage est souhaité, il est recommandé d'utiliser `indexerHints: prune: ` qui conservera avec précision un nombre défini de blocs (par exemple, suffisamment pour six mois). ## Conclusion -L'élagage en utilisant `indexerHints` est une meilleure bonne pour le développement de subgraphs, offrant des améliorations significatives des performances des requêtes. +L'élagage à l'aide de `indexerHints` est une meilleure pratique pour le développement de Subgraphs, offrant des améliorations significatives de la performance des requêtes. -## Subgraph Best Practices 1-6 +## Bonnes pratiques pour les subgraphs 1-6 -1.
[Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/best-practices/timeseries.mdx b/website/src/pages/fr/subgraphs/best-practices/timeseries.mdx index 39363a06651f..9be75d158d07 100644 --- a/website/src/pages/fr/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/timeseries.mdx @@ -1,49 +1,53 @@ --- -title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +title: Meilleure pratique pour les subgraphs 5 - Simplifier et optimiser avec les séries chronologiques et les agrégations +sidebarTitle: Séries chronologiques et agrégations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +L'utilisation de la nouvelle fonction de séries chronologiques et d'agrégations dans les Subgraphs peut améliorer de manière significative la vitesse d'indexation et la performance des requêtes. ## Aperçu -Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. +Les séries chronologiques et les agrégations réduisent les coûts de traitement des données et accélèrent les requêtes en déchargeant les calculs d'agrégation dans la base de données et en simplifiant le code de mappage. Cette approche est particulièrement efficace lorsqu'il s'agit de traiter de grands volumes de données chronologiques. -## Benefits of Timeseries and Aggregations +## Avantages des séries chronologiques et des agrégations -1. Improved Indexing Time +1.
Amélioration du temps d'indexation -- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. -- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. +- Moins de données à charger : Les mappages traitent moins de données puisque les points de données bruts sont stockés sous forme d'entités chronologiques immuables. +- Agrégations gérées par la base de données : Les agrégations sont automatiquement calculées par la base de données, ce qui réduit la charge de travail sur les mappages. -2. Simplified Mapping Code +2. Code de mappage simplifié -- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. -- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. +- Pas de calculs manuels : Les développeurs n'ont plus besoin d'écrire une logique d'agrégation complexe dans les mappages. +- Complexité réduite : Simplifie la maintenance du code et minimise les risques d'erreurs. -3. Dramatically Faster Queries +3. Des requêtes beaucoup plus rapides -- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. -- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. +- Données immuables : Toutes les données de séries chronologiques sont immuables, ce qui permet un stockage et une extraction efficaces. +- Séparation efficace des données : Les agrégats sont stockés séparément des données chronologiques brutes, ce qui permet aux requêtes de traiter beaucoup moins de données, souvent plusieurs ordres de grandeur en moins. ### Points Importants -- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing.
-- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. -- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. +- Données immuables : Les données chronologiques ne peuvent pas être modifiées une fois écrites, ce qui garantit l'intégrité des données et simplifie l'indexation. +- Gestion automatique de l'ID et de l'horodatage : les champs `id` et `timestamp` sont automatiquement gérés par graph-node, ce qui réduit les erreurs potentielles. +- Stockage efficace des données : En séparant les données brutes des agrégats, le stockage est optimisé et les requêtes s'exécutent plus rapidement. -## How to Implement Timeseries and Aggregations +## Comment mettre en œuvre des séries chronologiques et des agrégations -### Defining Timeseries Entities +### Prérequis -A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: +Vous avez besoin de `spec version 1.1.0` pour cette fonctionnalité. -- Immutable: Timeseries entities are always immutable. -- Mandatory Fields: - - `id`: Must be of type `Int8!` and is auto-incremented. - - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. +### Définition des entités de séries chronologiques + +Une entité de séries chronologiques représente des points de données bruts collectés au fil du temps. Elle est définie par l'annotation `@entity(timeseries: true)`. Exigences principales : + +- Immuable : Les entités de séries chronologiques sont toujours immuables. +- Champs obligatoires : + - `id` : Doit être de type `Int8!` et est auto-incrémenté. + - `timestamp` : Doit être de type `Timestamp!` et est automatiquement fixé à l'horodatage du bloc. L'exemple: @@ -51,16 +55,16 @@ L'exemple: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp!
- price: BigDecimal! + amount: BigDecimal! } ``` -### Defining Aggregation Entities +### Définition des entités d'agrégation -An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: +Une entité d'agrégation calcule des valeurs agrégées à partir d'une source de séries chronologiques. Elle est définie par l'annotation `@aggregation`. Composants clés : -- Annotation Arguments: - - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). +- Arguments d'annotation : + - `intervals` : Spécifie les intervalles de temps (par exemple, `["hour", "day"]`). L'exemple: @@ -68,15 +72,15 @@ L'exemple: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +Dans cet exemple, Stats agrège le champ `amount` de Data sur des intervalles horaires et quotidiens, en calculant la somme. -### Querying Aggregated Data +### Interroger des données agrégées -Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. +Les agrégations sont exposées via des champs de requête qui permettent le filtrage et la récupération sur la base de dimensions et d'intervalles de temps.
L'exemple: -### Timeseries Entity +### Entité de séries chronologiques ```graphql type TokenData @entity(timeseries: true) { @@ -116,7 +120,7 @@ type TokenData @entity(timeseries: true) { } ``` -### Aggregation Entity with Dimension +### Entité d'agrégation avec dimension ```graphql type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { @@ -129,15 +133,15 @@ type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { } ``` -- Dimension Field: token groups the data, so aggregates are computed per token. -- Aggregates: - - totalVolume: Sum of amount. - - priceUSD: Last recorded priceUSD. - - count: Cumulative count of records. +- Champ de dimension : `token` regroupe les données, de sorte que les agrégats sont calculés par jeton. +- Agrégats : + - totalVolume: Somme de `amount`. + - priceUSD: Dernier priceUSD enregistré. + - count: Nombre cumulé d'enregistrements. -### Aggregation Functions and Expressions +### Fonctions et expressions d'agrégation -Supported aggregation functions: +Fonctions d'agrégation prises en charge : - sum - count @@ -146,50 +150,50 @@ Supported aggregation functions: - first - last -### The arg in @aggregate can be +### L'argument dans @aggregate peut être -- A field name from the timeseries entity. -- An expression using fields and constants. +- Un nom de champ de l'entité de série chronologique. +- Une expression utilisant des champs et des constantes.
-### Examples of Aggregation Expressions +### Exemples d'expressions d'agrégation -- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \_ amount") -- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") -- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") +- Somme de la valeur du jeton: @aggregate(fn: "sum", arg: "priceUSD * amount") +- Montant positif maximum: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Somme conditionnelle: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") -Supported operators and functions include basic arithmetic (+, -, \_, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. +Les opérateurs et fonctions pris en charge comprennent l'arithmétique de base (+, -, *, /), les opérateurs de comparaison, les opérateurs logiques (and, or, not) et les fonctions SQL telles que greatest, least, coalesce, etc. -### Query Parameters +### Paramètres de requête -- interval: Specifies the time interval (e.g., "hour"). -- where: Filters based on dimensions and timestamp ranges. -- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). +- interval: Spécifie l'intervalle de temps (par exemple, "hour"). +- where: Filtres basés sur les dimensions et les plages d'horodatage. +- timestamp_gte / timestamp_lt: Filtres pour les heures de début et de fin (microsecondes depuis l'epoch). ### Notes -- Sorting: Results are automatically sorted by timestamp and id in descending order. -- Current Data: An optional current argument can include the current, partially filled interval. +- Tri : Les résultats sont automatiquement triés par `timestamp` et par `id`, dans l'ordre décroissant. +- Données actuelles : Un argument facultatif `current` peut inclure l'intervalle actuel, partiellement rempli.
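À titre d'illustration, ces paramètres peuvent être combinés dans une seule requête. Voici une esquisse hypothétique interrogeant l'entité d'agrégation `TokenStats` définie plus haut (l'adresse du jeton et la valeur d'horodatage sont des exemples fictifs) :

```graphql
{
  tokenStats(
    interval: "hour"
    where: {
      token: "0x0000000000000000000000000000000000000000"
      timestamp_gte: 1704067200000000
    }
  ) {
    id
    timestamp
    token
    totalVolume
    priceUSD
    count
  }
}
```

Notez que la valeur de `timestamp_gte` est exprimée en microsecondes depuis l'epoch, comme indiqué ci-dessus pour les paramètres de requête.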
### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +La mise en œuvre de séries chronologiques et d'agrégations dans des Subgraphs est une bonne pratique pour les projets traitant de données temporelles. Cette approche : -- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. -- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. -- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. +- Améliore les performances : Accélère l'indexation et l'interrogation en réduisant le coût du traitement des données. +- Simplifie le développement : Élimine la nécessité d'une logique d'agrégation manuelle dans les mappages. +- Évolue efficacement : Traite d'importants volumes de données sans compromettre la vitesse ou la réactivité. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +En adoptant ce modèle, les développeurs peuvent construire des subgraphs plus efficaces et plus évolutifs, offrant un accès aux données plus rapide et plus fiable aux utilisateurs finaux. Pour en savoir plus sur l'implémentation des séries chronologiques et des agrégations, consultez le [Readme des séries chronologiques et agrégations](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) et envisagez d'expérimenter cette fonctionnalité dans vos subgraphs. -## Subgraph Best Practices 1-6 +## Bonnes pratiques pour les subgraphs 1-6 -1.
[Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/billing.mdx b/website/src/pages/fr/subgraphs/billing.mdx index ba4239f2ea01..c718e8864b9d 100644 --- a/website/src/pages/fr/subgraphs/billing.mdx +++ b/website/src/pages/fr/subgraphs/billing.mdx @@ -2,20 +2,22 @@ title: Facturation --- -## Querying Plans +## Plans de requêtes -Il y a deux plans à utiliser lorsqu'on interroge les subgraphs sur le réseau de The Graph. +Il existe deux plans à utiliser pour interroger les subgraphs sur The Graph Network. 
-- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. +- **Plan Gratuit (Free Plan)** : Le plan gratuit comprend 100 000 requêtes mensuelles gratuites et un accès complet à l'environnement de test Subgraph Studio. Ce plan est conçu pour les amateurs, les participants aux hackathons et ceux qui ont des projets parallèles pour essayer The Graph avant de faire évoluer leur dapp. -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **Plan Croissance (Growth Plan)** : Le plan de croissance comprend tout ce qui est inclus dans le plan gratuit ; au-delà de 100 000 requêtes mensuelles, les requêtes nécessitent un paiement en GRT ou par carte de crédit. Le plan de croissance est suffisamment flexible pour couvrir les équipes qui ont établi des dapps à travers une variété de cas d'utilisation. + +En savoir plus sur les tarifs [ici](https://thegraph.com/studio-pricing/). ## Paiements de Requêtes avec Carte de Crédit - Pour mettre en place la facturation par carte de crédit/débit, les utilisateurs doivent accéder à Subgraph Studio (https://thegraph.com/studio/) - 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). + 1. Accédez à la [page de Facturation de Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Cliquez sur le bouton "Connecter le portefeuille" dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection des portefeuilles. Sélectionnez votre portefeuille et cliquez sur "Connecter". 3.
Choisissez « Mettre à niveau votre abonnement » si vous effectuez une mise à niveau depuis le plan gratuit, ou choisissez « Gérer l'abonnement » si vous avez déjà ajouté des GRT à votre solde de facturation par le passé. Ensuite, vous pouvez estimer le nombre de requêtes pour obtenir une estimation du prix, mais ce n'est pas une étape obligatoire. 4. Pour choisir un paiement par carte de crédit, choisissez “Credit card” comme mode de paiement et remplissez les informations de votre carte de crédit. Ceux qui ont déjà utilisé Stripe peuvent utiliser la fonctionnalité Link pour remplir automatiquement leurs informations. @@ -45,17 +47,17 @@ L'utilisation du GRT sur Arbitrum est nécessaire pour le paiement des requêtes - Alternativement, vous pouvez acquérir du GRT directement sur Arbitrum via un échange décentralisé. -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt). +> Cette section est écrite en supposant que vous avez déjà des GRT dans votre portefeuille, et que vous êtes sur Arbitrum. Si vous n'avez pas de GRT, vous pouvez apprendre à en obtenir [ici](#getting-grt). Une fois que vous avez transféré du GRT, vous pouvez l'ajouter à votre solde de facturation. ### Ajout de GRT à l'aide d'un portefeuille -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Accédez à la [page de Facturation de Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Cliquez sur le bouton "Connecter le portefeuille" dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection des portefeuilles. Sélectionnez votre portefeuille et cliquez sur "Connecter". 3. Cliquez sur le bouton « Manage » situé dans le coin supérieur droit. 
Les nouveaux utilisateurs verront l'option « Upgrade to Growth plan » (Passer au plan de croissance), tandis que les utilisateurs existants devront sélectionner « Deposit from wallet » (Déposer depuis le portefeuille). 4. Utilisez le curseur pour estimer le nombre de requêtes que vous prévoyez d’effectuer sur une base mensuelle. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Pour des suggestions sur le nombre de requêtes que vous pouvez utiliser, consultez notre page **Frequently Asked Questions** (Questions fréquemment posées). 5. Choisissez "Cryptocurrency". Le GRT est actuellement la seule cryptomonnaie acceptée sur le réseau The Graph. 6. Sélectionnez le nombre de mois pour lesquels vous souhaitez effectuer un paiement anticipé. - Le paiement anticipé ne vous engage pas sur une utilisation future. Vous ne serez facturé que pour ce que vous utiliserez et vous pourrez retirer votre solde à tout moment. @@ -68,7 +70,7 @@ Une fois que vous avez transféré du GRT, vous pouvez l'ajouter à votre solde ### Retirer des GRT en utilisant un portefeuille -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Accédez à la [page de Facturation de Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Cliquez sur le bouton "Connect Wallet" dans le coin supérieur droit de la page. Sélectionnez votre portefeuille et cliquez sur "Connect". 3. Cliquez sur le bouton « Gérer » dans le coin supérieur droit de la page. Sélectionnez « Retirer des GRT ». Un panneau latéral apparaîtra. 4. Entrez le montant de GRT que vous voudriez retirer. @@ -77,11 +79,11 @@ Une fois que vous avez transféré du GRT, vous pouvez l'ajouter à votre solde ### Ajout de GRT à l'aide d'un portefeuille multisig -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. 
Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. +1. Accédez à la [page de Facturation de Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). +2. Cliquez sur le bouton "Connecter votre Portefeuille" dans le coin supérieur droit de la page. Sélectionnez votre portefeuille et cliquez sur "Connecter". Si vous utilisez [Gnosis-Safe](https://gnosis-safe.io/), vous pourrez connecter votre portefeuille multisig ainsi que votre portefeuille de signature. Ensuite, signez le message associé. Cela ne coûtera pas de gaz. 3. Cliquez sur le bouton « Manage » situé dans le coin supérieur droit. Les nouveaux utilisateurs verront l'option « Upgrade to Growth plan » (Passer au plan de croissance), tandis que les utilisateurs existants devront sélectionner « Deposit from wallet » (Déposer depuis le portefeuille). 4. Utilisez le curseur pour estimer le nombre de requêtes que vous prévoyez d’effectuer sur une base mensuelle. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Pour des suggestions sur le nombre de requêtes que vous pouvez utiliser, consultez notre page **Frequently Asked Questions** (Questions fréquemment posées). 5. Choisissez "Cryptocurrency". Le GRT est actuellement la seule cryptomonnaie acceptée sur le réseau The Graph. 6. Sélectionnez le nombre de mois pour lesquels vous souhaitez effectuer un paiement anticipé. - Le paiement anticipé ne vous engage pas sur une utilisation future. Vous ne serez facturé que pour ce que vous utiliserez et vous pourrez retirer votre solde à tout moment. @@ -99,7 +101,7 @@ Cette section vous montrera comment obtenir du GRT pour payer les frais de requ Voici un guide étape par étape pour acheter de GRT sur Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account.
+1. Allez sur [Coinbase](https://www.coinbase.com/) et créez un compte. 2. Dès que vous aurez créé un compte, vous devrez vérifier votre identité par le biais d'un processus connu sous le nom de KYC (Know Your Customer ou Connaître Votre Client). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Une fois votre identité vérifiée, vous pouvez acheter des GRT. Pour ce faire, cliquez sur le bouton « Acheter/Vendre » en haut à droite de la page. 4. Sélectionnez la devise que vous souhaitez acheter. Sélectionnez GRT. @@ -107,19 +109,19 @@ Voici un guide étape par étape pour acheter de GRT sur Coinbase. 6. Sélectionnez la quantité de GRT que vous souhaitez acheter. 7. Vérifiez votre achat et cliquez sur "Buy GRT". 8. Confirmez votre achat et vous aurez acheté des GRT avec succès. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Vous pouvez transférer les GRT de votre compte à votre portefeuille tel que [MetaMask](https://metamask.io/). - Pour transférer les GRT dans votre portefeuille, cliquez sur le bouton "Accounts" en haut à droite de la page. - Cliquez sur le bouton "Send" à côté du compte GRT. - Entrez le montant de GRT que vous souhaitez envoyer et l'adresse du portefeuille vers laquelle vous souhaitez l'envoyer. - Cliquez sur "Continue" et confirmez votre transaction. -Veuillez noter que pour des montants d'achat plus importants, Coinbase peut vous demander d'attendre 7 à 10 jours avant de transférer le montant total vers un portefeuille. -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency).
+Vous pouvez en savoir plus sur l'acquisition de GRT sur Coinbase [ici](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance Ceci est un guide étape par étape pour l'achat des GRT sur Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Allez sur [Binance](https://www.binance.com/en) et créez un compte. 2. Dès que vous aurez créé un compte, vous devrez vérifier votre identité par le biais d'un processus connu sous le nom de KYC (Know Your Customer ou Connaître Votre Client). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Une fois votre identité vérifiée, vous pouvez acheter des GRT. Pour ce faire, cliquez sur le bouton « Acheter maintenant » sur la bannière de la page d'accueil. 4. Vous accéderez à une page où vous pourrez sélectionner la devise que vous souhaitez acheter. Sélectionnez GRT. @@ -127,27 +129,27 @@ Ceci est un guide étape par étape pour l'achat des GRT sur Binance. 6. Sélectionnez la quantité de GRT que vous souhaitez acheter. 7. Confirmez votre achat et cliquez sur « Acheter des GRT ». 8. Confirmez votre achat et vous pourrez voir vos GRT dans votre portefeuille Binance Spot. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. +9. Vous pouvez retirer les GRT de votre compte vers votre portefeuille tel que [MetaMask](https://metamask.io/). + - [Pour retirer](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) les GRT dans votre portefeuille, ajoutez l'adresse de votre portefeuille à la liste blanche des retraits. 
- Cliquez sur le bouton « portefeuille », cliquez sur retrait et sélectionnez GRT. - Saisissez le montant de GRT que vous souhaitez envoyer et l'adresse du portefeuille sur liste blanche à laquelle vous souhaitez l'envoyer. - Cliquer sur « Continuer » et confirmez votre transaction. -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Vous pouvez en savoir plus sur l'achat de GRT sur Binance [ici](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ### Uniswap Voici comment vous pouvez acheter des GRT sur Uniswap. -1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. +1. Allez sur [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) et connectez votre portefeuille. 2. Sélectionnez le jeton dont vous souhaitez échanger. Sélectionnez ETH. 3. Sélectionnez le jeton vers lequel vous souhaitez échanger. Sélectionnez GRT. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) + - Assurez-vous que vous échangez contre le bon jeton. L'adresse du contrat intelligent GRT sur Arbitrum One est la suivante : [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) 4. Entrez le montant d'ETH que vous souhaitez échanger. 5. Cliquez sur « Échanger ». 6. Confirmez la transaction dans votre portefeuille et attendez qu'elle soit traitée. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Vous pouvez en savoir plus sur l'obtention de GRT sur Uniswap [ici](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). 
## Obtenir de l'Ether @@ -157,7 +159,7 @@ Cette section vous montrera comment obtenir de l'Ether (ETH) pour payer les frai Ce sera un guide étape par étape pour acheter de l'ETH sur Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Allez sur [Coinbase](https://www.coinbase.com/) et créez un compte. 2. Une fois que vous avez créé un compte, vérifiez votre identité via un processus appelé KYC (ou Know Your Customer). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Une fois que vous avez vérifié votre identité, achetez de l'ETH en cliquant sur le bouton « Acheter/Vendre » en haut à droite de la page. 4. Choisissez la devise que vous souhaitez acheter. Sélectionnez ETH. @@ -165,20 +167,20 @@ Ce sera un guide étape par étape pour acheter de l'ETH sur Coinbase. 6. Entrez le montant d'ETH que vous souhaitez acheter. 7. Vérifiez votre achat et cliquez sur « Acheter des Ethereum ». 8. Confirmez votre achat et vous aurez acheté avec succès de l'ETH. -9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/). +9. Vous pouvez transférer les ETH de votre compte Coinbase vers votre portefeuille tel que [MetaMask](https://metamask.io/). - Pour transférer l'ETH vers votre portefeuille, cliquez sur le bouton « Comptes » en haut à droite de la page. - Cliquez sur le bouton « Envoyer » à côté du compte ETH. - Entrez le montant d'ETH que vous souhaitez envoyer et l'adresse du portefeuille vers lequel vous souhaitez l'envoyer. - Assurez-vous que vous envoyez à votre adresse de portefeuille Ethereum sur Arbitrum One. - Cliquer sur « Continuer » et confirmez votre transaction. -You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency).
+Vous pouvez en savoir plus sur l'obtention d'ETH sur Coinbase [ici](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance Ce sera un guide étape par étape pour acheter des ETH sur Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Allez sur [Binance](https://www.binance.com/en) et créez un compte. 2. Une fois que vous avez créé un compte, vérifiez votre identité via un processus appelé KYC (ou Know Your Customer). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Une fois que vous avez vérifié votre identité, achetez des ETH en cliquant sur le bouton « Acheter maintenant » sur la bannière de la page d'accueil. 4. Choisissez la devise que vous souhaitez acheter. Sélectionnez ETH. @@ -186,14 +188,14 @@ Ce sera un guide étape par étape pour acheter des ETH sur Binance. 6. Entrez le montant d'ETH que vous souhaitez acheter. 7. Vérifiez votre achat et cliquez sur « Acheter des Ethereum ». 8. Confirmez votre achat et vous verrez votre ETH dans votre portefeuille Binance Spot. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Vous pouvez retirer les ETH de votre compte vers votre portefeuille tel que [MetaMask](https://metamask.io/). - Pour retirer l'ETH vers votre portefeuille, ajoutez l'adresse de votre portefeuille à la liste blanche de retrait. - Cliquez sur le bouton « portefeuille », cliquez sur retirer et sélectionnez ETH. - Entrez le montant d'ETH que vous souhaitez envoyer et l'adresse du portefeuille sur liste blanche à laquelle vous souhaitez l'envoyer. - Assurez-vous que vous envoyez à votre adresse de portefeuille Ethereum sur Arbitrum One. - Cliquer sur « Continuer » et confirmez votre transaction.
-You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Vous pouvez en savoir plus sur l'achat d'ETH sur Binance [ici](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ## FAQ sur la facturation @@ -203,11 +205,11 @@ Vous n'avez pas besoin de savoir à l'avance combien de requêtes vous aurez bes Nous vous recommandons de surestimer le nombre de requêtes dont vous aurez besoin afin de ne pas avoir à recharger votre solde fréquemment. Pour les applications de petite et moyenne taille, une bonne estimation consiste à commencer par 1 à 2 millions de requêtes par mois et à surveiller de près l'utilisation au cours des premières semaines. Pour les applications plus grandes, une bonne estimation consiste à utiliser le nombre de visites quotidiennes que reçoit votre site multiplié par le nombre de requêtes que votre page la plus active effectue à son ouverture. -Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage. +Bien entendu, les nouveaux utilisateurs et les utilisateurs existants peuvent contacter l'équipe BD d'Edge & Node pour une consultation afin d'en savoir plus sur l'utilisation prévue. ### Puis-je retirer du GRT de mon solde de facturation ? -Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). +Oui, vous pouvez toujours retirer de votre solde de facturation les GRT qui n'ont pas encore été utilisés pour des requêtes. 
Le contrat de facturation est uniquement conçu pour transférer (bridge) les GRT du réseau principal Ethereum vers le réseau Arbitrum. Si vous souhaitez transférer vos GRT d'Arbitrum vers le réseau principal Ethereum, vous devrez utiliser le [Bridge Arbitrum](https://bridge.arbitrum.io/?l2ChainId=42161). ### Que se passe-t-il lorsque mon solde de facturation est épuisé ? Vais-je recevoir un avertissement ? diff --git a/website/src/pages/fr/subgraphs/developing/_meta-titles.json b/website/src/pages/fr/subgraphs/developing/_meta-titles.json index 01a91b09ed77..c49c19eec25d 100644 --- a/website/src/pages/fr/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/fr/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { "creating": "Creating", "deploying": "Deploying", - "publishing": "Publishing", + "publishing": "Publication", "managing": "Managing" } diff --git a/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx b/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx index 12e0f444c4d8..5992294de057 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Fonctionnalités avancées des subgraphs ## Aperçu -Ajoutez et implémentez des fonctionnalités avancées de subgraph pour améliorer la construction de votre subgraph. +Ajouter et mettre en œuvre des fonctionnalités avancées de subgraph pour améliorer la construction de votre subgraph.
-À partir de `specVersion` `0.0.4`, les fonctionnalités de subgraph doivent être explicitement déclarées dans la section `features` au niveau supérieur du fichier de manifeste, en utilisant leur nom en `camelCase` comme indiqué dans le tableau ci-dessous : +A partir de la `specVersion` `0.0.4`, les fonctionnalités de Subgraph doivent être explicitement déclarées dans la section `features` au premier niveau du fichier manifest, en utilisant leur nom `camelCase`, comme listé dans le tableau ci-dessous : | Fonctionnalité | Nom | | --------------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Ajoutez et implémentez des fonctionnalités avancées de subgraph pour amélior | [Recherche plein texte](#defining-fulltext-search-fields) | `fullTextSearch` | | [Greffage](#grafting-onto-existing-subgraphs) | `grafting` | -Par exemple, si un subgraph utilise les fonctionnalités **Full-Text Search** et **Non-fatal Errors**, le champ `features` dans le manifeste devrait être : +Par exemple, si un subgraph utilise les fonctionnalités **Recherche plein texte** et **Erreurs non fatales**, le champ `features` dans le manifeste devrait être : ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,17 +25,17 @@ features: dataSources: ... ``` -> Notez que L'utilisation d'une fonctionnalité sans la déclarer entraînera une **validation error** lors du déploiement du subgraph, mais aucune erreur ne se produira si une fonctionnalité est déclarée mais non utilisée. +> Notez que l'utilisation d'une fonctionnalité sans la déclarer entraînera une **erreur de validation** lors du déploiement du subgraph, mais aucune erreur ne se produira si une fonctionnalité est déclarée mais n'est pas utilisée. ## Séries chronologiques et agrégations -Prerequisites: +Prérequis : -- Subgraph specVersion must be ≥1.1.0. +- Le subgraph specVersion doit être ≥1.1.0. 
-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Les séries chronologiques et les agrégations permettent à votre subgraph de suivre des statistiques telles que le prix moyen quotidien, le nombre total de transferts par heure, etc. -This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +Cette fonctionnalité introduit deux nouveaux types d'entités de subgraph. Les entités de séries chronologiques enregistrent des points de données avec des horodatages. Les entités d'agrégation effectuent des calculs prédéfinis sur les points de données des séries chronologiques sur une base horaire ou quotidienne, puis stockent les résultats pour faciliter l'accès via GraphQL. ### Exemple de schéma @@ -53,19 +53,19 @@ type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { } ``` -### How to Define Timeseries and Aggregations +### Comment définir des séries chronologiques et des agrégations -Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must: +Les entités de séries chronologiques sont définies avec `@entity(timeseries: true)` dans le schéma GraphQL. Chaque entité de séries chronologiques doit : -- have a unique ID of the int8 type -- have a timestamp of the Timestamp type -- include data that will be used for calculation by aggregation entities. +- avoir un ID unique de type int8 +- avoir un horodatage de type Timestamp +- inclure les données qui seront utilisées pour le calcul par les entités d'agrégation. -These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities.
+Ces entités de séries chronologiques peuvent être enregistrées dans des gestionnaires de déclencheurs ordinaires et servent de "données brutes" pour les entités d'agrégation. -Aggregation entities are defined with `@aggregation` in the GraphQL schema. Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). +Les entités d'agrégation sont définies avec `@aggregation` dans le schéma GraphQL. Chaque entité d'agrégation définit la source à partir de laquelle elle recueillera les données (qui doit être une entité de série chronologique), définit les intervalles (par exemple, heure, jour) et spécifie la fonction d'agrégation qu'elle utilisera (par exemple, sum, count, min, max, first, last). -Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. +Les entités d'agrégation sont automatiquement calculées sur la base de la source spécifiée à la fin de l'intervalle requis. #### Intervalles d'Agrégation disponibles @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Erreurs non fatales -Les erreurs d'indexation sur les subgraphs déjà synchronisés entraîneront, par défaut, l'échec du subgraph et l'arrêt de la synchronisation. Les subgraphs peuvent également être configurés pour continuer la synchronisation en présence d'erreurs, en ignorant les modifications apportées par le gestionnaire qui a provoqué l'erreur. Cela donne aux auteurs de subgraphs le temps de corriger leurs subgraphs pendant que les requêtes continuent d'être traitées sur le dernier bloc, bien que les résultats puissent être incohérents en raison du bogue à l'origine de l'erreur. Notez que certaines erreurs sont toujours fatales. Pour être non fatale, l'erreur doit être connue pour être déterministe. 
+Les erreurs d'indexation sur des subgraphs déjà synchronisés entraîneront, par défaut, l'échec du subgraph et l'arrêt de la synchronisation. Les subgraphs peuvent également être configurés pour continuer la synchronisation en présence d'erreurs, en ignorant les modifications apportées par le gestionnaire qui a provoqué l'erreur. Les auteurs de subgraphs ont ainsi le temps de corriger leurs subgraphs tandis que les requêtes continuent d'être servies par rapport au dernier bloc, bien que les résultats puissent être incohérents en raison du bug qui a provoqué l'erreur. Notez que certaines erreurs sont toujours fatales. Pour être non fatale, l'erreur doit être connue comme étant déterministe. -> **Note:** The Graph Network ne supporte pas encore les erreurs non fatales, et les développeurs ne doivent pas déployer de subgraphs utilisant cette fonctionnalité sur le réseau via le Studio. +> **Note:** The Graph Network ne prend pas encore en charge les erreurs non fatales, et les développeurs ne doivent pas déployer les subgraphs utilisant cette fonctionnalité sur le réseau via le Studio. -L'activation des erreurs non fatales nécessite la définition de l'indicateur de fonctionnalité suivant sur le manifeste du subgraph : +Pour activer les erreurs non fatales, il faut définir l'indicateur de fonctionnalité suivant dans le manifeste du subgraph : ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -La requête doit également opter pour l'interrogation de données avec des incohérences potentielles via l'argument `subgraphError`. Il est également recommandé d'interroger `_meta` pour vérifier si le subgraph a ignoré des erreurs, comme dans l'exemple : +La requête doit également accepter d'interroger des données avec des incohérences potentielles grâce à l'argument `subgraphError`. 
Il est également recommandé d'interroger `_meta` pour vérifier si le subgraph a ignoré les erreurs, comme dans l'exemple : ```graphql foos(first: 100, subgraphError: allow) { @@ -145,7 +145,7 @@ Si le subgraph rencontre une erreur, cette requête renverra à la fois les donn ## File Data Sources de fichiers IPFS/Arweave -Les sources de données de fichiers sont une nouvelle fonctionnalité de subgraph permettant d'accéder aux données hors chaîne pendant l'indexation de manière robuste et extensible. Les sources de données de fichiers prennent en charge la récupération de fichiers depuis IPFS et Arweave. +Les fichiers sources de données sont une nouvelle fonctionnalité de Subgraph permettant d'accéder à des données hors chaîne pendant l'indexation d'une manière robuste et extensible. Les fichiers sources de données permettent de récupérer des fichiers à partir d'IPFS et d'Arweave. > Cela jette également les bases d’une indexation déterministe des données hors chaîne, ainsi que de l’introduction potentielle de données arbitraires provenant de HTTP. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ L'exemple: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//Cet exemple de code concerne un sous-graphe de Crypto coven. Le hachage ipfs ci-dessus est un répertoire contenant les métadonnées des jetons pour toutes les NFT de l'alliance cryptographique. +//Cet exemple de code concerne un subgraph Crypto coven. 
Le hash ipfs ci-dessus est un répertoire contenant les métadonnées des jetons pour tous les NFT de la collection Crypto coven. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //Ceci crée un chemin vers les métadonnées pour un seul Crypto coven NFT. Il concatène le répertoire avec "/" + nom de fichier + ".json" + //Cette opération crée un chemin d'accès aux métadonnées d'un seul Crypto coven NFT. Elle concatène le répertoire avec "/" + nom de fichier + ".json" token.ipfsURI = tokenIpfsHash @@ -317,23 +317,23 @@ Cela créera une nouvelle source de données de fichier, qui interrogera le poin Cet exemple utilise le CID comme référence entre l'entité parent `Token` et l'entité résultante `TokenMetadata`. -> Auparavant, c'est à ce stade qu'un développeur de subgraphs aurait appelé `ipfs.cat(CID)` pour récupérer le fichier +> Auparavant, c'est à ce stade qu'un développeur de Subgraph aurait appelé `ipfs.cat(CID)` pour récupérer le fichier Félicitations, vous utilisez des sources de données de fichiers ! #### Déployer vos subgraphs -Vous pouvez maintenant `construire` et `déployer` votre subgraph sur n'importe quel Graph Node >=v0.30.0-rc.0. +Vous pouvez maintenant `construire` et `déployer` votre Subgraph sur n'importe quel Graph Node >=v0.30.0-rc.0. #### Limitations -Les entités et les gestionnaires de sources de données de fichiers sont isolés des autres entités du subgraph, ce qui garantit que leur exécution est déterministe et qu'il n'y a pas de contamination des sources de données basées sur des chaînes.
Pour être plus précis : +Les entités et les gestionnaires de fichiers sources de données sont isolés des autres entités du subgraph, ce qui garantit qu'ils sont déterministes lorsqu'ils sont exécutés et qu'il n'y a pas de contamination des sources de données basées sur la blockchain. Pour être plus précis : - Les entités créées par les sources de données de fichiers sont immuables et ne peuvent pas être mises à jour - Les gestionnaires de sources de données de fichiers ne peuvent pas accéder à des entités provenant d'autres sources de données de fichiers - Les entités associées aux sources de données de fichiers ne sont pas accessibles aux gestionnaires basés sur des chaînes -> Cette contrainte ne devrait pas poser de problème pour la plupart des cas d'utilisation, mais elle peut en compliquer certains. N'hésitez pas à nous contacter via Discord si vous rencontrez des problèmes pour modéliser vos données basées sur des fichiers dans un subgraph ! +> Cette contrainte ne devrait pas poser de problème pour la plupart des cas d'utilisation, mais elle peut en compliquer certains. N'hésitez pas à nous contacter via Discord si vous rencontrez des problèmes pour modéliser vos données dans un Subgraph ! En outre, il n'est pas possible de créer des sources de données à partir d'une source de données de fichier, qu'il s'agisse d'une source de données onchain ou d'une autre source de données de fichier. Cette restriction pourrait être levée à l'avenir. @@ -365,15 +365,15 @@ Les gestionnaires pour les fichiers sources de données ne peuvent pas être dan > **Nécessite** : [SpecVersion](#specversion-releases) >= `1.2.0` -Les filtres de topics, également connus sous le nom de filtres d'arguments indexés, sont une fonctionnalité puissante dans les subgraphs qui permettent aux utilisateurs de filtrer précisément les événements de la blockchain en fonction des valeurs de leurs arguments indexés.
+Les filtres thématiques, également connus sous le nom de filtres d'arguments indexés, sont une fonctionnalité puissante des Subgraphs qui permet aux utilisateurs de filtrer précisément les événements de la blockchain en fonction des valeurs de leurs arguments indexés. -- Ces filtres aident à isoler des événements spécifiques intéressants parmi le vaste flux d'événements sur la blockchain, permettant aux subgraphs de fonctionner plus efficacement en se concentrant uniquement sur les données pertinentes. +- Ces filtres permettent d'isoler des événements spécifiques intéressants du vaste flux d'événements sur la blockchain, ce qui permet aux Subgraphs de fonctionner plus efficacement en se concentrant uniquement sur les données pertinentes. - Ceci est utile pour créer des subgraphs personnels qui suivent des adresses spécifiques et leurs interactions avec divers contrats intelligents sur la blockchain. ### Comment fonctionnent les filtres de Topics -Lorsqu'un contrat intelligent émet un événement, tous les arguments marqués comme indexés peuvent être utilisés comme filtres dans le manifeste d'un subgraph. Ceci permet au subgraph d'écouter de façon sélective les événements qui correspondent à ces arguments indexés. +Lorsqu'un contrat intelligent émet un événement, tous les arguments marqués comme indexés peuvent être utilisés comme filtres dans le manifeste d'un subgraph. Cela permet au subgraph d'écouter sélectivement les événements qui correspondent à ces arguments indexés. - Le premier argument indexé de l'événement correspond à `topic1`, le second à `topic2`, et ainsi de suite, jusqu'à `topic3`, puisque la machine virtuelle Ethereum (EVM) autorise jusqu'à trois arguments indexés par événement. @@ -401,7 +401,7 @@ Dans cet exemple: #### Configuration dans les subgraphs -Les filtres de topics sont définis directement dans la configuration du gestionnaire d'évènement situé dans le manifeste du subgraph.
Voici comment ils sont configurés : +Les filtres de topics sont définis directement dans la configuration du gestionnaire d'événements dans le manifeste du Subgraph. Voici comment ils sont configurés : ```yaml eventHandlers: @@ -452,17 +452,17 @@ Dans cette configuration: - `topic1` est configuré pour filtrer les événements `Transfer` dont l'expéditeur est `0xAddressA`, `0xAddressB`, `0xAddressC`. - `topic2` est configuré pour filtrer les événements `Transfer` où `0xAddressB` et `0xAddressC` sont les destinataires. -Le subgraph indexera les transactions qui se produisent dans les deux sens entre plusieurs adresses, permettant une surveillance complète des interactions impliquant toutes les adresses. +Le subgraph indexe les transactions qui se produisent dans les deux sens entre plusieurs adresses, ce qui permet un suivi complet des interactions impliquant toutes les adresses. ## Déclaration eth_call > Remarque : Il s'agit d'une fonctionnalité expérimentale qui n'est pas encore disponible dans une version stable de Graph Node. Vous ne pouvez l'utiliser que dans Subgraph Studio ou sur votre nœud auto-hébergé. -Les `eth_calls' déclaratifs sont une caractéristique précieuse des subgraphs qui permet aux `eth_calls' d'être exécutés à l'avance, ce qui permet à `graph-node` de les exécuter en parallèle. +Les `eth_calls` déclaratifs sont une fonctionnalité précieuse des Subgraphs qui permet aux `eth_calls` d'être exécutés à l'avance, permettant à `graph-node` de les exécuter en parallèle. Cette fonctionnalité permet de : -- Améliorer de manière significative les performances de la récupération des données de la blockchain Ethereum en réduisant le temps total pour plusieurs appels et en optimisant l'efficacité globale du subgraph. +- Améliorer considérablement les performances de la récupération des données de la blockchain Ethereum en réduisant le temps total des appels multiples et en optimisant l'efficacité globale du subgraph.
- Permet une récupération plus rapide des données, entraînant des réponses de requête plus rapides et une meilleure expérience utilisateur. - Réduire les temps d'attente pour les applications qui doivent réunir des données de plusieurs appels Ethereum, rendant le processus de récupération des données plus efficace. @@ -474,7 +474,7 @@ Cette fonctionnalité permet de : #### Scénario sans `eth_calls` déclaratifs -Imaginez que vous ayez un subgraph qui doit effectuer trois appels Ethereum pour récupérer des données sur les transactions, le solde et les avoirs en jetons d'un utilisateur. +Imaginez que vous ayez un subgraph qui doit faire trois appels Ethereum pour récupérer des données sur les transactions, le solde et les avoirs en jetons d'un utilisateur. Traditionnellement, ces appels pourraient être effectués de manière séquentielle : @@ -498,15 +498,15 @@ Temps total pris = max (3, 2, 4) = 4 secondes #### Comment ça marche -1. Définition déclarative : Dans le manifeste du subgraph, vous déclarez les appels Ethereum d'une manière indiquant qu'ils peuvent être exécutés en parallèle. +1. Définition déclarative : Dans le manifeste du Subgraph, vous déclarez les appels Ethereum d'une manière qui indique qu'ils peuvent être exécutés en parallèle. 2. Moteur d'exécution parallèle : Le moteur d'exécution de Graph Node reconnaît ces déclarations et exécute les appels simultanément. -3. Agrégation des résultats : Une fois que tous les appels sont terminés, les résultats sont réunis et utilisés par le subgraph pour un traitement ultérieur. +3. Agrégation des résultats : Une fois tous les appels terminés, les résultats sont agrégés et utilisés par le Subgraph pour la suite du traitement. #### Exemple de configuration dans le manifeste du subgraph Les `eth_calls` déclarés peuvent accéder à l'adresse `event.address` de l'événement sous-jacent ainsi qu'à tous les paramètres `event.params`.
-`Subgraph.yaml` utilisant `event.address` : +`subgraph.yaml` en utilisant `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Détails pour l'exemple ci-dessus : - Le texte (`Pool[event.address].feeGrowthGlobal0X128()`) est le `eth_call` réel qui sera exécuté, et est sous la forme de `Contract[address].function(arguments)` - L'adresse et les arguments peuvent être remplacés par des variables qui seront disponibles lorsque le gestionnaire sera exécuté. -`Subgraph.yaml` utilisant `event.params` +`subgraph.yaml` en utilisant `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** il n'est pas recommandé d'utiliser le greffage lors de l'upgrade initial vers The Graph Network. Pour en savoir plus [ici](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -Lorsqu'un subgraph est déployé pour la première fois, il commence à indexer les événements au bloc de initial de la blockchain correspondante (ou au `startBlock` défini avec chaque source de données). Dans certaines circonstances, il est avantageux de réutiliser les données d'un subgraph existant et de commencer l'indexation à un bloc beaucoup plus tardif. Ce mode d'indexation est appelé _Grafting_. Le greffage (grafting) est, par exemple, utile pendant le développement pour surmonter rapidement de simples erreurs dans les mappages ou pour faire fonctionner temporairement un subgraph existant après qu'il ait échoué. +Lorsqu'un subgraph est déployé pour la première fois, il commence à indexer les événements au bloc de genèse de la chaîne correspondante (ou au `startBlock` défini avec chaque source de données). Dans certaines circonstances, il est avantageux de réutiliser les données d'un subgraph existant et de commencer l'indexation à un bloc beaucoup plus tardif. Ce mode d'indexation est appelé "greffage". 
Le greffage est, par exemple, utile pendant le développement pour surmonter rapidement de simples erreurs dans les mappages ou pour rétablir temporairement le fonctionnement d'un subgraph existant après qu'il ait échoué. Un subgraph est greffé sur un subgraph de base lorsque le manifeste du subgraph dans `subgraph.yaml` contient un bloc `graft` au niveau supérieur : ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph - block: 7345624 # Block number + base: Qm... # ID du Subgraph de base + block: 7345624 # Numéro de bloc ``` -Lorsqu'un subgraph dont le manifeste contient un bloc `graft` est déployé, Graph Node copiera les données du subgraph `de base` jusqu'au bloc spécifié inclus, puis continuera à indexer le nouveau subgraph à partir de ce bloc. Le subgraph de base doit exister sur l'instance cible de Graph Node et doit avoir indexé au moins jusqu'au bloc spécifié. En raison de cette restriction, le greffage ne doit être utilisé que pendant le développement ou en cas d'urgence pour accélérer la production d'un subgraph équivalent non greffé. +Lorsqu'un subgraph dont le manifeste contient un bloc `graft` est déployé, Graph Node va copier les données du subgraph `base` jusqu'au `block` donné inclus, puis continuer à indexer le nouveau subgraph à partir de ce bloc. Le subgraph de base doit exister sur l'instance du Graph Node cible et doit avoir été indexé au moins jusqu'au bloc donné. En raison de cette restriction, le greffage ne devrait être utilisé qu'en cours de développement ou en cas d'urgence pour accélérer la production d'un subgraph équivalent non greffé. -Étant donné que le greffage copie plutôt que l'indexation des données de base, il est beaucoup plus rapide d'amener le susgraph dans le bloc souhaité que l'indexation à partir de zéro, bien que la copie initiale des données puisse encore prendre plusieurs heures pour de très gros subgraphs.
Pendant l'initialisation du subgraph greffé, Graph Node enregistre des informations sur les types d'entités qui ont déjà été copiés. -Le subgraph greffé peut utiliser un schema GraphQL qui n'est pas identique à celui du subgraph de base, mais simplement compatible avec lui. Il doit s'agir d'un schema de subgraph valide en tant que tel, mais il peut s'écarter du schema du subgraph de base de la manière suivante : +Le Subgraph greffé peut utiliser un schéma GraphQL qui n'est pas identique à celui du Subgraph de base, mais simplement compatible avec lui. Il doit s'agir d'un schéma de Subgraph valide en tant que tel, mais il peut s'écarter du schéma du Subgraph de base de la manière suivante : - Il ajoute ou supprime des types d'entité - Il supprime les attributs des types d'entité @@ -560,4 +560,4 @@ Le subgraph greffé peut utiliser un schema GraphQL qui n'est pas identique à c - Il ajoute ou supprime des interfaces - Cela change pour quels types d'entités une interface est implémentée -> **[Gestion des fonctionnalités](#experimental-features):** `grafting` doit être déclaré sous `features` dans le manifeste du subgraph. +> **[Gestion des fonctionnalités](#experimental-features):** `grafting` doit être déclaré sous `features` dans le manifeste du Subgraph.
diff --git a/website/src/pages/fr/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/fr/subgraphs/developing/creating/assemblyscript-mappings.mdx index 7bb87fa69ab6..7a7febddbebb 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ Les mappages prennent des données d'une source particulière et les transformen Pour chaque gestionnaire d'événements défini dans `subgraph.yaml` sous `mapping.eventHandlers`, créez une fonction exportée du même nom. Chaque gestionnaire doit accepter un seul paramètre appelé `event` avec un type correspondant au nom de l'événement traité. -Dans le subgraph d'exemple, `src/mapping.ts` contient des gestionnaires pour les événements `NewGravatar` et `UpdatedGravatar`: +Dans l'exemple Subgraph, `src/mapping.ts` contient des gestionnaires pour les événements `NewGravatar` et `UpdatedGravatar` : ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ Si aucune valeur n'est définie pour un champ de la nouvelle entité avec le mê ## Génération de code -Afin de faciliter et de sécuriser le travail avec les contrats intelligents, les événements et les entités, la CLI Graph peut générer des types AssemblyScript à partir du schéma GraphQL du subgraph et des ABI de contrat inclus dans les sources de données. +Afin de faciliter le travail avec les contrats intelligents, les événements et les entités, Graph CLI peut générer des types AssemblyScript à partir du schéma GraphQL du Subgraph et des ABI des contrats inclus dans les sources de données. 
Cela se fait avec @@ -80,7 +80,7 @@ Cela se fait avec graph codegen [--output-dir ] [] ``` -mais dans la plupart des cas, les subgraphs sont déjà préconfigurés via `package.json` pour vous permettre d'exécuter simplement l'un des éléments suivants pour obtenir le même résultat : +mais dans la plupart des cas, les Subgraphs sont déjà préconfigurés via `package.json` pour vous permettre d'exécuter simplement l'un des éléments suivants pour obtenir le même résultat : ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -Cela va générer une classe AssemblyScript pour chaque contrat intelligent dans les fichiers ABI mentionnés dans `subgraph.yaml`, vous permettant de lier ces contrats à des adresses spécifiques dans les mappagess et d'appeler des méthodes de contrat en lecture seule sur le bloc en cours de traitement. Il génère également une classe pour chaque événement de contrat afin de fournir un accès facile aux paramètres de l'événement, ainsi qu'au bloc et à la transaction d'où provient l'événement. Tous ces types sont écrits dans `//.ts`. Dans l'exemple du subgraph, ce serait `generated/Gravity/Gravity.ts`, permettant aux mappages d'importer ces types avec. +Cela va générer une classe AssemblyScript pour chaque contrat intelligent dans les fichiers ABI mentionnés dans `subgraph.yaml`, vous permettant de lier ces contrats à des adresses spécifiques dans les mappages et d'appeler des méthodes de contrat en lecture seule sur le bloc en cours de traitement. Il génère également une classe pour chaque événement de contrat afin de fournir un accès facile aux paramètres de l'événement, ainsi qu'au bloc et à la transaction d'où provient l'événement. Tous ces types sont écrits dans `//.ts`. Dans l'exemple Subgraph, ce serait `generated/Gravity/Gravity.ts`, permettant aux mappages d'importer ces types avec.
```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -En outre, une classe est générée pour chaque type d'entité dans le schéma GraphQL du subgraph. Ces classes fournissent un chargement sécurisé des entités, un accès en lecture et en écriture aux champs des entités ainsi qu'une méthode `save()` pour écrire les entités dans le store. Toutes les classes d'entités sont écrites dans le fichier `/schema.ts`, ce qui permet aux mappages de les importer avec la commande +En outre, une classe est générée pour chaque type d'entité dans le schéma GraphQL du Subgraph. Ces classes fournissent un chargement d'entité sécurisé, un accès en lecture et en écriture aux champs de l'entité ainsi qu'une méthode `save()` pour écrire les entités dans le store. Toutes les classes d'entités sont écrites dans le fichier `/schema.ts`, ce qui permet aux mappages de les importer avec la commande ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** La génération de code doit être exécutée à nouveau après chaque modification du schéma GraphQL ou des ABIs incluses dans le manifeste. Elle doit également être effectuée au moins une fois avant de construire ou de déployer le subgraphs. +> **Note:** La génération de code doit être exécutée à nouveau après chaque modification du schéma GraphQL ou des ABIs inclus dans le manifeste. Elle doit également être effectuée au moins une fois avant de construire ou de déployer le Subgraph. -La génération de code ne vérifie pas votre code de mappage dans `src/mapping.ts`. Si vous souhaitez vérifier cela avant d'essayer de déployer votre subgraph sur Graph Explorer, vous pouvez exécuter `yarn build` et corriger les erreurs de syntaxe que le compilateur TypeScript pourrait trouver. +La génération de code ne vérifie pas votre code de mappage dans `src/mapping.ts`. 
Si vous voulez le vérifier avant d'essayer de déployer votre Subgraph dans Graph Explorer, vous pouvez lancer `yarn build` et corriger les erreurs de syntaxe que le compilateur TypeScript pourrait trouver. diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..e1411a2c1465 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,12 +1,18 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes - [#1843](https://github.com/graphprotocol/graph-tooling/pull/1843) [`c09b56b`](https://github.com/graphprotocol/graph-tooling/commit/c09b56b093f23c80aa5d217b2fd56fccac061145) - Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - Update all dependencies + Merci [@YaroShkvorets](https://github.com/YaroShkvorets) ! - Mise à jour de toutes les dépendances ## 0.36.0 @@ -14,16 +20,16 @@ - [#1754](https://github.com/graphprotocol/graph-tooling/pull/1754) [`2050bf6`](https://github.com/graphprotocol/graph-tooling/commit/2050bf6259c19bd86a7446410c7e124dfaddf4cd) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for subgraph datasource and - associated types. + Merci à [@incrypto32](https://github.com/incrypto32) ! - Ajout de la prise en charge de la source de données de Subgraphs et + types associés. 
## 0.35.1 -### Patch Changes +### Changements dans les correctifs - [#1637](https://github.com/graphprotocol/graph-tooling/pull/1637) [`f0c583f`](https://github.com/graphprotocol/graph-tooling/commit/f0c583f00c90e917d87b707b5b7a892ad0da916f) - Thanks [@incrypto32](https://github.com/incrypto32)! - Update return type for ethereum.hasCode + Merci [@incrypto32](https://github.com/incrypto32) ! - Mise à jour du type de retour pour ethereum.hasCode ## 0.35.0 @@ -31,7 +37,7 @@ - [#1609](https://github.com/graphprotocol/graph-tooling/pull/1609) [`e299f6c`](https://github.com/graphprotocol/graph-tooling/commit/e299f6ce5cf1ad74cab993f6df3feb7ca9993254) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for eth.hasCode method + Merci [@incrypto32](https://github.com/incrypto32) ! - Ajout de la prise en charge de la méthode eth.hasCode ## 0.34.0 @@ -39,8 +45,8 @@ - [#1522](https://github.com/graphprotocol/graph-tooling/pull/1522) [`d132f9c`](https://github.com/graphprotocol/graph-tooling/commit/d132f9c9f6ea5283e40a8d913f3abefe5a8ad5f8) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL - `Timestamp` scalar as `i64` (AssemblyScript) + Merci [@dotansimha](https://github.com/dotansimha)! - Ajout de la prise en charge du scalaire GraphQL + `Timestamp` en tant que `i64` (AssemblyScript) ## 0.33.0 @@ -48,7 +54,7 @@ - [#1584](https://github.com/graphprotocol/graph-tooling/pull/1584) [`0075f06`](https://github.com/graphprotocol/graph-tooling/commit/0075f06ddaa6d37606e42e1c12d11d19674d00ad) - Thanks [@incrypto32](https://github.com/incrypto32)! - Added getBalance call to ethereum API + Merci [@incrypto32](https://github.com/incrypto32) !
- Ajout de l'appel getBalance à l'API ethereum ## 0.32.0 @@ -56,7 +62,7 @@ - [#1523](https://github.com/graphprotocol/graph-tooling/pull/1523) [`167696e`](https://github.com/graphprotocol/graph-tooling/commit/167696eb611db0da27a6cf92a7390e72c74672ca) - Thanks [@xJonathanLEI](https://github.com/xJonathanLEI)! - add starknet data types + Merci [@xJonathanLEI](https://github.com/xJonathanLEI) ! - ajout des types de données Starknet ## 0.31.0 @@ -64,12 +70,12 @@ - [#1340](https://github.com/graphprotocol/graph-tooling/pull/1340) [`2375877`](https://github.com/graphprotocol/graph-tooling/commit/23758774b33b5b7c6934f57a3e137870205ca6f0) - Thanks [@incrypto32](https://github.com/incrypto32)! - export `loadRelated` host function + Merci [@incrypto32](https://github.com/incrypto32) ! - export de la fonction hôte `loadRelated` - [#1296](https://github.com/graphprotocol/graph-tooling/pull/1296) [`dab4ca1`](https://github.com/graphprotocol/graph-tooling/commit/dab4ca1f5df7dcd0928bbaa20304f41d23b20ced) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL `Int8` - scalar as `i64` (AssemblyScript) + Merci à [@dotansimha](https://github.com/dotansimha) ! - Ajout de la prise en charge du scalaire GraphQL `Int8` + en tant que `i64` (AssemblyScript) ## 0.30.0 @@ -77,25 +83,25 @@ - [#1299](https://github.com/graphprotocol/graph-tooling/pull/1299) [`3f8b514`](https://github.com/graphprotocol/graph-tooling/commit/3f8b51440db281e69879be7d91d79cd43e45fe86) - Thanks [@saihaj](https://github.com/saihaj)! - introduce new Etherum utility to get a CREATE2 - Address + Merci [@saihaj](https://github.com/saihaj) !
- introduction d'un nouvel utilitaire Ethereum pour obtenir une adresse + CREATE2 - [#1306](https://github.com/graphprotocol/graph-tooling/pull/1306) [`f5e4b58`](https://github.com/graphprotocol/graph-tooling/commit/f5e4b58989edc5f3bb8211f1b912449e77832de8) - Thanks [@saihaj](https://github.com/saihaj)! - expose Host's `get_in_block` function + Merci [@saihaj](https://github.com/saihaj) ! - exposer la fonction `get_in_block` de l'hôte ## 0.29.3 -### Patch Changes +### Changements dans les correctifs - [#1057](https://github.com/graphprotocol/graph-tooling/pull/1057) [`b7a2ec3`](https://github.com/graphprotocol/graph-tooling/commit/b7a2ec3e9e2206142236f892e2314118d410ac93) - Thanks [@saihaj](https://github.com/saihaj)! - fix publihsed contents + Merci [@saihaj](https://github.com/saihaj) ! - Correction des contenus publiés ## 0.29.2 -### Patch Changes +### Changements dans les correctifs - [#1044](https://github.com/graphprotocol/graph-tooling/pull/1044) [`8367f90`](https://github.com/graphprotocol/graph-tooling/commit/8367f90167172181870c1a7fe5b3e84d2c5aeb2c) - Thanks [@saihaj](https://github.com/saihaj)! - publish readme with packages + Merci [@saihaj](https://github.com/saihaj) !
- publier le readme avec les paquets diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..1661eae0df70 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/README.md @@ -1,68 +1,66 @@ -# The Graph TypeScript Library (graph-ts) +# La bibliothèque Graph TypeScript (graph-ts) -[![npm (scoped)](https://img.shields.io/npm/v/@graphprotocol/graph-ts.svg)](https://www.npmjs.com/package/@graphprotocol/graph-ts) -[![Build Status](https://travis-ci.org/graphprotocol/graph-ts.svg?branch=master)](https://travis-ci.org/graphprotocol/graph-ts) +[![npm (scoped)](https://img.shields.io/npm/v/@graphprotocol/graph-ts.svg)](https://www.npmjs.com/package/@graphprotocol/graph-ts) +[![Statut du build](https://travis-ci.org/graphprotocol/graph-ts.svg?branch=master)](https://travis-ci.org/graphprotocol/graph-ts) -TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to +Bibliothèque TypeScript/AssemblyScript pour l'écriture de mappages de Subgraphs à déployer sur [The Graph](https://github.com/graphprotocol/graph-node). ## Usage -For a detailed guide on how to create a subgraph, please see the +Pour un guide détaillé sur la création d'un Subgraph, veuillez consulter la documentation suivante : [Graph CLI docs](https://github.com/graphprotocol/graph-cli). -One step of creating the subgraph is writing mappings that will process blockchain events and will -write entities into the store. These mappings are written in TypeScript/AssemblyScript. +Une étape de la création du Subgraph consiste à écrire des mappages qui traiteront les événements de la blockchain et +écriront des entités dans le store. Ces mappages sont écrits en TypeScript/AssemblyScript.
-The `graph-ts` library provides APIs to access the Graph Node store, blockchain data, smart -contracts, data on IPFS, cryptographic functions and more. To use it, all you have to do is add a -dependency on it: +La bibliothèque `graph-ts` fournit des API pour accéder au store Graph Node, aux données de la blockchain, aux contrats intelligents, aux données sur IPFS, aux fonctions cryptographiques et plus encore. Pour l'utiliser, tout ce que vous avez à faire est d'ajouter +une dépendance sur cette bibliothèque : ```sh npm install --dev @graphprotocol/graph-ts # NPM yarn add --dev @graphprotocol/graph-ts # Yarn ``` -After that, you can import the `store` API and other features from this library in your mappings. A -few examples: +Ensuite, vous pouvez importer l'API `store` et d'autres fonctionnalités de cette bibliothèque dans vos mappages. Quelques exemples : ```typescript import { crypto, store } from '@graphprotocol/graph-ts' -// This is just an example event type generated by `graph-cli` -// from an Ethereum smart contract ABI +// Ceci est juste un exemple de type d'événement généré par `graph-cli` +// à partir de l'ABI d'un contrat intelligent Ethereum import { NameRegistered } from './types/abis/SomeContract' -// This is an example of an entity type generated from a -// subgraph's GraphQL schema +// Voici un exemple de type d'entité généré à partir du schéma GraphQL d'un subgraph.
import { Domain } from './types/schema' function handleNameRegistered(event: NameRegistered) { - // Example use of a crypto function + // Exemple d'utilisation d'une fonction crypto let id = crypto.keccak256(name).toHexString() - // Example use of the generated `Entry` class + // Exemple d'utilisation de la classe `Entry` générée let domain = new Domain() domain.name = name domain.owner = event.params.owner domain.timeRegistered = event.block.timestamp - // Example use of the store API + // Exemple d'utilisation de l'API store store.set('Name', id, entity) } ``` -## Helper Functions for AssemblyScript +## Fonctions d'aide pour AssemblyScript -Refer to the `helper-functions.ts` file in +Référez-vous au fichier `helper-functions.ts` dans [this](https://github.com/graphprotocol/graph-tooling/blob/main/packages/ts/helper-functions.ts) -repository for a few common functions that help build on top of the AssemblyScript library, such as -byte array concatenation, among others. +pour quelques fonctions communes qui aident à construire au-dessus de la bibliothèque AssemblyScript, comme +la concaténation de tableaux d'octets, entre autres. ## API -Documentation on the API can be found -[here](https://thegraph.com/docs/en/developer/assemblyscript-api/). +La documentation sur l'API est disponible +[ici](https://thegraph.com/docs/en/developer/assemblyscript-api/). -For examples of `graph-ts` in use take a look at one of the following subgraphs: +Pour des exemples d'utilisation de `graph-ts`, regardez l'un des Subgraphs suivants : - https://github.com/graphprotocol/ens-subgraph - https://github.com/graphprotocol/decentraland-subgraph @@ -71,15 +69,15 @@ For examples of `graph-ts` in use take a look at one of the following subgraphs: - https://github.com/graphprotocol/aragon-subgraph - https://github.com/graphprotocol/dharma-subgraph -## License +## Licence -Copyright © 2018 Graph Protocol, Inc.
and contributors. +Copyright © 2018 Graph Protocol, Inc. et contributeurs. -The Graph TypeScript library is dual-licensed under the -[MIT license](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) and the +La bibliothèque TypeScript The Graph est soumise à une double licence : la +[licence MIT](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) et la [Apache License, Version 2.0](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-APACHE). -Unless required by applicable law or agreed to in writing, software distributed under the License is -distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -implied. See the License for the specific language governing permissions and limitations under the -License. +Sauf obligation légale ou accord écrit, le logiciel distribué dans le cadre de la Licence est +distribué « EN L'ÉTAT », SANS GARANTIE NI CONDITION DE QUELQUE NATURE QUE CE SOIT, expresses ou +implicites. Voir la Licence pour les termes spécifiques régissant les permissions et les +limitations dans le cadre de la Licence.
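Le README ci-dessus mentionne la concaténation de tableaux d'octets parmi les fonctions d'aide de `helper-functions.ts`. À titre d'illustration uniquement, voici une esquisse en TypeScript pur (sans dépendance à `graph-ts` ; le nom `concatBytes` est hypothétique, l'utilitaire réel travaille sur le type `ByteArray` de la bibliothèque) :

```typescript
// Esquisse hypothétique : concaténer deux tableaux d'octets,
// comme le fait l'utilitaire de helper-functions.ts avec ByteArray.
function concatBytes(a: Uint8Array, b: Uint8Array): Uint8Array {
  const out = new Uint8Array(a.length + b.length)
  out.set(a, 0)        // copie les octets de `a` au début
  out.set(b, a.length) // puis ceux de `b` à la suite
  return out
}

const left = new Uint8Array([0xde, 0xad])
const right = new Uint8Array([0xbe, 0xef])
const joined = concatBytes(left, right) // Uint8Array [0xde, 0xad, 0xbe, 0xef]
```

Dans un mapping réel, on utiliserait le même schéma avec `ByteArray`, par exemple pour composer un identifiant d'entité à partir de plusieurs champs.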
diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/_meta-titles.json index 5c5a85ba9a2e..5cde1b58c3ac 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/_meta-titles.json +++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -1,5 +1,5 @@ { "README": "Présentation", "api": "Référence API", - "common-issues": "Common Issues" + "common-issues": "Problèmes communs" } diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx index a74814844016..90bc58c98943 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: API AssemblyScript --- -> Note : Si vous avez créé un subgraph avant la version `graph-cli`/`graph-ts` `0.22.0`, alors vous utilisez une ancienne version d'AssemblyScript. Il est recommandé de consulter le [`Guide de Migration`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note : Si vous avez créé un subgraph avant la version `graph-cli`/`graph-ts` `0.22.0`, alors vous utilisez une ancienne version d'AssemblyScript. Il est recommandé de consulter le [`Guide de migration`](/resources/migration-guides/assemblyscript-migration-guide/). -Découvrez quelles APIs intégrées peuvent être utilisées lors de l'écriture des mappages de subgraph. Il existe deux types d'APIs disponibles par défaut : +Découvrez les API intégrées qui peuvent être utilisées lors de l'écriture de mappages de subgraphs. 
Deux types d'API sont disponibles nativement : - La [Bibliothèque TypeScript de The Graph](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code généré à partir des fichiers du subgraph par `graph codegen` +- Code généré à partir des fichiers du Subgraph par `graph codegen` Vous pouvez également ajouter d'autres bibliothèques comme dépendances, à condition qu'elles soient compatibles avec [AssemblyScript] (https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ La bibliothèque `@graphprotocol/graph-ts` fournit les API suivantes : ### Versions -La `apiVersion` dans le manifeste du subgraph spécifie la version de l'API de mappage exécutée par Graph Node pour un subgraph donné. +La `apiVersion` dans le manifeste du subgraph spécifie la version de l'API de mappage qui est exécutée par Graph Node pour un subgraph donné. | Version | Notes de version | | :-: | --- | | 0.0.8 | Ajout de la validation pour l'existence des champs dans le schéma lors de l'enregistrement d'une entité. | | 0.0.7 | Ajout des classes `TransactionReceipt` et `Log`aux types Ethereum
Ajout du champ `receipt` à l'objet Ethereum Event | | 0.0.6 | Ajout du champ `nonce` à l'objet Ethereum Transaction
Ajout de `baseFeePerGas` à l'objet Ethereum Block | -| 0.0.5 | AssemblyScript a été mis à niveau à niveau vers la version 0.19.10 (cela inclut des changements brusques, veuillez consulter le [`Guide de migration`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renommé en `ethereum.transaction.gasLimit` | +| 0.0.5 | AssemblyScript mis à jour vers la version 0.19.10 (cela inclut des changements de rupture, veuillez consulter le [`Guide de migration`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renommé en `ethereum.transaction.gasLimit` | | 0.0.4 | Ajout du champ `functionSignature` à l'objet Ethereum SmartContractCall | | 0.0.3 | Ajout du champ `from` à l'objet Ethereum Call
`ethereum.call.address` renommé en `ethereum.call.to` | | 0.0.2 | Ajout du champ `input` à l'objet Ethereum Transaction | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' L'API `store` permet de charger, sauvegarder et supprimer des entités dans et depuis le magasin Graph Node. -Les entités écrites dans le magasin correspondent directement aux types `@entity` définis dans le schéma GraphQL du subgraph. Pour faciliter le travail avec ces entités, la commande `graph codegen` fournie par [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) génère des classes d'entités, qui sont des sous-classes du type `Entity` intégré, avec des accesseurs et des mutateurs pour les champs du schéma ainsi que des méthodes pour charger et sauvegarder ces entités. +Les entités écrites dans le store correspondent aux types `@entity` définis dans le schéma GraphQL du subgraph. Pour faciliter le travail avec ces entités, la commande `graph codegen` fournie par [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) génère des classes d'entités, qui sont des sous-classes du type intégré `Entity`, avec des getters et des setters de propriétés pour les champs du schéma ainsi que des méthodes pour charger et sauvegarder ces entités. #### Création d'entités @@ -282,8 +282,8 @@ Depuis `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 et `@graphprotoco L'API de store facilite la récupération des entités créées ou mises à jour dans le bloc actuel. Une situation typique pour cela est qu'un gestionnaire crée une transaction à partir d'un événement onchain et qu'un gestionnaire ultérieur souhaite accéder à cette transaction si elle existe. -- Dans le cas où la transaction n'existe pas, le subgraph devra interroger la base de données pour découvrir que l'entité n'existe pas. Si l'auteur du subgraph sait déjà que l'entité doit avoir été créée dans le même bloc, utiliser `loadInBlock` évite ce détour par la base de données. 
-- Pour certains subgraphs, ces recherches infructueuses peuvent contribuer de manière significative au temps d'indexation. +- Dans le cas où la transaction n'existe pas, le subgraph devra aller dans la base de données simplement pour découvrir que l'entité n'existe pas. Si l'auteur du subgraph sait déjà que l'entité a dû être créée dans le même bloc, l'utilisation de `loadInBlock` évite cet aller-retour dans la base de données. +- Pour certains subgraphs, ces recherches manquées peuvent contribuer de manière significative au temps d'indexation. ```typescript let id = event.transaction.hash // ou de toute autre manière dont l'ID est construit @@ -380,11 +380,11 @@ L'API Ethereum donne accès aux contrats intelligents, aux variables d'état pub #### Prise en charge des types Ethereum -Comme pour les entités, `graph codegen` génère des classes pour tous les contrats intelligents et événements utilisés dans un subgraph. Pour cela, les ABIs des contrats doivent faire partie de la source de données dans le manifeste du subgraph. En général, les fichiers ABI sont stockés dans un dossier `abis/` . +Comme pour les entités, `graph codegen` génère des classes pour tous les contrats intelligents et les événements utilisés dans un subgraph. Pour cela, les ABI des contrats doivent faire partie de la source de données dans le manifeste du subgraph. Typiquement, les fichiers ABI sont stockés dans un dossier `abis/`. -Avec les classes générées, les conversions entre les types Ethereum et [les types intégrés](#built-in-types) se font en arrière-plan afin que les auteurs de subgraph n'aient pas à s'en soucier. +Avec les classes générées, les conversions entre les types Ethereum et les [types intégrés](#built-in-types) ont lieu en coulisses, de sorte que les auteurs de subgraphs n'ont pas à s'en préoccuper. -L’exemple suivant illustre cela. Étant donné un schéma de subgraph comme +L'exemple suivant l'illustre. 
Étant donné un schéma de Subgraph tel que ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Accès à l'état du contrat intelligent -Le code généré par `graph codegen` inclut également des classes pour les contrats intelligents utilisés dans le subgraph. Celles-ci peuvent être utilisées pour accéder aux variables d'état publiques et appeler des fonctions du contrat au bloc actuel. +Le code généré par `graph codegen` comprend également des classes pour les contrats intelligents utilisés dans le subgraph. Celles-ci peuvent être utilisées pour accéder aux variables d'état publiques et appeler les fonctions du contrat dans le bloc actuel. Un modèle courant consiste à accéder au contrat dont provient un événement. Ceci est réalisé avec le code suivant : @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // renvoie false import { log } from '@graphprotocol/graph-ts' ``` -L'API `log` permet aux subgraphs d'enregistrer des informations sur la sortie standard de Graph Node ainsi que sur Graph Explorer. Les messages peuvent être enregistrés en utilisant différents niveaux de journalisation. Une syntaxe de chaîne de caractère de format de base est fournie pour composer des messages de journal à partir de l'argument. +L'API `log` permet aux subgraphs de consigner des informations sur la sortie standard de Graph Node ainsi que sur Graph Explorer. Les messages peuvent être enregistrés à différents niveaux. Une syntaxe de chaîne de caractères de format de base est fournie pour composer les messages de journal à partir des arguments. L'API `log` inclut les fonctions suivantes : @@ -590,7 +590,7 @@ L'API `log` inclut les fonctions suivantes : - `log.info(fmt: string, args: Array): void` - enregistre un message d'information. - `log.warning(fmt: string, args: Array): void` - enregistre un avertissement. - `log.error(fmt: string, args: Array): void` - enregistre un message d'erreur.
-- `log.critical(fmt: string, args: Array): void` – enregistre un message critique _et_ met fin au subgraph. +- `log.critical(fmt: string, args: Array): void` - enregistre un message critique _et_ met fin au Subgraph. L'API `log` prend une chaîne de caractères de format et un tableau de valeurs de chaîne de caractères. Elle remplace ensuite les espaces réservés par les valeurs de chaîne de caractères du tableau. Le premier espace réservé `{}` est remplacé par la première valeur du tableau, le second `{}` est remplacé par la deuxième valeur, et ainsi de suite. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) Le seul indicateur actuellement pris en charge est `json`, qui doit être passé à `ipfs.map`. Avec l'indicateur `json`, le fichier IPFS doit consister en une série de valeurs JSON, une valeur par ligne. L'appel à `ipfs.map` lira chaque ligne du fichier, la désérialisera en un `JSONValue` et appellera le callback pour chacune d'entre elles. Le callback peut alors utiliser des opérations des entités pour stocker des données à partir du `JSONValue`. Les modifications d'entité ne sont enregistrées que lorsque le gestionnaire qui a appelé `ipfs.map` se termine avec succès ; en attendant, elles sont conservées en mémoire, et la taille du fichier que `ipfs.map` peut traiter est donc limitée. -En cas de succès, `ipfs.map` renvoie `void`. Si une invocation du callback provoque une erreur, le gestionnaire qui a invoqué `ipfs.map` est interrompu et le subgraph marqué comme échoué. +En cas de succès, `ipfs.map` renvoie `void`. Si une invocation du callback provoque une erreur, le gestionnaire qui a invoqué `ipfs.map` est interrompu, et le subgraph est marqué comme ayant échoué.
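Pour visualiser la sémantique de substitution positionnelle des `{}` décrite ci-dessus, voici une esquisse en TypeScript pur (illustration hypothétique, indépendante de l'implémentation réelle de graph-ts) :

```typescript
// Esquisse : substitution positionnelle des espaces réservés `{}`,
// telle que décrite pour l'API `log` (illustration, pas l'implémentation réelle).
function formatLog(fmt: string, args: string[]): string {
  let i = 0;
  // Chaque `{}` rencontré est remplacé par la valeur suivante du tableau, dans l'ordre.
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : "{}"));
}

console.log(formatLog("Transfert de {} vers {}", ["0xabc", "0xdef"]));
```

Dans un mapping réel, on appellerait simplement `log.info('Transfert de {} vers {}', [from, to])` ; l'esquisse ci-dessus ne sert qu'à montrer l'ordre de substitution.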
### Crypto API @@ -836,7 +836,7 @@ La classe de base `Entity` et la classe enfant `DataSourceContext` disposent d'a ### DataSourceContext in Manifest -La section `context` de `dataSources` vous permet de définir des paires clé-valeur qui sont accessibles dans vos mappages de subgraphs. Les types disponibles sont `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, et `BigInt`. +La section `context` de `dataSources` vous permet de définir des paires clé-valeur accessibles dans vos mappages de subgraphs. Les types disponibles sont `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, et `BigInt`. Voici un exemple YAML illustrant l'utilisation de différents types dans la section `context` : @@ -887,4 +887,4 @@ dataSources: - `List` : Spécifie une liste d'éléments. Chaque élément doit spécifier son type et ses données. - `BigInt` : Spécifie une grande valeur entière. Elle doit être mise entre guillemets en raison de sa grande taille. -Ce contexte est ensuite accessible dans vos fichiers de mappage de subgraphs, permettant des subgraphs plus dynamiques et configurables. +Ce contexte est ensuite accessible dans vos fichiers de mappage de Subgraph, ce qui permet de créer des Subgraphs plus dynamiques et configurables. diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/common-issues.mdx index a946b30a71b1..ec5500baac76 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -Il existe certains problèmes courants avec [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) lors du développement de subgraph. Ces problèmes varient en termes de difficulté de débogage, mais les connaître peut être utile. 
Voici une liste non exhaustive de ces problèmes : +Il existe certains problèmes [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) que l'on rencontre fréquemment au cours du développement d'un subgraph. Ils varient en difficulté de débogage, mais les connaître peut être utile. Voici une liste non exhaustive de ces problèmes : - Les variables de classe `Private` ne sont pas appliquées dans [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). Il n'y a aucun moyen de protéger les variables de classe d'une modification directe à partir de l'objet de la classe. - La portée n'est pas héritée dans les [fonctions de fermeture](https://www.assemblyscript.org/status.html#on-closures), c'est-à-dire que les variables déclarées en dehors des fonctions de fermeture ne peuvent pas être utilisées. Explication dans les [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/fr/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/fr/subgraphs/developing/creating/install-the-cli.mdx index 0376a713f058..eaa6d4601d27 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Installation du Graph CLI --- -> Pour utiliser votre subgraph sur le réseau décentralisé de The Graph, vous devrez [créer une clé API](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) dans [Subgraph Studio](https://thegraph.com/studio/apikeys/). Il est recommandé d'ajouter un signal à votre subgraph avec au moins 3 000 GRT pour attirer 2 à 3 Indexeurs. Pour en savoir plus sur la signalisation, consultez [curation](/resources/roles/curating/).
+> Afin d'utiliser votre subgraph sur le réseau décentralisé de The Graph, vous devrez [créer une clé API](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) dans [Subgraph Studio](https://thegraph.com/studio/apikeys/). Il est recommandé d'ajouter un signal à votre subgraph avec au moins 3 000 GRT pour attirer 2 ou 3 Indexeurs. Pour en savoir plus sur la signalisation, consultez [Curation](/resources/roles/curating/). ## Aperçu -[Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) est une interface de ligne de commande qui facilite les commandes des développeurs pour The Graph. Il traite un [manifeste de subgraph](/subgraphs/developing/creating/subgraph-manifest/) et compile les [mappages](/subgraphs/developing/creating/assemblyscript-mappings/) pour créer les fichiers dont vous aurez besoin pour déployer le subgraph sur [Subgraph Studio](https://thegraph.com/studio/) et le réseau. +Le [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) est une interface de ligne de commande qui facilite les commandes des développeurs pour The Graph. Il traite un [manifeste de Subgraph](/subgraphs/developing/creating/subgraph-manifest/) et compile les [mappages](/subgraphs/developing/creating/assemblyscript-mappings/) pour créer les fichiers dont vous aurez besoin pour déployer le subgraph dans [Subgraph Studio](https://thegraph.com/studio/) et sur le réseau. ## Introduction @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -La commande `graph init` peut être utilisée pour configurer un nouveau projet de subgraph, soit à partir d'un contrat existant, soit à partir d'un exemple de subgraph. Si vous avez déjà déployé un contrat intelligent sur votre réseau préféré, vous pouvez démarrer un nouveau subgraph à partir de ce contrat pour commencer.
+La commande `graph init` peut être utilisée pour mettre en place un nouveau projet Subgraph, soit à partir d'un contrat existant, soit à partir d'un exemple de Subgraph. Si vous avez déjà un contrat intelligent déployé sur votre réseau préféré, vous pouvez démarrer un nouveau Subgraph à partir de ce contrat pour commencer. ## Créer un subgraph ### À partir d'un contrat existant -La commande suivante crée un subgraph qui indexe tous les événements d'un contrat existant : +La commande suivante crée un Subgraph qui indexe tous les événements d'un contrat existant : ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - Si certains arguments optionnels manquent, il vous guide à travers un formulaire interactif. -- Le `` est l'ID de votre subgraph dans [Subgraph Studio](https://thegraph.com/studio/). Il se trouve sur la page de détails de votre subgraph. +- Le `` est l'identifiant de votre Subgraph dans [Subgraph Studio](https://thegraph.com/studio/). Il se trouve sur la page de détails de votre Subgraph. ### À partir d'un exemple de subgraph -La commande suivante initialise un nouveau projet à partir d'un exemple de subgraph : +La commande suivante permet d'initialiser un nouveau projet à partir d'un exemple de Subgraph : ```sh graph init --from-example=example-subgraph ``` -- Le [subgraph d'exemple](https://github.com/graphprotocol/example-subgraph) est basé sur le contrat Gravity de Dani Grant, qui gère les avatars des utilisateurs et émet des événements `NewGravatar` ou `UpdateGravatar` chaque fois que des avatars sont créés ou mis à jour. +- Le [Subgraph d'exemple](https://github.com/graphprotocol/example-subgraph) est basé sur le contrat Gravity de Dani Grant, qui gère les avatars des utilisateurs et émet des événements `NewGravatar` ou `UpdateGravatar` à chaque fois que des avatars sont créés ou mis à jour. 
-- Le subgraph gère ces événements en écrivant des entités `Gravatar` dans le store de Graph Node et en veillant à ce qu'elles soient mises à jour en fonction des événements. +- Le Subgraph gère ces événements en écrivant des entités `Gravatar` dans le store de Graph Node et en veillant à ce qu'elles soient mises à jour en fonction des événements. ### Ajouter de nouvelles `sources de données` à un subgraph existant -Les `dataSources` sont des composants clés des subgraphs. Ils définissent les sources de données que le subgraphs indexe et traite. Une `dataSource` spécifie quel smart contract doit être écouté, quels événements doivent être traités et comment les traiter. +Les `dataSources` sont des composants clés des subgraphs. Ils définissent les sources de données que le subgraph indexe et traite. Une `dataSource` spécifie quel contrat intelligent écouter, quels événements traiter et comment les traiter. -Les versions récentes de Graph CLI permettent d'ajouter de nouvelles `dataSources` à un subgraph existant grâce à la commande `graph add` : +Les versions récentes de Graph CLI permettent d'ajouter de nouvelles `dataSources` à un Subgraph existant grâce à la commande `graph add` : ```sh graph add
[] @@ -101,19 +101,5 @@ La commande `graph add` récupère l'ABI depuis Etherscan (à moins qu'un chemin Le(s) fichier(s) ABI doivent correspondre à votre(vos) contrat(s). Il existe plusieurs façons d'obtenir des fichiers ABI : - Si vous construisez votre propre projet, vous aurez probablement accès à vos ABI les plus récents. -- Si vous construisez un subgraph pour un projet public, vous pouvez télécharger ce projet sur votre ordinateur et obtenir l'ABI en utilisant [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) ou en utilisant `solc` pour compiler. -- Vous pouvez également trouver l'ABI sur [Etherscan](https://etherscan.io/), mais ce n'est pas toujours fiable, car l'ABI qui y est téléchargé peut être obsolète. Assurez-vous d'avoir le bon ABI, sinon l'exécution de votre subgraph échouera. - -## Versions disponibles de SpecVersion - -| Version | Notes de version | -| :-: | --- | -| 1.2.0 | Ajout de la prise en charge du [filtrage des arguments indexés](/#indexed-argument-filters--topic-filters) et de la déclaration `eth_call` | -| 1.1.0 | Prend en charge [Timeseries & Aggregations](#timeseries-and-aggregations). Ajout de la prise en charge du type `Int8` pour `id`. | -| 1.0.0 | Prend en charge la fonctionnalité [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) pour élaguer les subgraphs | -| 0.0.9 | Prend en charge la fonctionnalité `endBlock` | -| 0.0.8 | Ajout de la prise en charge des [gestionnaires de blocs](/developing/creating-a-subgraph/#polling-filter) et des [gestionnaires d'initialisation](/developing/creating-a-subgraph/#once-filter) d'interrogation. | -| 0.0.7 | Ajout de la prise en charge des [fichiers sources de données](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Prend en charge la variante de calcul rapide de la [Preuve d'indexation](/indexing/overview/#what-is-a-proof-of-indexing-poi). 
| -| 0.0.5 | Ajout de la prise en charge des gestionnaires d'événement ayant accès aux reçus de transactions. | -| 0.0.4 | Ajout de la prise en charge du management des fonctionnalités de subgraph. | +- Si vous construisez un Subgraph pour un projet public, vous pouvez télécharger ce projet sur votre ordinateur et obtenir l'ABI en utilisant [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) ou en utilisant `solc` pour compiler. +- Vous pouvez également trouver l'ABI sur [Etherscan](https://etherscan.io/), mais ce n'est pas toujours fiable, car l'ABI qui y est téléchargé peut être obsolète. Assurez-vous d'avoir le bon ABI, sinon l'exécution de votre Subgraph échouera. diff --git a/website/src/pages/fr/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/fr/subgraphs/developing/creating/ql-schema.mdx index 0d6ae1beb2bf..5786aa5f8364 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: Schema The Graph QL ## Aperçu -Le schéma de votre subgraph se trouve dans le fichier `schema.graphql`. Les schémas GraphQL sont définis à l'aide du langage de définition d'interface GraphQL. +Le schéma de votre Subgraph se trouve dans le fichier `schema.graphql`. Les schémas GraphQL sont définis à l'aide du langage de définition d'interface GraphQL. > Remarque : si vous n'avez jamais écrit de schéma GraphQL, il est recommandé de consulter ce guide sur le système de types GraphQL. La documentation de référence pour les schémas GraphQL est disponible dans la section [API GraphQL](/subgraphs/querying/graphql-api/). @@ -12,7 +12,7 @@ Le schéma de votre subgraph se trouve dans le fichier `schema.graphql`. Les sch Avant de définir des entités, il est important de prendre du recul et de réfléchir à la manière dont vos données sont structurées et liées.
-- Toutes les requêtes seront effectuées sur le modèle de données défini dans le schéma de subgraph. Par conséquent, la conception du schéma de subgraph doit être informée par les requêtes que votre application devra exécuter. +- Toutes les requêtes seront effectuées à partir du modèle de données défini dans le schéma du Subgraph. Par conséquent, la conception du schéma du Subgraph doit être guidée par les requêtes que votre application devra effectuer. - Il peut être utile d'imaginer les entités comme des "objets contenant des données", plutôt que comme des événements ou des fonctions. - Vous définissez les types d'entités dans `schema.graphql`, et Graph Node générera des champs de premier niveau pour interroger des instances uniques et des collections de ce type d'entité. - Chaque type qui doit être une entité doit être annoté avec une directive `@entity`. @@ -141,7 +141,7 @@ type TokenBalance @entity { Les recherches inversées peuvent être définies sur une entité à travers le champ `@derivedFrom`. Cela crée un champ virtuel sur l'entité qui peut être interrogé mais qui ne peut pas être défini manuellement par l'intermédiaire de l'API des correspondances. Il est plutôt dérivé de la relation définie sur l'autre entité. Pour de telles relations, il est rarement utile de stocker les deux côtés de la relation, et l'indexation et les performances des requêtes seront meilleures si un seul côté est stocké et que l'autre est dérivé. -Pour les relations un-à-plusieurs, la relation doit toujours être stockée du côté « un » et le côté « plusieurs » doit toujours être dérivé. Stocker la relation de cette façon, plutôt que de stocker un tableau d'entités du côté « plusieurs », entraînera des performances considérablement meilleures pour l'indexation et l'interrogation du sous-graphe. En général, le stockage de tableaux d’entités doit être évité autant que possible. 
+Pour les relations "un à plusieurs", la relation doit toujours être stockée du côté "un" et le côté "plusieurs" doit toujours être dérivé. Le stockage de la relation de cette manière, plutôt que le stockage d'un tableau d'entités du côté "plusieurs", se traduira par des performances nettement meilleures pour l'indexation et l'interrogation du subgraph. En général, le stockage de tableaux d'entités doit être évité autant que possible. #### Exemple @@ -160,15 +160,15 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Voici un exemple de la façon d'écrire un mappage pour un Subgraph avec des recherches inversées : ```typescript let token = new Token(event.address) // Create Token -token.save() // tokenBalances is derived automatically +token.save() // tokenBalances est dérivé automatiquement let tokenBalance = new TokenBalance(event.address) tokenBalance.amount = BigInt.fromI32(0) -tokenBalance.token = token.id // Reference stored here +tokenBalance.token = token.id // Référence stockée ici tokenBalance.save() ``` @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Cette manière plus élaborée de stocker des relations plusieurs-à-plusieurs entraînera moins de données stockées pour le subgraph, et donc vers un subgraph qui est souvent considérablement plus rapide à indexer et à interroger. +Cette façon plus élaborée de stocker les relations plusieurs-à-plusieurs permettra de stocker moins de données pour le Subgraph et, par conséquent, d'obtenir un Subgraph dont l'indexation et l'interrogation sont souvent beaucoup plus rapides.
+> **[Gestion des fonctionnalités](#experimental-features):** À partir de `specVersion` `0.0.4`, `fullTextSearch` doit être déclaré dans la section `features` du manifeste du Subgraph. ## Langues prises en charge diff --git a/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx index 4030093310a4..247c5e721c94 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Démarrer votre subgraph ## Aperçu -The Graph contient des milliers de subgraphs déjà disponibles pour des requêtes. Consultez [The Graph Explorer](https://thegraph.com/explorer) et trouvez-en un qui correspond déjà à vos besoins. +The Graph contient des milliers de subgraphs qui peuvent déjà être interrogés. Consultez [The Graph Explorer](https://thegraph.com/explorer) et trouvez-en un qui correspond déjà à vos besoins. -Lorsque vous créez un [subgraph](/subgraphs/developing/subgraphs/), vous créez une API ouverte personnalisée qui extrait des données d'une blockchain, les traite, les stocke et les rend faciles à interroger via GraphQL. +Lorsque vous créez un [Subgraph](/subgraphs/developing/subgraphs/), vous créez une API ouverte personnalisée qui extrait des données d'une blockchain, les traite, les stocke et les rend faciles à interroger via GraphQL. -Le développement de subgraphs peut aller de simples modèles « scaffold » à des subgraphs avancés, spécialement adaptés à vos besoins. +Le développement de subgraphs va de simples subgraphs basiques générés à partir d'un modèle, à des subgraphs avancés spécifiquement adaptés à vos besoins. ### Commencez à développer -Lancez le processus et construisez un subgraph qui correspond à vos besoins : +Commencez le processus et construisez un subgraph qui correspond à vos besoins : 1.
[Installer la CLI](/subgraphs/developing/creating/install-the-cli/) - Configurez votre infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Comprenez le composant clé d'un subgraph +2. [Manifeste du Subgraph](/subgraphs/developing/creating/subgraph-manifest/) - Comprendre le composant clé d'un subgraph 3. [Le schéma GraphQL](/subgraphs/developing/creating/ql-schema/) - Écrivez votre schéma 4. [Écrire les mappings AssemblyScript](/subgraphs/developing/creating/assemblyscript-mappings/) - Rédigez vos mappings -5. [Fonctionnalités avancées](/subgraphs/developing/creating/advanced/) - Personnalisez votre subgraphs avec des fonctionnalités avancées +5. [Fonctionnalités avancées](/subgraphs/developing/creating/advanced/) - Personnalisez votre subgraph avec des fonctionnalités avancées Explorez d'autres [ressources pour les API](/subgraphs/developing/creating/graph-ts/README/) et effectuez des tests en local avec [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Notes de version | +| :-: | --- | +| 1.2.0 | Ajout de la prise en charge du [filtrage des arguments indexés](/#indexed-argument-filters--topic-filters) et de la déclaration `eth_call` | +| 1.1.0 | Prend en charge [Timeseries & Aggregations](#timeseries-and-aggregations). Ajout de la prise en charge du type `Int8` pour `id`. | +| 1.0.0 | Supporte la fonctionnalité [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) pour élaguer les subgraphs | +| 0.0.9 | Prend en charge la fonctionnalité `endBlock` | +| 0.0.8 | Ajout de la prise en charge des [gestionnaires de blocs](/developing/creating-a-subgraph/#polling-filter) par interrogation et des [gestionnaires d'initialisation](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Ajout de la prise en charge des [fichiers sources de données](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Prend en charge la variante de calcul rapide de la [Preuve d'indexation](/indexing/overview/#what-is-a-proof-of-indexing-poi). | +| 0.0.5 | Ajout de la prise en charge des gestionnaires d'événement ayant accès aux reçus de transactions.
| +| 0.0.4 | Ajout de la prise en charge de la gestion des fonctionnalités de subgraph. | diff --git a/website/src/pages/fr/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/fr/subgraphs/developing/creating/subgraph-manifest.mdx index f3b29bd0de75..6fbfab0c5411 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Manifeste de Subgraph ## Aperçu -Le manifeste du subgraph, `subgraph.yaml`, définit les contrats intelligents et le réseau que votre subgraph va indexer, les événements de ces contrats auxquels il faut prêter attention, et comment faire correspondre les données d'événements aux entités que Graph Node stocke et permet d'interroger. +Le manifeste du Subgraph, `subgraph.yaml`, définit les contrats intelligents et le réseau que votre Subgraph va indexer, les événements de ces contrats auxquels il faut prêter attention, et comment faire correspondre les données d'événements aux entités que Graph Node stocke et permet d'interroger. -La **définition du subgraph** se compose des fichiers suivants : +La **définition du Subgraph** se compose des fichiers suivants : -- `subgraph.yaml` : Contient le manifeste du subgraph +- `subgraph.yaml` : Contient le manifeste du Subgraph -- `schema.graphql` : Un schéma GraphQL définissant les données stockées pour votre subgraph et comment les interroger via GraphQL +- `schema.graphql` : Un schéma GraphQL définissant les données stockées pour votre Subgraph et comment les interroger via GraphQL - `mapping.ts` : [Mappage AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code qui traduit les données d'événements en entités définies dans votre schéma (par exemple `mapping.ts` dans ce guide) ### Capacités des subgraphs -Un seul subgraph peut : +Un seul Subgraph peut : - Indexer les données de plusieurs contrats intelligents (mais pas de plusieurs réseaux).
@@ -24,102 +24,102 @@ Un seul subgraph peut : - Ajouter une entrée pour chaque contrat nécessitant une indexation dans le tableau `dataSources`. -La spécification complète des manifestes de subgraphs est disponible [ici](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +La spécification complète des manifestes de Subgraphs est disponible [ici](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -Pour l'exemple de subgraph cité ci-dessus, `subgraph.yaml` est : +Pour l'exemple de Subgraph cité ci-dessus, `subgraph.yaml` est : ```yaml -version spec : 0.0.4 -description : Gravatar pour Ethereum -référentiel : https://github.com/graphprotocol/graph-tooling -schéma: - fichier : ./schema.graphql -indexeurConseils : - tailler : automatique -les sources de données: - - genre : ethereum/contrat - nom: Gravité - réseau : réseau principal - source: - adresse : '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' - abi : Gravité - bloc de démarrage : 6175244 - bloc de fin : 7175245 - contexte: - foo : - tapez : Booléen - données : vrai - bar: - tapez : chaîne - données : 'barre' - cartographie : - genre : ethereum/événements - Version api : 0.0.6 - langage : wasm/assemblyscript - entités : - -Gravatar - abis : - - nom : Gravité - fichier : ./abis/Gravity.json - Gestionnaires d'événements : - - événement : NewGravatar(uint256,adresse,chaîne,chaîne) - gestionnaire : handleNewGravatar - - événement : UpdatedGravatar (uint256, adresse, chaîne, chaîne) - gestionnaire : handleUpdatedGravatar - Gestionnaires d'appels : - - fonction : createGravatar(string,string) - gestionnaire : handleCreateGravatar - gestionnaires de blocs : - - gestionnaire : handleBlock - - gestionnaire : handleBlockWithCall - filtre: - genre : appeler - fichier : ./src/mapping.ts +specVersion: 1.3.0 +description: Gravatar for Ethereum +repository: https://github.com/graphprotocol/graph-tooling +schema: + file: ./schema.graphql +indexerHints: + prune: auto 
+dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + abi: Gravity + startBlock: 6175244 + endBlock: 7175245 + context: + foo: + type: Bool + data: true + bar: + type: String + data: 'bar' + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Gravatar + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + - event: UpdatedGravatar(uint256,address,string,string) + handler: handleUpdatedGravatar + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCall + filter: + kind: call + file: ./src/mapping.ts ``` ## Entrées de subgraphs -> Remarque importante : veillez à remplir le manifeste de votre subgraph avec tous les gestionnaires et [entités](/subgraphs/developing/creating/ql-schema/). +> Remarque importante : veillez à remplir votre manifeste de Subgraph avec tous les gestionnaires et [entités](/subgraphs/developing/creating/ql-schema/). Les entrées importantes à mettre à jour pour le manifeste sont : -- `specVersion` : une version de semver qui identifie la structure du manifeste et les fonctionnalités supportées pour le subgraph. La dernière version est `1.2.0`. Voir la section [versions de specVersion](#specversion-releases) pour plus de détails sur les fonctionnalités et les versions. +- `specVersion` : une version du semver qui identifie la structure du manifeste et les fonctionnalités supportées pour le Subgraph. La dernière version est `1.3.0`. Voir la section [specVersion releases](#specversion-releases) pour plus de détails sur les fonctionnalités et les releases. -- `description` : une description lisible par l'homme de ce qu'est le subgraph. 
Cette description est affichée dans Graph Explorer lorsque le subgraph est déployé dans Subgraph Studio. +- `description` : une description lisible par l'homme de ce qu'est le Subgraph. Cette description est affichée dans Graph Explorer lorsque le Subgraph est déployé dans Subgraph Studio. -- `repository` : l'URL du dépôt où le manifeste du subgraph peut être trouvé. Cette URL est également affichée dans Graph Explorer. +- `repository` : l'URL du dépôt où le manifeste du Subgraph peut être trouvé. Cette URL est également affichée dans Graph Explorer. - `features` : une liste de tous les noms de [fonctionnalités](#experimental-features) utilisés. -- `indexerHints.prune` : Définit la conservation des données de blocs historiques pour un subgraph. Voir [prune](#prune) dans la section [indexerHints](#indexer-hints). +- `indexerHints.prune` : Définit la conservation des données de blocs historiques pour un Subgraph. Voir [élagage](#prune) dans la section [indexerHints](#indexer-hints). -- `dataSources.source` : l'adresse du contrat intelligent dont le subgraph est issu, et l'ABI du contrat intelligent à utiliser. L'adresse est optionnelle ; l'omettre permet d'indexer les événements correspondants de tous les contrats. +- `dataSources.source` : l'adresse du contrat intelligent dont le Subgraph est issu, et l'ABI du contrat intelligent à utiliser. L'adresse est optionnelle ; l'omettre permet d'indexer les événements correspondants de tous les contrats.
Supporte différents types de données comme `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, et `BigInt`. Chaque variable doit spécifier son `type` et ses `données`. Ces variables de contexte sont ensuite accessibles dans les fichiers de mappage, offrant plus d'options configurables pour le développement de subgraphs. +- `dataSources.context` : paires clé-valeur qui peuvent être utilisées dans les mappages de Subgraphs. Supporte différents types de données comme `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, et `BigInt`. Chaque variable doit spécifier son `type` et sa `data`. Ces variables de contexte sont ensuite accessibles dans les fichiers de mappage, offrant plus d'options configurables pour le développement de Subgraphs. - `dataSources.mapping.entities` : les entités que la source de données écrit dans le store. Le schéma de chaque entité est défini dans le fichier schema.graphql. - `dataSources.mapping.abis` : un ou plusieurs fichiers ABI nommés pour le contrat source ainsi que pour tous les autres contrats intelligents avec lesquels vous interagissez à partir des mappages. -- `dataSources.mapping.eventHandlers` : liste les événements du contrat intelligent auxquels ce subgraph réagit et les gestionnaires dans le mappage - ./src/mapping.ts dans l'exemple - qui transforment ces événements en entités dans le store. +- `dataSources.mapping.eventHandlers` : liste les événements du contrat intelligent auxquels ce Subgraph réagit et les gestionnaires dans le mappage - ./src/mapping.ts dans l'exemple - qui transforment ces événements en entités dans le store. -- `dataSources.mapping.callHandlers` : liste les fonctions de contrat intelligent auxquelles ce subgraph réagit et les handlers dans le mappage qui transforment les entrées et sorties des appels de fonction en entités dans le store.
+- `dataSources.mapping.callHandlers` : liste les fonctions du contrat intelligent auxquelles ce Subgraph réagit et les handlers dans le mappage qui transforment les entrées et sorties des appels de fonction en entités dans le store. - `dataSources.mapping.blockHandlers` : liste les blocs auxquels ce subgraph réagit et les gestionnaires du mappage à exécuter lorsqu'un bloc est ajouté à la blockchain. Sans filtre, le gestionnaire de bloc sera exécuté à chaque bloc. Un filtre d'appel optionnel peut être fourni en ajoutant un champ `filter` avec `kind : call` au gestionnaire. Ceci ne lancera le gestionnaire que si le bloc contient au moins un appel au contrat de la source de données. -Un seul subgraph peut indexer des données provenant de plusieurs contrats intelligents. Ajoutez une entrée pour chaque contrat dont les données doivent être indexées dans le tableau `dataSources`. +Un seul Subgraph peut indexer les données de plusieurs contrats intelligents. Ajoutez une entrée pour chaque contrat dont les données doivent être indexées dans le tableau `dataSources`. ## Gestionnaires d'événements -Les gestionnaires d'événements dans un subgraph réagissent à des événements spécifiques émis par des contrats intelligents sur la blockchain et déclenchent des gestionnaires définis dans le manifeste du subgraph. Ceci permet aux subgraphs de traiter et de stocker les données des événements selon une logique définie. +Les gestionnaires d'événements d'un Subgraph réagissent à des événements spécifiques émis par des contrats intelligents sur la blockchain et déclenchent des gestionnaires définis dans le manifeste du Subgraph. Cela permet aux Subgraphs de traiter et de stocker les données d'événements selon une logique définie. ### Définition d'un gestionnaire d'événements -Un gestionnaire d'événements est déclaré dans une source de données dans la configuration YAML du subgraph. 
Il spécifie quels événements écouter et la fonction correspondante à exécuter lorsque ces événements sont détectés. +Un gestionnaire d'événements est déclaré dans une source de données dans la configuration YAML du Subgraph. Il spécifie les événements à écouter et la fonction correspondante à exécuter lorsque ces événements sont détectés. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -144,16 +144,16 @@ dataSources: handler: handleApproval - event: Transfer(address,address,uint256) handler: handleTransfer - topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Filtre de rubrique optionnel qui filtre uniquement les événements avec la rubrique spécifiée. + topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Filtre optionnel sur le topic1, qui ne retient que les événements dont le topic1 correspond à l'une des valeurs spécifiées. ``` ## Gestionnaires d'appels -Si les événements constituent un moyen efficace de collecter les modifications pertinentes de l'état d'un contrat, de nombreux contrats évitent de générer des logs afin d'optimiser les coûts de gaz. Dans ce cas, un subgraph peut s'abonner aux appels faits au contrat de source de données. Pour ce faire, il suffit de définir des gestionnaires d'appels faisant référence à la signature de la fonction et au gestionnaire de mappage qui traitera les appels à cette fonction. Pour traiter ces appels, le gestionnaire de mappage recevra un `ethereum.Call` comme argument avec les entrées et sorties typées de l'appel. Les appels effectués à n'importe quel niveau de la blockchain d'appels d'une transaction déclencheront le mappage, ce qui permettra de capturer l'activité avec le contrat de source de données par le biais de contrats proxy.
+Bien que les événements constituent un moyen efficace de collecter les modifications pertinentes de l'état d'un contrat, de nombreux contrats évitent de générer des logs afin d'optimiser les coûts de gaz. Dans ce cas, un Subgraph peut s'abonner aux appels faits au contrat de source de données. Pour ce faire, il définit des gestionnaires d'appels référençant la signature de la fonction et le gestionnaire de mappage qui traitera les appels à cette fonction. Pour traiter ces appels, le gestionnaire de mappage recevra un `ethereum.Call` comme argument avec les entrées et sorties typées de l'appel. Les appels effectués à n'importe quel niveau de la chaîne d'appel d'une transaction déclencheront le mappage, ce qui permettra de capturer l'activité avec le contrat de source de données par le biais de contrats proxy. Les gestionnaires d'appels ne se déclencheront que dans l'un des deux cas suivants : lorsque la fonction spécifiée est appelée par un compte autre que le contrat lui-même ou lorsqu'elle est marquée comme externe dans Solidity et appelée dans le cadre d'une autre fonction du même contrat. -> **Note:** Les gestionnaires d'appels dépendent actuellement de l'API de traçage de Parité. Certains réseaux, tels que BNB chain et Arbitrum, ne supportent pas cette API. Si un subgraph indexant l'un de ces réseaux contient un ou plusieurs gestionnaires d'appels, il ne commencera pas à se synchroniser. Les développeurs de subgraphs devraient plutôt utiliser des gestionnaires d'événements. Ceux-ci sont bien plus performants que les gestionnaires d'appels et sont pris en charge par tous les réseaux evm. +> **Note:** Les gestionnaires d'appels dépendent actuellement de l'API de traçage de Parity. Certains réseaux, tels que BNB chain et Arbitrum, ne supportent pas cette API. Si un Subgraph indexant l'un de ces réseaux contient un ou plusieurs gestionnaires d'appels, il ne commencera pas à se synchroniser.
Les développeurs de subgraphs devraient plutôt utiliser des gestionnaires d'événements. Ceux-ci sont bien plus performants que les gestionnaires d'appels et sont pris en charge par tous les réseaux evm. ### Définir un gestionnaire d'appels @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ La propriété `function` est la signature de la fonction normalisée pour filtr ### Fonction de cartographie -Chaque gestionnaire d'appel prend un seul paramètre qui a un type correspondant au nom de la fonction appelée. Dans l'exemple du subgraph ci-dessus, le mapping contient un gestionnaire d'appel lorsque la fonction `createGravatar` est appelée et reçoit un paramètre `CreateGravatarCall` en tant qu'argument : +Chaque gestionnaire d'appel prend un seul paramètre qui a un type correspondant au nom de la fonction appelée. Dans l'exemple du Subgraph ci-dessus, le mappage contient un gestionnaire pour l'appel de la fonction `createGravatar` qui reçoit un paramètre `CreateGravatarCall` comme argument : ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ La fonction `handleCreateGravatar` prend un nouveau `CreateGravatarCall` qui est ## Block Handlers -En plus de s'abonner à des événements de contrat ou à des appels de fonction, un subgraph peut souhaiter mettre à jour ses données à mesure que de nouveaux blocs sont ajoutés à la chaîne. Pour y parvenir, un subgraph peut exécuter une fonction après chaque bloc ou après des blocs correspondant à un filtre prédéfini. +Outre l'abonnement à des événements contractuels ou à des appels de fonction, un Subgraph peut vouloir mettre à jour ses données lorsque de nouveaux blocs sont ajoutés à la blockchain. Pour ce faire, un Subgraph peut exécuter une fonction après chaque bloc ou après les blocs qui correspondent à un filtre prédéfini. 
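À titre d'illustration, une source de données peut déclarer plusieurs gestionnaires de bloc, au plus un par type de filtre. L'esquisse ci-dessous combine un gestionnaire d'initialisation (filtre `once`) et un gestionnaire périodique (filtre `polling`) ; les noms de gestionnaires sont hypothétiques :

```yaml
blockHandlers:
  # Exécuté une seule fois, avant tous les autres gestionnaires (initialisation)
  - handler: handleInit
    filter:
      kind: once
  # Exécuté tous les 10 blocs
  - handler: handleEveryTenBlocks
    filter:
      kind: polling
      every: 10
```

Les fonctions `handleInit` et `handleEveryTenBlocks` doivent être exportées depuis le fichier de mappage et recevront un `ethereum.Block` comme seul argument.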
### Filtres pris en charge @@ -218,7 +218,7 @@ filter: _Le gestionnaire défini sera appelé une fois pour chaque bloc qui contient un appel au contrat (source de données) sous lequel le gestionnaire est défini._ -> **Note:** Le filtre `call` dépend actuellement de l'API de traçage de Parité. Certains réseaux, tels que BNB chain et Arbitrum, ne supportent pas cette API. Si un subgraph indexant un de ces réseaux contient un ou plusieurs gestionnaire de bloc avec un filtre `call`, il ne commencera pas à se synchroniser. +> **Note:** Le filtre `call` dépend actuellement de l'API de traçage de Parity. Certains réseaux, tels que BNB chain et Arbitrum, ne supportent pas cette API. Si un Subgraph indexant l'un de ces réseaux contient un ou plusieurs gestionnaires de bloc avec un filtre `call`, il ne commencera pas à se synchroniser. L'absence de filtre pour un gestionnaire de bloc garantira que le gestionnaire est appelé à chaque bloc. Une source de données ne peut contenir qu'un seul gestionnaire de bloc pour chaque type de filtre. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -Le gestionnaire défini sera appelé une fois tous les `n` blocs, où `n` est la valeur fournie dans le champ `every`. Cette configuration permet au subgraph d'effectuer des opérations spécifiques à intervalles réguliers. +Le gestionnaire défini sera appelé une fois tous les `n` blocs, où `n` est la valeur fournie dans le champ `every`. Cette configuration permet au Subgraph d'effectuer des opérations spécifiques à intervalles réguliers. #### Le filtre Once @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Le gestionnaire défini avec le filtre once ne sera appelé qu'une seule fois avant l'exécution de tous les autres gestionnaires.
Cette configuration permet au subgraph d'utiliser le gestionnaire comme gestionnaire d'initialisation, effectuant des tâches spécifiques au début de l'indexation. +Le gestionnaire défini avec le filtre once ne sera appelé qu'une seule fois avant l'exécution de tous les autres gestionnaires. Cette configuration permet au Subgraph d'utiliser le gestionnaire comme gestionnaire d'initialisation, en exécutant des tâches spécifiques au début de l'indexation. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Fonction de cartographie -La fonction de mappage recevra une `ethereum.Block` comme seul argument. Comme les fonctions de mappage pour les événements, cette fonction peut accéder aux entités de subgraphs existantes dans le store, appeler des contrats intelligents et créer ou mettre à jour des entités. +La fonction de mappage recevra une `ethereum.Block` comme seul argument. Comme les fonctions de mappage pour les événements, cette fonction peut accéder aux entités Subgraph existantes dans le store, appeler des contrats intelligents et créer ou mettre à jour des entités. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ Un événement ne sera déclenché que si la signature et le sujet 0 corresponde A partir de `specVersion` `0.0.5` et `apiVersion` `0.0.7`, les gestionnaires d'événements peuvent avoir accès au reçu de la transaction qui les a émis. -Pour ce faire, les gestionnaires d'événements doivent être déclarés dans le manifeste du subgraph avec la nouvelle clé `receipt : true`, qui est facultative et prend par défaut la valeur false. +Pour ce faire, les gestionnaires d'événements doivent être déclarés dans le manifeste du Subgraph avec la nouvelle clé `receipt: true`, qui est facultative et prend par défaut la valeur false.
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -381,7 +381,7 @@ Ensuite, vous ajoutez des _modèles de sources de données_ au manifeste. Ceux-c dataSources: - kind: ethereum/contract name: Factory - # ... other source fields for the main contract ... + # ... d'autres champs sources pour le contrat principal ... templates: - name: Exchange kind: ethereum/contract @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ Il existe des setters et getters comme `setString` et `getString` pour tous les ## Blocs de démarrage -Le `startBlock` est un paramètre optionnel qui vous permet de définir à partir de quel bloc de la chaîne la source de données commencera l'indexation. Définir le bloc de départ permet à la source de données de sauter potentiellement des millions de blocs qui ne sont pas pertinents. En règle générale, un développeur de subgraphs définira `startBlock` au bloc dans lequel le contrat intelligent de la source de données a été créé. +Le `startBlock` est un paramètre optionnel qui vous permet de définir à partir de quel bloc de la chaîne la source de données commencera l'indexation. La définition du bloc de départ permet à la source de données de sauter des millions de blocs potentiellement non pertinents. Typiquement, un développeur de Subgraph définira `startBlock` au bloc dans lequel le contrat intelligent de la source de données a été créé. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Conseils pour l'indexeur -Le paramètre `indexerHints` dans le manifeste d'un subgraph fournit des directives aux Indexeurs sur le traitement et la gestion d'un subgraph. Il influence les décisions opérationnelles concernant le traitement des données, les stratégies d'indexation et les optimisations. Actuellement, il propose l'option `prune` pour gérer la rétention ou suppression des données historiques. +Le paramètre `indexerHints` dans le manifeste d'un Subgraph fournit des directives aux Indexeurs sur le traitement et la gestion d'un Subgraph. Il influence les décisions opérationnelles concernant le traitement des données, les stratégies d'indexation et les optimisations. Actuellement, il comporte l'option `prune` pour gérer la rétention ou l'élagage des données historiques. > Cette fonctionnalité est disponible à partir de `specVersion : 1.0.0` ### Prune -`indexerHints.prune` : Définit la rétention des données de blocs historiques pour un subgraph. Les options sont les suivantes : +`indexerHints.prune` : Définit la conservation des données de blocs historiques pour un Subgraph. Les options comprennent : 1. `"never"`: Aucune suppression des données historiques ; conserve l'ensemble de l'historique. 2. `"auto"`: Conserve l'historique minimum nécessaire tel que défini par l'Indexeur, optimisant ainsi les performances de la requête. @@ -505,19 +505,19 @@ Le paramètre `indexerHints` dans le manifeste d'un subgraph fournit des directi prune: auto ``` -> Le terme "historique" dans ce contexte des subgraphs concerne le stockage des données qui reflètent les anciens états des entités mutables. 
+> Dans le contexte des Subgraphs, le terme "historique" désigne le stockage de données reflétant les anciens états d'entités mutables. L'historique à partir d'un bloc donné est requis pour : -- Les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries), qui permettent d'interroger les états passés de ces entités à des moments précis de l'histoire du subgraph -- Utilisation du subgraph comme [base de greffage](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) dans un autre subgraph, à ce bloc -- Rembobiner le subgraph jusqu'à ce bloc +- Les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries), qui permettent d'interroger les états passés de ces entités à des moments précis de l'histoire du Subgraph +- Utiliser le Subgraph comme [base de greffage](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) dans un autre Subgraph, au niveau de ce bloc +- Remonter le Subgraph jusqu'à ce bloc Si les données historiques à partir du bloc ont été purgées, les capacités ci-dessus ne seront pas disponibles. > L'utilisation de `"auto"` est généralement recommandée car elle maximise les performances des requêtes et est suffisante pour la plupart des utilisateurs qui n'ont pas besoin d'accéder à des données historiques étendues. -Pour les subgraphs exploitant les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries), il est conseillé de définir un nombre spécifique de blocs pour la conservation des données historiques ou d'utiliser `prune: never` pour conserver tous les états d'entité historiques.
Vous trouverez ci-dessous des exemples de configuration des deux options dans les paramètres de votre subgraphs : +Pour les Subgraphs utilisant les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries), il est conseillé de définir un nombre spécifique de blocs pour la conservation des données historiques ou d'utiliser `prune: never` pour conserver tous les états historiques de l'entité. Vous trouverez ci-dessous des exemples de configuration de ces deux options dans les paramètres de votre Subgraph : Pour conserver une quantité spécifique de données historiques : @@ -532,3 +532,18 @@ Préserver l'histoire complète des États de l'entité : indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Notes de version | +| :-: | --- | +| 1.3.0 | Ajout de la prise en charge de la [Composition de Subgraphs](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Ajout de la prise en charge pour le [Filtrage des arguments indexés](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & les `eth_call` déclarés | +| 1.1.0 | Prend en charge les [Séries Chronologiques & Agrégations](/developing/creating/advanced/#timeseries-and-aggregations). Ajout de la prise en charge du type `Int8` pour `id`. | +| 1.0.0 | Supporte la fonctionnalité [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) pour élaguer les Subgraphs | +| 0.0.9 | Prise en charge de la fonctionnalité `endBlock` | +| 0.0.8 | Ajout de la prise en charge de l'interrogation des [Gestionnaires de blocs](/developing/creating/subgraph-manifest/#polling-filter) et des [Gestionnaires d'initialisation](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Ajout de la prise en charge des [fichiers sources de données](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Prise en charge de la variante rapide de calcul de la [Preuve d'indexation](/indexing/overview/#what-is-a-proof-of-indexing-poi). |
+| 0.0.5 | Ajout de la prise en charge de l'accès des gestionnaires d'événements aux reçus de transaction. | +| 0.0.4 | Ajout de la prise en charge de la gestion des fonctionnalités de Subgraph. | diff --git a/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx index 4ba4ab8d4111..61b209325211 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Cadre pour les tests unitaires --- -Apprenez à utiliser Matchstick, un framework de test unitaire développé par [LimeChain](https://limechain.tech/). Matchstick permet aux développeurs de subgraphs de tester leur logique de mappages dans un environnement sandbox et de déployer avec succès leurs subgraphs. +Apprenez à utiliser Matchstick, un cadre de test unitaire développé par [LimeChain](https://limechain.tech/). Matchstick permet aux développeurs de subgraphs de tester leur logique de mappages dans un environnement sandbox et de déployer avec succès leurs subgraphs. ## Avantages de l'utilisation de Matchstick - Il est écrit en Rust et optimisé pour des hautes performances. -- Il vous donne accès à des fonctionnalités pour développeurs, y compris la possibilité de simuler des appels de contrat, de faire des assertions sur l'état du store, de surveiller les échecs de subgraph, de vérifier les performances des tests, et bien plus encore. +- Il vous donne accès à des fonctionnalités pour développeurs, notamment la possibilité de simuler des appels de contrat, de faire des assertions sur l'état du store, de surveiller les échecs du subgraph, de vérifier les performances des tests, et bien d'autres choses encore.
## Introduction @@ -87,7 +87,7 @@ Et enfin, n'utilisez pas `graph test` (qui utilise votre installation globale de ### En utilisant Matchstick -Pour utiliser **Matchstick** dans votre projet de ssubgraph, ouvrez un terminal, naviguez jusqu'au dossier racine de votre projet et exécutez simplement `graph test [options] ` - il télécharge le dernier binaire **Matchstick** et exécute le test spécifié ou tous les tests dans un dossier de test (ou tous les tests existants si aucun flag de source de données n'est spécifié). +Pour utiliser **Matchstick** dans votre projet Subgraph, ouvrez un terminal, naviguez jusqu'au dossier racine de votre projet et lancez simplement `graph test [options] ` - il télécharge le dernier binaire **Matchstick** et exécute le test spécifié ou tous les tests dans un dossier de test (ou tous les tests existants si aucun flag de source de données n'est spécifié). ### CLI options @@ -112,13 +112,13 @@ graph test path/to/file.test.ts **Options:** ```sh --c, --coverage Exécuter les tests en mode couverture --d, --docker Exécuter les tests dans un conteneur docker (Note : Veuillez exécuter à partir du dossier racine du subgraph) --f, --force Binaire : Retélécharge le binaire. Docker : Retélécharge le fichier Docker et reconstruit l'image Docker. --h, --help Affiche les informations d'utilisation --l, --logs Enregistre dans la console des informations sur le système d'exploitation, le modèle de processeur et l'URL de téléchargement (à des fins de débogage). --r, --recompile Force les tests à être recompilés --v, --version Choisissez la version du binaire rust que vous souhaitez télécharger/utiliser +-c, --coverage Exécute les tests en mode couverture +-d, --docker Exécute les tests dans un conteneur Docker (Note : exécutez à partir du dossier racine du Subgraph). +-f, --force Binaire : Retélécharge le binaire. Docker : Retélécharge le fichier Docker et reconstruit l'image Docker.
+-h, --help Affiche les informations sur l'utilisation +-l, --logs Enregistre dans la console des informations sur le système d'exploitation, le modèle de processeur et l'URL de téléchargement (à des fins de débogage). +-r, --recompile Oblige à recompiler les tests +-v, --version Choisissez la version du binaire rust que vous souhaitez télécharger/utiliser ``` ### Docker @@ -145,13 +145,13 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Subgraph démonstration +### Subgraph de démonstration Vous pouvez essayer et jouer avec les exemples de ce guide en clonant le [dépôt du Demo Subgraph.](https://github.com/LimeChain/demo-subgraph) ### Tutoriels vidéos -Vous pouvez également consulter la série de vidéos sur [" Comment utiliser Matchstick pour écrire des tests unitaires pour vos subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Vous pouvez également consulter la série de vidéos sur ["Comment utiliser Matchstick pour écrire des tests unitaires pour vos subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Structure des tests @@ -662,7 +662,7 @@ Cela fait beaucoup à décortiquer ! Tout d'abord, une chose importante à noter Et voilà, nous avons formulé notre premier test ! 👏 -Maintenant, afin d'exécuter nos tests, il suffit d'exécuter ce qui suit dans le dossier racine de votre subgraph : +Maintenant, pour exécuter nos tests, il vous suffit d'exécuter ce qui suit dans le dossier racine de votre Subgraph : `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Les utilisateurs peuvent simuler des fichiers IPFS en utilisant la fonction `mockIpfsFile(hash, filePath)`. La fonction accepte deux arguments, le premier étant le hash/chemin du fichier IPFS et le second le chemin d'un fichier local.
-NOTE : Lorsque l'on teste `ipfs.map/ipfs.mapJSON`, la fonction callback doit être exportée depuis le fichier de test afin que matchstck la détecte, comme la fonction `processGravatar()` dans l'exemple de test ci-dessous : +NOTE : Lorsque l'on teste `ipfs.map/ipfs.mapJSON`, la fonction callback doit être exportée depuis le fichier de test afin que matchstick la détecte, comme la fonction `processGravatar()` dans l'exemple de test ci-dessous : Ficher `.test.ts` : @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Exporter le callback ipfs.map() pour que matchstck le détecte +// Exporter le callback ipfs.map() pour qu'il soit détecté par matchstick export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1164,14 +1164,14 @@ De même que pour les sources de données dynamiques de contrat, les utilisateur ##### Exemple `subgraph.yaml` ```yaml ---- +... templates: - - kind: file/ipfs + - kind: file/ipfs name: GraphTokenLockMetadata network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,11 +1289,11 @@ test('exemple de création d'une dataSource file/ipfs', () => { ## Couverture de test -En utilisant **Matchstick**, les développeurs de subgraphs peuvent exécuter un script qui calculera la couverture des tests unitaires écrits. +En utilisant **Matchstick**, les développeurs de Subgraphs peuvent exécuter un script qui calculera la couverture des tests unitaires écrits. L'outil de couverture des tests prend les binaires de test compilés `wasm` et les convertit en fichiers `wat`, qui peuvent alors être facilement inspectés pour voir si les gestionnaires définis dans `subgraph.yaml` ont été appelés ou non.
Comme la couverture du code (et les tests dans leur ensemble) n'en est qu'à ses débuts en AssemblyScript et WebAssembly, **Matchstick** ne peut pas vérifier la couverture des branches. Au lieu de cela, nous nous appuyons sur l'affirmation que si un gestionnaire donné a été appelé, l'événement/la fonction correspondant(e) a été correctement simulé(e). -### Prerequisites +### Prérequis Pour utiliser la fonctionnalité de couverture des tests fournie dans **Matchstick**, il y a quelques éléments à préparer à l'avance : @@ -1395,7 +1395,7 @@ La non-concordance des arguments est causée par la non-concordance de `graph-ts ## Ressources supplémentaires -Pour toute aide supplémentaire, consultez cette [démo de subgraph utilisant Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +Pour toute aide supplémentaire, consultez ce [dépôt de démo Subgraph utilisant Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Réaction diff --git a/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx index a72771045069..2916c6fa07ad 100644 --- a/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Déploiement d'un subgraph sur plusieurs réseaux +sidebarTitle: Déploiement sur plusieurs réseaux --- Cette page explique comment déployer un subgraph sur plusieurs réseaux. Pour déployer un subgraph, vous devez d'abord installer [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). Si vous n'avez pas encore créé de subgraph, consultez [Créer un subgraph](/developing/creating-a-subgraph/). -## Déploiement du subgraph sur plusieurs réseaux +## Déployer le Subgraph sur plusieurs réseaux -Dans certains cas, vous souhaiterez déployer le même subgraph sur plusieurs réseaux sans dupliquer tout son code.
Le principal défi qui en découle est que les adresses contractuelles sur ces réseaux sont différentes. +Dans certains cas, vous souhaiterez déployer le même Subgraph sur plusieurs réseaux sans dupliquer l'ensemble de son code. La principale difficulté réside dans le fait que les adresses contractuelles de ces réseaux sont différentes. ### En utilisant `graph-cli` @@ -19,7 +20,7 @@ Options: --network-file Chemin du fichier de configuration des réseaux (par défaut : "./networks.json") ``` -Vous pouvez utiliser l'option `--network` pour spécifier une configuration de réseau à partir d'un fichier standard `json` (par défaut networks.json) pour facilement mettre à jour votre subgraph pendant le développement. +Vous pouvez utiliser l'option `--network` pour spécifier une configuration réseau à partir d'un fichier standard `json` (par défaut `networks.json`) pour mettre à jour facilement votre Subgraph pendant le développement. > Note : La commande `init` générera désormais automatiquement un fichier networks.json en se basant sur les informations fournies. Vous pourrez ensuite mettre à jour les réseaux existants ou en ajouter de nouveaux. @@ -53,7 +54,7 @@ Si vous n'avez pas de fichier `networks.json`, vous devrez en créer un manuelle > Note : Vous n'avez besoin de spécifier aucun des `templates` (si vous en avez) dans le fichier de configuration, uniquement les `dataSources`. Si des `templates` sont déclarés dans le fichier `subgraph.yaml`, leur réseau sera automatiquement mis à jour vers celui spécifié avec l'option `--network`. -Supposons maintenant que vous souhaitiez déployer votre subgraph sur les réseaux `mainnet` et `sepolia`, et que ceci est votre fichier subgraph.yaml : +Maintenant, supposons que vous vouliez être capable de déployer votre Subgraph sur les réseaux `mainnet` et `sepolia`, et voici votre `subgraph.yaml` : ```yaml # ... 
@@ -95,7 +96,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file chemin/à/configurer ``` -La commande `build` mettra à jour votre fichier `subgraph.yaml` avec la configuration `sepolia` puis recompilera le subgraph. Votre fichier `subgraph.yaml` devrait maintenant ressembler à ceci: +La commande `build` va mettre à jour votre `subgraph.yaml` avec la configuration `sepolia` et ensuite recompiler le Subgraph. Votre fichier `subgraph.yaml` devrait maintenant ressembler à ceci : ```yaml # ... @@ -126,7 +127,7 @@ yarn deploy --network sepolia --network-file chemin/à/configurer Une façon de paramétrer des aspects tels que les adresses de contrat en utilisant des versions plus anciennes de `graph-cli` est de générer des parties de celui-ci avec un système de creation de modèle comme [Mustache](https://mustache.github.io/) ou [Handlebars](https://handlebarsjs.com/). -Pour illustrer cette approche, supposons qu'un subgraph doive être déployé sur le réseau principal (mainnet) et sur Sepolia en utilisant des adresses de contrat différentes. Vous pourriez alors définir deux fichiers de configuration fournissant les adresses pour chaque réseau : +Pour illustrer cette approche, supposons qu'un Subgraph doive être déployé sur le réseau principal et sur Sepolia en utilisant des adresses contractuelles différentes. 
Vous pourriez alors définir deux fichiers de configuration fournissant les adresses pour chaque réseau : ```json { @@ -178,7 +179,7 @@ Pour générer un manifeste pour l'un ou l'autre réseau, vous pourriez ajouter } ``` -Pour déployer ce subgraph pour mainnet ou Sepolia, vous devez simplement exécuter l'une des deux commandes suivantes : +Pour déployer ce Subgraph sur le Mainnet ou Sepolia, il vous suffit de lancer l'une des deux commandes suivantes : ```sh # Mainnet: @@ -192,25 +193,25 @@ Un exemple fonctionnel de ceci peut être trouvé [ici](https://github.com/graph Note : Cette approche peut également être appliquée à des situations plus complexes, dans lesquelles il est nécessaire de remplacer plus que les adresses des contrats et les noms de réseau ou où il est nécessaire de générer des mappages ou alors des ABI à partir de modèles également. -Cela vous donnera le `chainHeadBlock` que vous pouvez comparer avec le `latestBlock` sur votre subgraph pour vérifier s'il est en retard. `synced` vous informe si le subgraph a déjà rattrapé la chaîne. `health` peut actuellement prendre les valeurs de `healthy` si aucune erreur ne s'est produite, ou `failed` s'il y a eu une erreur qui a stoppé la progression du subgraph. Dans ce cas, vous pouvez vérifier le champ `fatalError` pour les détails sur cette erreur. +Cela vous donnera le `chainHeadBlock` que vous pouvez comparer avec le `latestBlock` de votre Subgraph pour vérifier s'il est en retard. `synced` vous indique si le Subgraph a déjà rattrapé la chaîne. `health` peut actuellement prendre les valeurs `healthy` si aucune erreur ne s'est produite, ou `failed` si une erreur a stoppé la progression du Subgraph. Dans ce cas, vous pouvez vérifier le champ `fatalError` pour obtenir les détails de cette erreur.
-## Politique d'archivage des subgraphs de Subgraph Studio +## Politique d'archivage des Subgraphs de Subgraph Studio -Une version de subgraph dans Studio est archivée si et seulement si elle répond aux critères suivants : +Une version de Subgraph dans Studio est archivée si et seulement si elle répond aux critères suivants : - La version n'est pas publiée sur le réseau (ou en attente de publication) - La version a été créée il y a 45 jours ou plus -- Le subgraph n'a pas été interrogé depuis 30 jours +- Le Subgraph n'a pas été interrogé depuis 30 jours -De plus, lorsqu'une nouvelle version est déployée, si le subgraph n'a pas été publié, la version N-2 du subgraph est archivée. +En outre, lorsqu'une nouvelle version est déployée, si le Subgraph n'a pas été publié, la version N-2 du Subgraph est archivée. -Chaque subgraph concerné par cette politique dispose d'une option de restauration de la version en question. +Chaque Subgraph concerné par cette politique dispose d'une option permettant de rétablir la version en question. -## Vérification de l'état des subgraphs +## Vérification de la santé des Subgraphs -Si un subgraph se synchronise avec succès, c'est un bon signe qu'il continuera à bien fonctionner pour toujours. Cependant, de nouveaux déclencheurs sur le réseau peuvent amener votre subgraph à rencontrer une condition d'erreur non testée ou il peut commencer à prendre du retard en raison de problèmes de performances ou de problèmes avec les opérateurs de nœuds. +Si un Subgraph se synchronise avec succès, c'est bon signe qu'il continuera à fonctionner correctement pour toujours. Toutefois, de nouveaux déclencheurs sur le réseau peuvent entraîner une condition d'erreur non testée dans votre Subgraph ou un retard dû à des problèmes de performance ou à des problèmes avec les opérateurs de nœuds. -Graph Node expose un endpoint GraphQL que vous pouvez interroger pour vérifier l'état de votre subgraph.
Sur le service hébergé, il est disponible à l'adresse `https://api.thegraph.com/index-node/graphql`. Sur un nœud local, il est disponible sur le port `8030/graphql` par défaut. Le schéma complet de cet endpoint peut être trouvé [ici](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Voici un exemple de requête qui vérifie l'état de la version actuelle d'un subgraph: +Graph Node expose un endpoint GraphQL que vous pouvez interroger pour vérifier l'état de votre Subgraph. Sur le service hébergé, il est disponible à `https://api.thegraph.com/index-node/graphql`. Sur un nœud local, il est disponible sur le port `8030/graphql` par défaut. Le schéma complet de ce point d'accès peut être trouvé [ici](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Voici un exemple de requête qui vérifie le statut de la version actuelle d'un Subgraph : ```graphql { @@ -237,4 +238,4 @@ Graph Node expose un endpoint GraphQL que vous pouvez interroger pour vérifier } ``` -Cela vous donnera le `chainHeadBlock` que vous pouvez comparer avec le `latestBlock` sur votre subgraph pour vérifier s'il est en retard. `synced` vous informe si le subgraph a déjà rattrapé la chaîne. `health` peut actuellement prendre les valeurs de `healthy` si aucune erreur ne s'est produite, ou `failed` s'il y a eu une erreur qui a stoppé la progression du subgraph. Dans ce cas, vous pouvez vérifier le champ `fatalError` pour les détails sur cette erreur. +Cela vous donnera le `chainHeadBlock` que vous pouvez comparer avec le `latestBlock` de votre Subgraph pour vérifier s'il est en retard. `synced` vous indique si le Subgraph a déjà rattrapé la chaîne. `health` peut actuellement prendre les valeurs `healthy` si aucune erreur ne s'est produite, ou `failed` si une erreur a stoppé la progression du Subgraph. Dans ce cas, vous pouvez vérifier le champ `fatalError` pour obtenir les détails de cette erreur.
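À titre d'illustration uniquement, voici une esquisse minimale en Python montrant comment exploiter ces champs d'état ; la réponse utilisée est un exemple codé en dur (numéros de blocs hypothétiques) qui reprend la forme renvoyée par l'endpoint :

```python
# Esquisse : interpréter la réponse de l'endpoint d'état d'indexation de Graph Node.
# "status" est un exemple codé en dur (valeurs hypothétiques) reprenant la forme
# de la réponse à la requête indexingStatusForCurrentVersion.
status = {
    "indexingStatusForCurrentVersion": {
        "synced": True,
        "health": "healthy",
        "fatalError": None,
        "chains": [
            {
                "chainHeadBlock": {"number": "21000000"},
                "latestBlock": {"number": "20999950"},
            }
        ],
    }
}

info = status["indexingStatusForCurrentVersion"]
chain = info["chains"][0]
# Retard d'indexation : différence entre la tête de chaîne et le dernier bloc indexé.
lag = int(chain["chainHeadBlock"]["number"]) - int(chain["latestBlock"]["number"])

print(f"synced={info['synced']} health={info['health']} retard={lag} blocs")
if info["health"] == "failed" and info["fatalError"] is not None:
    # En cas d'échec, fatalError contient le message et le bloc concerné.
    print("Erreur fatale :", info["fatalError"]["message"])
```

En pratique, vous obtiendriez cette réponse en envoyant la requête GraphQL ci-dessus à l'endpoint d'état d'indexation, par exemple via une simple requête HTTP POST.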
diff --git a/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx index f4e354e2bb21..4582f8643eb7 100644 --- a/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Déploiement en utilisant Subgraph Studio --- -Apprenez à déployer votre subgraph sur Subgraph Studio. +Apprenez à déployer votre Subgraph dans Subgraph Studio. -> Remarque : lorsque vous déployez un subgraph, vous le transférez vers Subgraph Studio, où vous pourrez le tester. Il est important de se rappeler que le déploiement n'est pas la même chose que la publication. Lorsque vous publiez un subgraph, vous le publiez onchain. +> Note : lorsque vous déployez un Subgraph, vous l'envoyez au Subgraph Studio, où vous pourrez le tester. Il est important de se rappeler que le déploiement n'est pas la même chose que la publication. Lorsque vous publiez un Subgraph, vous le publiez onchain. 
## Présentation de Subgraph Studio Dans [Subgraph Studio](https://thegraph.com/studio/), vous pouvez faire ce qui suit: -- Voir une liste des subgraphs que vous avez créés -- Gérer, voir les détails et visualiser l'état d'un subgraph spécifique -- Créez et gérez vos clés API pour des subgraphs spécifiques +- Afficher la liste des Subgraphs que vous avez créés +- Gérer, afficher les détails et visualiser l'état d'un Subgraph spécifique +- Créez et gérez vos clés API pour des Subgraphs spécifiques - Limitez vos clés API à des domaines spécifiques et autorisez uniquement certains Indexers à les utiliser pour effectuer des requêtes -- Créer votre subgraph -- Déployer votre subgraph en utilisant The Graph CLI -- Tester votre subgraph dans l'environnement de test -- Intégrer votre subgraph en staging en utilisant l'URL de requête du développement -- Publier votre subgraph sur The Graph Network +- Créez votre Subgraph +- Déployez votre Subgraph à l'aide de Graph CLI +- Testez votre Subgraph dans l'environnement de test (playground) +- Intégrez votre Subgraph dans staging à l'aide de l'URL de requête de développement +- Publiez votre Subgraph sur The Graph Network - Gérer votre facturation ## Installer The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Ouvrez [Subgraph Studio](https://thegraph.com/studio/). 2. Connectez votre portefeuille pour vous connecter. - Vous pouvez le faire via MetaMask, Coinbase Wallet, WalletConnect ou Safe. -3. Après vous être connecté, votre clé de déploiement unique sera affichée sur la page des détails de votre subgraph. - - La clé de déploiement vous permet de publier vos subgraphs ou de gérer vos clés d'API et votre facturation. Elle est unique mais peut être régénérée si vous pensez qu'elle a été compromise. +3. Après vous être connecté, votre clé de déploiement unique sera affichée sur la page de détails de votre Subgraph.
+ - La clé de déploiement vous permet de publier vos Subgraphs ou de gérer vos clés API et la facturation. Elle est unique mais peut être régénérée si vous pensez qu'elle a été compromise. -> Important : Vous avez besoin d'une clé API pour interroger les subgraphs +> Important : Vous avez besoin d'une clé API pour interroger les Subgraphs ### Comment créer un subgraph dans Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Compatibilité des subgraphs avec le réseau de The Graph -Pour être pris en charge par les Indexeurs sur The Graph Network, les subgraphs doivent : - -- Indexer un [réseau pris en charge](/supported-networks/) -- Ne doit utiliser aucune des fonctionnalités suivantes : - - ipfs.cat & ipfs.map - - Erreurs non fatales - - La greffe +Pour être pris en charge par les Indexeurs sur The Graph Network, les Subgraphs doivent indexer un [réseau pris en charge](/supported-networks/). Pour une liste complète des fonctionnalités supportées et non supportées, consultez la [Matrice de prise en charge des fonctionnalités](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Initialisez votre Subgraph -Une fois que votre subgraph a été créé dans Subgraph Studio, vous pouvez initialiser son code via la CLI en utilisant cette commande : +Une fois que votre Subgraph a été créé dans Subgraph Studio, vous pouvez initialiser son code via la CLI à l'aide de cette commande : ```bash graph init ``` -Vous pouvez trouver la valeur `` sur la page des détails de votre subgraph dans Subgraph Studio, voir l'image ci-dessous : +Vous pouvez trouver la valeur `` sur la page de détails de votre Subgraph dans Subgraph Studio, voir l'image ci-dessous : ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -Après avoir exécuté la commande `graph init`, ilvous sera demandé de saisir l'adresse du contrat, le réseau, et un ABI que vous souhaitez interroger.
Cela générera un nouveau dossier sur votre machine locale avec quelques codes de base pour commencer à travailler sur votre subgraph. Vous pouvez ensuite finaliser votre subgraph pour vous assurer qu'il fonctionne comme prévu. +Après avoir lancé `graph init`, il vous sera demandé d'entrer l'adresse du contrat, le réseau, et un ABI que vous souhaitez interroger. Cela générera un nouveau dossier sur votre machine locale avec du code de base pour commencer à travailler sur votre Subgraph. Vous pouvez ensuite finaliser votre Subgraph pour vous assurer qu'il fonctionne comme prévu. ## Authentification The Graph -Avant de pouvoir déployer votre subgraph sur Subgraph Studio, vous devez vous connecter à votre compte via la CLI. Pour le faire, vous aurez besoin de votre clé de déploiement, que vous pouvez trouver sur la page des détails de votre subgraph. +Avant de pouvoir déployer votre Subgraph dans le Subgraph Studio, vous devez vous connecter à votre compte dans la CLI. Pour ce faire, vous aurez besoin de votre clé de déploiement, que vous trouverez sur la page des détails de votre Subgraph. Ensuite, utilisez la commande suivante pour vous authentifier depuis la CLI : @@ -91,11 +85,11 @@ graph auth ## Déploiement d'un Subgraph -Une fois prêt, vous pouvez déployer votre subgraph sur Subgraph Studio. +Une fois que vous êtes prêt, vous pouvez déployer votre Subgraph dans Subgraph Studio. -> Déployer un subgraph avec la CLI le pousse vers le Studio, où vous pouvez le tester et mettre à jour les métadonnées. Cette action ne publiera pas votre subgraph sur le réseau décentralisé. +> Le déploiement d'un Subgraph à l'aide de la CLI le transfère dans le Studio, où vous pouvez le tester et mettre à jour les métadonnées. Cette action ne publie pas votre Subgraph sur le réseau décentralisé. 
-Utilisez la commande CLI suivante pour déployer votre subgraph : +Utilisez la commande CLI suivante pour déployer votre Subgraph : ```bash graph deploy @@ -108,30 +102,30 @@ Après avoir exécuté cette commande, la CLI demandera une étiquette de versio ## Tester votre Subgraph -Après le déploiement, vous pouvez tester votre subgraph (soit dans Subgraph Studio, soit dans votre propre application, avec l'URL de requête du déploiement), déployer une autre version, mettre à jour les métadonnées, et publier sur [Graph Explorer](https://thegraph.com/explorer) lorsque vous êtes prêt. +Après le déploiement, vous pouvez tester votre Subgraph (soit dans Subgraph Studio, soit dans votre propre application, avec l'URL de requête de déploiement), déployer une autre version, mettre à jour les métadonnées et publier sur [Graph Explorer](https://thegraph.com/explorer) lorsque vous êtes prêt. -Utilisez Subgraph Studio pour vérifier les journaux (logs) sur le tableau de bord et rechercher les erreurs éventuelles de votre subgraph. +Utilisez Subgraph Studio pour vérifier les journaux du tableau de bord et rechercher les erreurs éventuelles de votre Subgraph. ## Publiez votre subgraph -Afin de publier votre subgraph avec succès, consultez [publier un subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +Pour publier votre Subgraph avec succès, consultez [publier un Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versionning de votre subgraph avec le CLI -Si vous souhaitez mettre à jour votre subgraph, vous pouvez faire ce qui suit : +Si vous souhaitez mettre à jour votre Subgraph, vous pouvez procéder comme suit : - Vous pouvez déployer une nouvelle version dans Studio en utilisant la CLI (cette version sera privée à ce stade). - Une fois que vous en êtes satisfait, vous pouvez publier votre nouveau déploiement sur [Graph Explorer](https://thegraph.com/explorer). 
-- Cette action créera une nouvelle version de votre subgraph sur laquelle les Curateurs pourront commencer à signaler et que les Indexeurs pourront indexer. +- Cette action créera une nouvelle version de votre Subgraph que les Curateurs pourront commencer à signaler et que les Indexeurs pourront indexer. -Vous pouvez également mettre à jour les métadonnées de votre subgraph sans publier de nouvelle version. Vous pouvez mettre à jour les détails de votre subgraph dans Studio (sous la photo de profil, le nom, la description, etc.) en cochant une option appelée **Mettre à jour les détails** dans [Graph Explorer](https://thegraph.com/explorer). Si cette option est cochée, une transaction onchain sera générée qui mettra à jour les détails du subgraph dans Explorer sans avoir à publier une nouvelle version avec un nouveau déploiement. +Vous pouvez également mettre à jour les métadonnées de votre Subgraph sans publier de nouvelle version. Vous pouvez mettre à jour les détails de votre Subgraph dans Studio (sous l'image de profil, le nom, la description, etc.) en cochant une option appelée **Mettre à jour les détails** dans [Graph Explorer](https://thegraph.com/explorer). Si cette option est cochée, une transaction onchain sera générée pour mettre à jour les détails du Subgraph dans Graph Explorer sans avoir à publier une nouvelle version avec un nouveau déploiement. -> Remarque : la publication d'une nouvelle version d'un subgraph sur le réseau entraîne des coûts. +> Remarque : la publication d'une nouvelle version d'un Subgraph sur le réseau entraîne des coûts.
Outre les frais de transaction, vous devez également financer une partie de la taxe de curation sur le signal de migration automatique. Vous ne pouvez pas publier une nouvelle version de votre subgraph si les Curateurs ne l'ont pas signalé. Pour plus d'informations, veuillez lire [ici](/resources/roles/curating/). ## Archivage automatique des versions de subgraphs -Chaque fois que vous déployez une nouvelle version de subgraph dans Subgraph Studio, la version précédente sera archivée. Les versions archivées ne seront pas indexées/synchronisées et ne pourront donc pas être interrogées. Vous pouvez désarchiver une version de votre subgraph dans Subgraph Studio. +Chaque fois que vous déployez une nouvelle version de Subgraph dans Subgraph Studio, la version précédente est archivée. Les versions archivées ne seront pas indexées/synchronisées et ne pourront donc pas être interrogées. Vous pouvez désarchiver une version archivée de votre Subgraph dans Subgraph Studio. -> Remarque : les versions précédentes des subgraphs non publiés mais déployés dans Studio seront automatiquement archivées. +> Remarque : les versions précédentes des Subgraphs non publiés mais déployés dans Studio seront automatiquement archivées. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/fr/subgraphs/developing/developer-faq.mdx b/website/src/pages/fr/subgraphs/developing/developer-faq.mdx index e2bb16ce90af..bb34b94566de 100644 --- a/website/src/pages/fr/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/fr/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ Cette page résume certaines des questions les plus courantes pour les développ ## Relatif aux Subgraphs -### 1. Qu'est-ce qu'un subgraph ? +### 1. Qu'est-ce qu'un Subgraph ? -Un subgraph est une API personnalisée construite sur des données blockchain. Les subgraphs sont interrogés en utilisant le langage de requête GraphQL et sont déployés sur Graph Node en utilisant Graph CLI.
Une fois déployés et publiés sur le réseau décentralisé de The Graph, les Indexeurs traitent les subgraphs et les rendent disponibles pour que les consommateurs de subgraphs puissent les interroger. +Un Subgraph est une API personnalisée construite sur les données de la blockchain. Les Subgraphs sont interrogés à l'aide du langage de requête GraphQL et sont déployés dans un Graph Node à l'aide de Graph CLI. Une fois déployés et publiés sur le réseau décentralisé de The Graph, les Indexeurs traitent les Subgraphs et les mettent à la disposition des consommateurs de Subgraphs pour qu'ils les interrogent. -### 2. Quelle est la première étape pour créer un subgraph ? +### 2. Quelle est la première étape pour créer un Subgraph ? -Pour créer un subgraph avec succès, vous devez installer Graph CLI. Consultez le [Démarrage rapide](/subgraphs/quick-start/) pour commencer. Pour des informations détaillées, consultez [Création d'un subgraph](/developing/creating-a-subgraph/). +Pour créer un Subgraph avec succès, vous devez installer Graph CLI. Consultez le [Démarrage rapide](/subgraphs/quick-start/) pour commencer. Pour des informations plus détaillées, voir [Créer un Subgraph](/developing/creating-a-subgraph/). -### 3. Suis-je toujours en mesure de créer un subgraph si mes smart contracts n'ont pas d'événements ? +### 3. Puis-je créer un Subgraph si mes contrats intelligents n'ont pas d'événements ? -Il est fortement recommandé de structurer vos smart contracts pour avoir des événements associés aux données que vous souhaitez interroger. Les gestionnaires d'événements du subgraph sont déclenchés par des événements de contrat et constituent le moyen le plus rapide de récupérer des données utiles. +Il est fortement recommandé de structurer vos contrats intelligents pour avoir des événements associés aux données que vous souhaitez interroger.
Les gestionnaires d'événements du Subgraph sont déclenchés par les événements du contrat et constituent le moyen le plus rapide de récupérer des données utiles. -Si les contrats avec lesquels vous travaillez ne contiennent pas d'événements, votre subgraph peut utiliser des gestionnaires d'appels et de blocs pour déclencher l'indexation. Cependant, ceci n'est pas recommandé, car les performances seront nettement plus lentes. +Si les contrats avec lesquels vous travaillez ne contiennent pas d'événements, votre subgraph peut utiliser des gestionnaires d'appels et de blocs pour déclencher l'indexation. Cette méthode n'est toutefois pas recommandée, car elle ralentit considérablement les performances. -### 4. Puis-je modifier le compte GitHub associé à mon subgraph ? +### 4. Puis-je changer le compte GitHub associé à mon Subgraph ? -Non. Une fois un subgraph créé, le compte GitHub associé ne peut pas être modifié. Veuillez vous assurer de bien prendre en compte ce détail avant de créer votre subgraph. +Non. Une fois qu'un Subgraph est créé, le compte GitHub associé ne peut pas être modifié. Veillez à bien prendre en compte ce point avant de créer votre Subgraph. -### 5. Comment mettre à jour un subgraph sur le mainnet ? +### 5. Comment mettre à jour un Subgraph sur le réseau principal ? -Vous pouvez déployer une nouvelle version de votre subgraph sur Subgraph Studio en utilisant la CLI. Cette action maintient votre subgraph privé, mais une fois que vous en êtes satisfait, vous pouvez le publier sur Graph Explorer. Cela créera une nouvelle version de votre subgraph sur laquelle les Curateurs pourront commencer à signaler. +Vous pouvez déployer une nouvelle version de votre Subgraph dans Subgraph Studio à l'aide de l'interface de commande. Cette action maintient votre Subgraph privé, mais une fois que vous en êtes satisfait, vous pouvez le publier dans Graph Explorer. 
Cela créera une nouvelle version de votre Subgraph sur laquelle les Curateurs pourront commencer à émettre des signaux. -### 6. Est-il possible de dupliquer un subgraph vers un autre compte ou endpoint sans le redéployer ? +### 6. Est-il possible de dupliquer un Subgraph vers un autre compte ou un autre endpoint sans le redéployer ? -Vous devez redéployer le subgraph, mais si l'ID de subgraph (hachage IPFS) ne change pas, il n'aura pas à se synchroniser depuis le début. +Vous devez redéployer le Subgraph, mais si l'ID du Subgraph (hash IPFS) ne change pas, il ne sera pas nécessaire de le synchroniser depuis le début. -### 7. Comment puis-je appeler une fonction d'un contrat ou accéder à une variable d'état publique depuis mes mappages de subgraph ? +### 7. Comment appeler une fonction du contrat ou accéder à une variable d'état publique à partir de mes mappages de Subgraphs ? Jetez un œil à l’état `Accès au contrat intelligent` dans la section [API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Puis-je importer `ethers.js` ou d'autres bibliothèques JS dans mes mappages de subgraphs ? +### 8. Puis-je importer `ethers.js` ou d'autres bibliothèques JS dans mes mappages de Subgraphs ? Actuellement non, car les mappages sont écrits en AssemblyScript. @@ -45,15 +45,15 @@ Une solution alternative possible serait de stocker des données brutes dans des ### 9. Lorsqu'on écoute plusieurs contrats, est-il possible de sélectionner l'ordre des contrats pour écouter les événements ? -Dans un subgraph, les événements sont toujours traités dans l'ordre dans lequel ils apparaissent dans les blocs, que ce soit sur plusieurs contrats ou non. +Dans un Subgraph, les événements sont toujours traités dans l'ordre dans lequel ils apparaissent dans les blocs, qu'il s'agisse ou non de contrats multiples. ### 10. En quoi les modèles sont-ils différents des sources de données ? 
-Les modèles vous permettent de créer rapidement des sources de données , pendant que votre subgraph est en cours d'indexation. Votre contrat peut générer de nouveaux contrats à mesure que les gens interagissent avec lui. Étant donné que vous connaissez la structure de ces contrats (ABI, événements, etc.) à l'avance, vous pouvez définir comment vous souhaitez les indexer dans un modèle. Lorsqu'ils sont générés, votre subgraph créera une source de données dynamique en fournissant l'adresse du contrat. +Les modèles vous permettent de créer rapidement des sources de données pendant que votre subgraph est indexé. Votre contrat peut engendrer de nouveaux contrats au fur et à mesure que les gens interagissent avec lui. Puisque vous connaissez la forme de ces contrats (ABI, événements, etc.) à l'avance, vous pouvez définir comment vous voulez les indexer dans un modèle. Lorsqu'ils sont créés, votre subgraph crée une source de données dynamique en fournissant l'adresse du contrat. Consultez la section "Instanciation d'un modèle de source de données" sur : [Modèles de sources de données](/developing/creating-a-subgraph/#data-source-templates). -### 11. Est-il possible de configurer un subgraph en utilisant `graph init` à partir de `graph-cli` avec deux contrats ? Ou dois-je ajouter manuellement une autre source de données dans `subgraph.yaml` après avoir lancé `graph init` ? +### 11. Est-il possible de configurer un Subgraph en utilisant `graph init` à partir de `graph-cli` avec deux contrats ? Ou dois-je ajouter manuellement une autre source de données dans `subgraph.yaml` après avoir lancé `graph init` ? Oui. Dans la commande `graph init` elle-même, vous pouvez ajouter plusieurs sources de données en entrant des contrats l'un après l'autre. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:dernier Si une seule entité est créée pendant l'événement et s'il n'y a rien de mieux disponible, alors le hash de la transaction + l'index du journal seront uniques. 
Vous pouvez les obscurcir en les convertissant en Bytes et en les faisant passer par `crypto.keccak256`, mais cela ne les rendra pas plus uniques. -### 15. Puis-je supprimer mon subgraph ? +### 15. Puis-je supprimer mon Subgraph ? -Oui, vous pouvez [supprimer](/subgraphs/developing/managing/deleting-a-subgraph/) et [transférer](/subgraphs/developing/managing/transferring-a-subgraph/) votre subgraph. +Oui, vous pouvez [supprimer](/subgraphs/developing/managing/deleting-a-subgraph/) et [transférer](/subgraphs/developing/managing/transferring-a-subgraph/) votre Subgraph. ## Relatif au Réseau @@ -110,11 +110,11 @@ Oui. Sepolia prend en charge les gestionnaires de blocs, les gestionnaires d'app Oui. `dataSources.source.startBlock` dans le fichier `subgraph.yaml` spécifie le numéro du bloc à partir duquel la source de données commence l'indexation. Dans la plupart des cas, nous suggérons d'utiliser le bloc où le contrat a été créé : [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. Quels sont quelques conseils pour augmenter les performances d'indexation? Mon subgraph prend beaucoup de temps à se synchroniser +### 20. Quelles sont les astuces pour améliorer la performance de l'indexation ? La synchronisation de mon Subgraph prend beaucoup de temps Oui, vous devriez jeter un coup d'œil à la fonctionnalité optionnelle de bloc de démarrage pour commencer l'indexation à partir du bloc où le contrat a été déployé : [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Existe-t-il un moyen d'interroger directement le subgraph pour déterminer le dernier numéro de bloc qu'il a indexé? +### 21. Existe-t-il un moyen d'interroger directement le Subgraph pour connaître le dernier numéro de bloc qu'il a indexé ? Oui ! Essayez la commande suivante, en remplaçant "organization/subgraphName" par l'organisation sous laquelle elle est publiée et le nom de votre subgraphe : @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. 
Si mon application décentralisée (dapp) utilise The Graph pour effectuer des requêtes, dois-je écrire ma clé API directement dans le code du frontend ? Et si nous payons les frais de requête pour les utilisateurs – des utilisateurs malveillants pourraient-ils faire augmenter considérablement nos frais de requête ? -Actuellement, l'approche recommandée pour une dapp est d'ajouter la clé au frontend et de l'exposer aux utilisateurs finaux. Cela dit, vous pouvez limiter cette clé à un nom d'hôte, comme _yourdapp.io_ et subgraph. La passerelle est actuellement gérée par Edge & Node. Une partie de la responsabilité d'une passerelle est de surveiller les comportements abusifs et de bloquer le trafic des clients malveillants. +Actuellement, l'approche recommandée pour une dapp est d'ajouter la clé au frontend et de l'exposer aux utilisateurs finaux. Cela dit, vous pouvez limiter cette clé à un nom d'hôte, comme _yourdapp.io_ et Subgraph. La passerelle est actuellement gérée par Edge & Node. Une partie de la responsabilité d'une passerelle est de surveiller les comportements abusifs et de bloquer le trafic des clients malveillants. ## Divers diff --git a/website/src/pages/fr/subgraphs/developing/introduction.mdx b/website/src/pages/fr/subgraphs/developing/introduction.mdx index 7956855d9d83..5ee0f03573ff 100644 --- a/website/src/pages/fr/subgraphs/developing/introduction.mdx +++ b/website/src/pages/fr/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ En tant que développeur, vous avez besoin de données pour construire et alimen Sur The Graph, vous pouvez : -1. Créer, déployer et publier des subgraphs sur The Graph à l'aide de Graph CLI et de [Subgraph Studio](https://thegraph.com/studio/). -2. Utiliser GraphQL pour interroger des subgraphs existants. +1. Créer, déployer et publier des Subgraphs sur The Graph à l'aide de Graph CLI et de [Subgraph Studio](https://thegraph.com/studio/). +2. Utiliser GraphQL pour interroger les Subgraphs existants. 
### Qu'est-ce que GraphQL ? -- [GraphQL](https://graphql.org/learn/) est un langage de requête pour les API et un moteur d'exécution permettant d'exécuter ces requêtes avec vos données existantes. The Graph utilise GraphQL pour interroger les subgraphs. +- [GraphQL](https://graphql.org/learn/) est un langage de requête pour les API et un moteur d'exécution permettant d'exécuter ces requêtes avec vos données existantes. The Graph utilise GraphQL pour interroger les Subgraphs. ### Actions des Développeurs -- Interrogez les subgraphs construits par d'autres développeurs dans [The Graph Network](https://thegraph.com/explorer) et intégrez-les dans vos propres dapps. -- Créer des subgraphs personnalisés pour répondre à des besoins de données spécifiques, permettant une meilleure évolutivité et flexibilité pour les autres développeurs. -- Déployer, publier et signaler vos subgraphs au sein de The Graph Network. +- Interrogez les Subgraphs construits par d'autres développeurs dans [The Graph Network](https://thegraph.com/explorer) et intégrez-les dans vos propres dapps. +- Créez des Subgraphs personnalisés pour répondre à des besoins de données spécifiques, ce qui permet d'améliorer l'évolutivité et la flexibilité pour d'autres développeurs. +- Déployez, publiez et signalez vos Subgraphs au sein de The Graph Network. -### Que sont les subgraphs ? +### Qu'est-ce qu'un Subgraph ? -Un subgraph est une API personnalisée construite sur des données blockchain. Il extrait des données d'une blockchain, les traite et les stocke afin qu'elles puissent être facilement interrogées via GraphQL. +Un Subgraph est une API personnalisée construite sur les données de la blockchain. Il extrait les données d'une blockchain, les traite et les stocke de manière à ce qu'elles puissent être facilement interrogées via GraphQL. -Consultez la documentation sur les [subgraphs](/subgraphs/developing/subgraphs/) pour en savoir plus.
+Consultez la documentation sur les [Subgraphs](/subgraphs/developing/subgraphs/) pour en savoir plus. diff --git a/website/src/pages/fr/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/managing/deleting-a-subgraph.mdx index c74be2b234dd..480046bd10c8 100644 --- a/website/src/pages/fr/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/fr/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Suppression d'un Subgraph --- -Supprimez votre subgraph en utilisant [Subgraph Studio](https://thegraph.com/studio/). +Supprimez votre Subgraph en utilisant [Subgraph Studio](https://thegraph.com/studio/). -> En supprimant votre subgraph, vous supprimez toutes les versions publiées de The Graph Network, mais il restera visible sur Graph Explorer et Subgraph Studio pour les utilisateurs qui l'ont signalé. +> Si votre Subgraph est éligible aux récompenses, il est recommandé de signaler sur votre propre Subgraph avec au moins 3 000 GRT afin d'attirer des Indexeurs supplémentaires pour indexer votre Subgraph. ## Étape par Étape -1. Visitez la page du subgraph sur [Subgraph Studio](https://thegraph.com/studio/). +1. Visitez la page du Subgraph sur [Subgraph Studio](https://thegraph.com/studio/). 2. Cliquez sur les trois points à droite du bouton "publier". -3. Cliquez sur l'option "delete this subgraph": +3. Cliquez sur l'option "supprimer ce Subgraph" : ![Delete-subgraph](/img/Delete-subgraph.png) -4. En fonction de l'état du subgraph, différentes options vous seront proposées. +4. En fonction de l'état du Subgraph, différentes options vous seront proposées. - - Si le subgraph n'est pas publié, il suffit de cliquer sur “delete“ et de confirmer. - - Si le subgraph est publié, vous devrez le confirmer sur votre portefeuille avant de pouvoir le supprimer de Studio. Si un subgraph est publié sur plusieurs réseaux, tels que testnet et mainnet, des étapes supplémentaires peuvent être nécessaires.
+ - Si le Subgraph n'est pas publié, il suffit de cliquer sur "supprimer" et de confirmer. + - Si le Subgraph est publié, vous devrez le confirmer dans votre portefeuille avant de pouvoir le supprimer de Studio. Si un Subgraph est publié sur plusieurs réseaux, tels que testnet et mainnet, des étapes supplémentaires peuvent être nécessaires. -> Si le propriétaire du subgraph l'a signalé, les GRT signalés seront renvoyés au propriétaire. +> Si le propriétaire du Subgraph l'a signalé, les GRT signalés seront renvoyés au propriétaire. ### Rappels importants -- Une fois que vous avez supprimé un subgraph, il **n'apparaîtra plus** sur la page d'accueil de Graph Explorer. Toutefois, les utilisateurs qui ont signalé sur ce subgraph pourront toujours le voir sur leurs pages de profil et supprimer leur signal. -- Les curateurs ne seront plus en mesure de signaler le subgraph. -- Les Curateurs qui ont déjà signalé sur le subgraph peuvent retirer leur signal à un prix moyen par action. -- Les subgraphs supprimés afficheront un message d'erreur. +- Une fois que vous avez supprimé un Subgraph, il n'apparaîtra **plus** sur la page d'accueil de Graph Explorer. Cependant, les utilisateurs qui ont émis un signal sur ce Subgraph pourront toujours le voir sur leurs pages de profil et supprimer leur signal. +- Les Curateurs ne pourront plus signaler le Subgraph. +- Les Curateurs ayant déjà signalé le Subgraph pourront retirer leur signal à un prix moyen par part. +- Les Subgraphs supprimés afficheront un message d'erreur.
diff --git a/website/src/pages/fr/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/managing/transferring-a-subgraph.mdx index fe386614b198..197bb29de363 100644 --- a/website/src/pages/fr/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/fr/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transfer d'un Subgraph --- -Les subgraphs publiés sur le réseau décentralisé possèdent un NFT minté à l'adresse qui a publié le subgraph. Le NFT est basé sur la norme ERC721, ce qui facilite les transferts entre comptes sur The Graph Network. +Les Subgraphs publiés sur le réseau décentralisé ont un NFT minté à l'adresse qui a publié le Subgraph. Le NFT est basé sur le standard ERC721, qui facilite les transferts entre comptes sur The Graph Network. ## Rappels -- Quiconque possède le NFT contrôle le subgraph. -- Si le propriétaire décide de vendre ou de transférer le NFT, il ne pourra plus éditer ou mettre à jour ce subgraph sur le réseau. -- Vous pouvez facilement déplacer le contrôle d'un subgraph vers un multi-sig. -- Un membre de la communauté peut créer un subgraph au nom d'une DAO. +- Celui qui possède le NFT contrôle le Subgraph. +- Si le propriétaire décide de vendre ou de transférer le NFT, il ne pourra plus modifier ou mettre à jour ce Subgraph sur le réseau. +- Vous pouvez facilement transférer le contrôle d'un Subgraph à un multi-sig. +- Un membre de la communauté peut créer un Subgraph pour le compte d'une DAO.
## Voir votre Subgraph en tant que NFT -Pour voir votre subgraph en tant que NFT, vous pouvez visiter une marketplace NFT telle que **OpenSea**: +Pour visualiser votre Subgraph en tant que NFT, vous pouvez visiter une marketplace NFT comme **OpenSea** : ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/adresse-de-votre-portefeuille ## Étape par Étape -Pour transférer la propriété d'un subgraph, procédez comme suit : +Pour transférer la propriété d'un Subgraph, procédez comme suit : 1. Utilisez l'interface utilisateur intégrée dans Subgraph Studio : ![Transfert de propriété de subgraph](/img/subgraph-ownership-transfer-1.png) -2. Choisissez l'adresse vers laquelle vous souhaitez transférer le subgraph : +2. Choisissez l'adresse à laquelle vous souhaitez transférer le Subgraph : ![Transfert de propriété d'un subgraph](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 19a14a1b0eb2..88b91fcd179c 100644 --- a/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publication d'un subgraph sur le réseau décentralisé +sidebarTitle: Publier sur le réseau décentralisé --- -Une fois que vous avez [déployé votre sous-graphe dans Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) et qu'il est prêt à être mis en production, vous pouvez le publier sur le réseau décentralisé. +Une fois que vous avez [déployé votre Subgraph dans Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) et qu'il est prêt à être mis en production, vous pouvez le publier sur le réseau décentralisé.
-Lorsque vous publiez un subgraph sur le réseau décentralisé, vous le rendez disponible pour : +Lorsque vous publiez un Subgraph sur le réseau décentralisé, vous le rendez disponible pour : - [Curateurs](/resources/roles/curating/) pour commencer la curation. - [Indexeurs](/indexing/overview/) pour commencer à l'indexer. @@ -17,33 +18,33 @@ Consultez la liste des [réseaux pris en charge](/supported-networks/). 1. Accédez au tableau de bord de [Subgraph Studio](https://thegraph.com/studio/) 2. Cliquez sur le bouton **Publish** -3. Votre subgraph est désormais visible dans [Graph Explorer](https://thegraph.com/explorer/). +3. Votre Subgraph sera désormais visible dans [Graph Explorer](https://thegraph.com/explorer/). -Toutes les versions publiées d'un subgraph existant peuvent : +Toutes les versions publiées d'un Subgraph existant peuvent : - Être publié sur Arbitrum One. [En savoir plus sur The Graph Network sur Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Indexer les données sur n'importe lequel des [réseaux pris en charge](/supported-networks/), quel que soit le réseau sur lequel le subgraph a été publié. +- Indexer des données sur n'importe lequel des [réseaux pris en charge](/supported-networks/), quel que soit le réseau sur lequel le Subgraph a été publié. -### Mise à jour des métadonnées d'un subgraph publié +### Mise à jour des métadonnées d'un Subgraph publié -- Après avoir publié votre subgraph sur le réseau décentralisé, vous pouvez mettre à jour les métadonnées à tout moment dans Subgraph Studio. +- Après avoir publié votre Subgraph sur le réseau décentralisé, vous pouvez mettre à jour les métadonnées à tout moment dans Subgraph Studio. - Une fois que vous avez enregistré vos modifications et publié les mises à jour, elles apparaîtront dans Graph Explorer. - Il est important de noter que ce processus ne créera pas une nouvelle version puisque votre déploiement n'a pas changé. 
## Publication à partir de la CLI -Depuis la version 0.73.0, vous pouvez également publier votre subgraph avec [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +Depuis la version 0.73.0, vous pouvez également publier votre Subgraph avec [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Ouvrez le `graph-cli`. 2. Utilisez les commandes suivantes : `graph codegen && graph build` puis `graph publish`. -3. Une fenêtre s'ouvrira, vous permettant de connecter votre portefeuille, d'ajouter des métadonnées et de déployer votre subgraph finalisé sur le réseau de votre choix. +3. Une fenêtre s'ouvrira, vous permettant de connecter votre portefeuille, d'ajouter des métadonnées et de déployer votre Subgraph finalisé sur le réseau de votre choix. ![cli-ui](/img/cli-ui.png) ### Personnalisation de votre déploiement -Vous pouvez uploader votre build de subgraph sur un nœud IPFS spécifique et personnaliser davantage votre déploiement avec les options suivantes : +Vous pouvez téléverser le build de votre Subgraph sur un nœud IPFS spécifique et personnaliser davantage votre déploiement à l'aide des flags suivants : ``` UTILISATION @@ -61,33 +62,33 @@ FLAGS ``` -## Ajout de signal à votre subgraph +## Ajouter un signal à votre Subgraph -Les développeurs peuvent ajouter des signaux GRT à leurs subgraphs pour inciter les Indexeurs à interroger le subgraph. +Les développeurs peuvent ajouter un signal GRT à leurs Subgraphs pour inciter les Indexeurs à interroger le Subgraph. -- Si un subgraph est éligible aux récompenses d'indexation, les Indexeurs qui fournissent une "preuve d'indexation" recevront une récompense en GRT, basée sur la quantité de GRT signalée. +- Si un Subgraph est éligible à des récompenses d'indexation, les Indexeurs qui fournissent une "preuve d'indexation" recevront une récompense en GRT, basée sur la quantité de GRT signalée.
-- Vous pouvez vérifier l'éligibilité de la récompense d'indexation en fonction de l'utilisation des caractéristiques du subgraph [ici](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- Vous pouvez vérifier l'éligibilité de la récompense d'indexation en fonction de l'utilisation des fonctionnalités du Subgraph [ici](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Les réseaux spécifiques pris en charge peuvent être vérifiés [ici](/supported-networks/). -> Ajouter un signal à un subgraph non éligible aux récompenses n'attirera pas d'Indexeurs supplémentaires. +> L'ajout d'un signal à un Subgraph qui n'est pas éligible aux récompenses n'attirera pas d'Indexeurs supplémentaires. > -> Si votre subgraph est éligible aux récompenses, il est recommandé de curer votre propre subgraph avec au moins 3 000 GRT afin d'attirer des indexeurs supplémentaires pour indexer votre subgraph. +> Si votre Subgraph est éligible aux récompenses, il est recommandé de curer votre propre Subgraph avec au moins 3 000 GRT afin d'attirer des Indexeurs supplémentaires pour indexer votre Subgraph. -Le [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) assure l'indexation de tous les subgraphs. Cependant, le fait de signaler un GRT sur un subgraph particulier attirera plus d'Indexeurs vers celui-ci. Cette incitation à la création d'Indexeurs supplémentaires par le biais de la curation vise à améliorer la qualité de service pour les requêtes en réduisant la latence et en améliorant la disponibilité du réseau. +Le [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) assure l'indexation de tous les Subgraphs. Cependant, le fait de signaler des GRT sur un Subgraph particulier attirera plus d'Indexeurs vers celui-ci.
Cette incitation à la création d'Indexeurs supplémentaires par le biais de la curation vise à améliorer la qualité de service pour les requêtes en réduisant la latence et en améliorant la disponibilité du réseau. -Lors du signalement, les Curateurs peuvent décider de signaler une version spécifique du subgraph ou de signaler en utilisant l'auto-migration. S'ils signalent en utilisant l'auto-migration, les parts d'un Curateur seront toujours mises à jour vers la dernière version publiée par le développeur. S'ils décident de signaler une version spécifique, les parts resteront toujours sur cette version spécifique. +Lors du signalement, les Curateurs peuvent décider de signaler une version spécifique du Subgraph ou de signaler en utilisant l'auto-migration. S'ils signalent en utilisant l'auto-migration, les parts d'un Curateur seront toujours mises à jour vers la dernière version publiée par le développeur. S'ils décident plutôt de signaler une version spécifique, les parts resteront toujours sur cette version spécifique. -Les Indexeurs peuvent trouver des subgraphs à indexer en fonction des signaux de curation qu'ils voient dans Graph Explorer. +Les Indexeurs peuvent trouver des Subgraphs à indexer sur la base des signaux de curation qu'ils voient dans Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio vous permet d'ajouter des signaux à votre subgraph en ajoutant des GRT au pool de curation de votre subgraph dans la même transaction où il est publié. +Subgraph Studio vous permet d'ajouter un signal à votre Subgraph en ajoutant des GRT au pool de curation de votre Subgraph lors de la même transaction que celle de sa publication. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternativement, vous pouvez ajouter des signaux GRT à un subgraph publié à partir de Graph Explorer. +Vous pouvez également ajouter un signal GRT à un Subgraph publié à partir de Graph Explorer.
![Signal provenant de l'Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/fr/subgraphs/developing/subgraphs.mdx b/website/src/pages/fr/subgraphs/developing/subgraphs.mdx index d042af3b7930..8addd4e2ebda 100644 --- a/website/src/pages/fr/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/fr/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## Qu'est-ce qu'un subgraph ? -Un subgraph est une API ouverte et personnalisée qui extrait des données d'une blockchain, les traite et les stocke de manière à ce qu'elles puissent être facilement interrogées via GraphQL. +Un Subgraph est une API ouverte et personnalisée qui extrait des données d'une blockchain, les traite et les stocke de manière à ce qu'elles puissent être facilement interrogées via GraphQL. ### Capacités des subgraphs - **Accès aux données:** Les subgraphs permettent d'interroger et d'indexer les données de la blockchain pour le web3. -- \*\*Les développeurs peuvent créer, déployer et publier des subgraphs sur The Graph Network. Pour commencer, consultez le [Démarrage Rapide](quick-start/) du développeur de subgraphs. -- **Indexation et interrogation:** Une fois qu'un subgraph est indexé, tout le monde peut l'interroger. Explorez et interrogez tous les subgraphs publiés sur le réseau dans [Graph Explorer](https://thegraph.com/explorer). +- \*\*Les développeurs peuvent créer, déployer et publier des Subgraphs sur The Graph Network. Pour commencer, consultez le [Démarrage rapide](quick-start/) du développeur de Subgraphs. +- **Indexation et interrogation:** Une fois qu'un Subgraph est indexé, tout le monde peut l'interroger. Explorez et interrogez tous les Subgraphs publiés sur le réseau dans [Graph Explorer](https://thegraph.com/explorer).
## À l'intérieur d'un subgraph -Le manifeste du subgraph, `subgraph.yaml`, définit les contrats intelligents et le réseau que votre subgraph va indexer, les événements de ces contrats auxquels il faut prêter attention, et comment faire correspondre les données d'événements aux entités que Graph Node stocke et permet d'interroger. +Le manifeste du Subgraph, `subgraph.yaml`, définit les contrats intelligents et le réseau que votre Subgraph va indexer, les événements de ces contrats auxquels il faut prêter attention, et comment faire correspondre les données d'événements aux entités que Graph Node stocke et permet d'interroger. -La **définition du subgraph** se compose des fichiers suivants : +La **définition du Subgraph** se compose des fichiers suivants : -- `subgraph.yaml` : Contient le manifeste du subgraph +- `subgraph.yaml` : Contient le manifeste du Subgraph -- `schema.graphql` : Un schéma GraphQL définissant les données stockées pour votre subgraph et comment les interroger via GraphQL +- `schema.graphql` : Un schéma GraphQL définissant les données stockées pour votre Subgraph et comment les interroger via GraphQL - `mapping.ts` : [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code qui traduit les données d'événements en entités définies dans votre schéma -Pour en savoir plus sur chaque composant d'un subgraph, consultez [créer un subgraph](/developing/creating-a-subgraph/). +Pour en savoir plus sur chaque composant du Subgraph, consultez [créer un Subgraph](/developing/creating-a-subgraph/). ## Flux du cycle de vie des subgraphes -Voici un aperçu général du cycle de vie d'un subgraph : +Voici un aperçu général du cycle de vie d'un Subgraph : ![Cycle de vie d'un Subgraph](/img/subgraph-lifecycle.png) ## Développement de subgraphs -1. [Créer un subgraph](/developing/creating-a-subgraph/) -2. [Déployer un subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Tester un subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4.
[Publier un subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signaler sur un subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Créer un Subgraph](/developing/creating-a-subgraph/) +2. [Déployer un Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Tester un Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publier un Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signaler sur un Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Développement en local -Les meilleurs subgraphs commencent par un environnement de développement local et des tests unitaires. Les développeurs utilisent [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), un outil d'interface de ligne de commande pour construire et déployer des subgraphs sur The Graph. Ils peuvent également utiliser [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) et [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) pour créer des subgraphs robustes. +Les meilleurs Subgraphs commencent par un environnement de développement local et des tests unitaires. Les développeurs utilisent [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), un outil d'interface de ligne de commande pour construire et déployer des Subgraphs sur The Graph. Ils peuvent également utiliser [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) et [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) pour créer des Subgraphs robustes. ### Déployer sur Subgraph Studio -Une fois défini, un subgraph peut être [déployé dans Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/).
Dans Subgraph Studio, vous pouvez effectuer les opérations suivantes : +Une fois défini, un Subgraph peut être [déployé dans Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). Dans Subgraph Studio, vous pouvez effectuer les opérations suivantes : -- Utiliser son environnement de test pour indexer le subgraph déployé et le mettre à disposition pour évaluation. -- Vérifiez que votre subgraph ne présente aucune erreur d'indexation et qu'il fonctionne comme prévu. +- Utiliser l'environnement d'essai pour indexer le Subgraph déployé et le mettre à disposition pour examen. +- Vérifiez que votre Subgraph ne présente aucune erreur d'indexation et qu'il fonctionne comme prévu. ### Publier sur le réseau -Lorsque vous êtes satisfait de votre subgraph, vous pouvez le [publier](/subgraphs/developing/publishing/publishing-a-subgraph/) sur The Graph Network. +Lorsque vous êtes satisfait de votre Subgraph, vous pouvez le [publier](/subgraphs/developing/publishing/publishing-a-subgraph/) sur The Graph Network. -- Il s'agit d'une action onchain, qui enregistre le subgraph et le rend accessible aux Indexeurs. -- Les subgraphs publiés ont un NFT correspondant, qui définit la propriété du subgraph. Vous pouvez [transférer la propriété du subgraph](/subgraphes/developing/managing/transferring-a-subgraph/) en envoyant le NFT. -- Les subgraphs publiés sont associés à des métadonnées qui fournissent aux autres participants du réseau un contexte et des informations utiles. +- Il s'agit d'une action onchain, qui enregistre le Subgraph et le rend accessible aux Indexeurs. +- Les Subgraphs publiés ont un NFT correspondant, qui définit la propriété du Subgraph. Vous pouvez [transférer la propriété du Subgraph](/subgraphs/developing/managing/transferring-a-subgraph/) en envoyant le NFT. +- Les Subgraphs publiés sont associés à des métadonnées qui fournissent aux autres participants du réseau un contexte et des informations utiles.
### Ajouter un signal de curation pour l'indexation -Les subgraphs publiés ont peu de chances d'être repérés par les Indexeurs s'ils ne sont pas accompagnés d'un signal de curation. Pour encourager l'indexation, vous devez ajouter un signal à votre subgraph. Consultez la signalisation et la [curation](/resources/roles/curating/) sur The Graph. +Les Subgraphs publiés ont peu de chances d'être repérés par les Indexeurs s'ils ne sont pas accompagnés d'un signal de curation. Pour encourager l'indexation, vous devez ajouter un signal à votre Subgraph. Pour en savoir plus, consultez la signalisation et la [curation](/resources/roles/curating/) sur The Graph. #### Qu'est-ce qu'un signal ? -- Le signal correspond aux GRT verrouillés associé à un subgraph donné. Il indique aux Indexeurs qu'un subgraph donné recevra un volume de requêtes et contribue aux récompenses d'indexation disponibles pour le traiter. +- Le signal est constitué de GRT verrouillés associés à un Subgraph donné. Il indique aux Indexeurs qu'un Subgraph donné recevra un volume de requêtes et contribue aux récompenses d'indexation disponibles pour le traiter. +- Les Curateurs tiers peuvent également signaler un Subgraph donné s'ils estiment que ce Subgraph est susceptible de générer un volume de requêtes. ### Intérrogation & Développement d'applications Les subgraphs sur The Graph Network reçoivent 100 000 requêtes gratuites par mois, après quoi les développeurs peuvent soit [payer les requêtes avec GRT ou une carte de crédit](/subgraphs/billing/). -En savoir plus sur [l'interrogation des subgraphs](/subgraphs/querying/introduction/). +En savoir plus sur [l'interrogation des Subgraphs](/subgraphs/querying/introduction/).
### Mise à jour des subgraphs -Pour mettre à jour votre subgraph avec des corrections de bug ou de nouvelles fonctionnalités, lancez une transaction pour le faire pointer vers la nouvelle version. Vous pouvez déployer les nouvelles versions de vos subgraphs dans le [Subgraph Studio](https://thegraph.com/studio/) à des fins de développement et de test. +Pour mettre à jour votre Subgraph avec des corrections de bogues ou de nouvelles fonctionnalités, lancez une transaction pour le faire pointer vers la nouvelle version. Vous pouvez déployer les nouvelles versions de vos Subgraphs dans le [Subgraph Studio](https://thegraph.com/studio/) à des fins de développement et de test. -- Si vous avez sélectionné "migration automatique" lorsque vous avez appliqué le signal, la mise à jour du subgraph migrera tout signal vers la nouvelle version et entraînera une taxe de migration. -- Ce signal de migration devrait inciter les Indexeurs à commencer à indexer la nouvelle version du subgraph, qui devrait donc bientôt pouvoir être consultée. +- Si vous avez sélectionné "migration automatique" lorsque vous avez appliqué le signal, la mise à jour du Subgraph migrera tout signal vers la nouvelle version et entraînera une taxe de migration. +- Ce signal de migration devrait inciter les Indexeurs à commencer à indexer la nouvelle version du Subgraph, qui devrait donc bientôt pouvoir être consultée. ### Suppression et Transfert de Subgraphs -Si vous n'avez plus besoin d'un subgraph publié, vous pouvez le [supprimer](/subgraphs/developing/managing/deleting-a-subgraph/) ou le [transférer](/subgraphs/developing/managing/transferring-a-subgraph/). La suppression d'un subgraph renvoie tout les GRT signalés aux [Curateurs](/resources/roles/curating/). +Si vous n'avez plus besoin d'un Subgraph publié, vous pouvez le [supprimer](/subgraphs/developing/managing/deleting-a-subgraph/) ou le [transférer](/subgraphs/developing/managing/transferring-a-subgraph/).
La suppression d'un Subgraph renvoie tous les GRT signalés aux [Curateurs](/resources/roles/curating/). diff --git a/website/src/pages/fr/subgraphs/explorer.mdx b/website/src/pages/fr/subgraphs/explorer.mdx index 324c6b5602b3..7a7cf7e972db 100644 --- a/website/src/pages/fr/subgraphs/explorer.mdx +++ b/website/src/pages/fr/subgraphs/explorer.mdx @@ -2,70 +2,70 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Découvrez le monde des subgraphs et des données de réseau avec [Graph Explorer](https://thegraph.com/explorer). ## Aperçu -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer se compose de plusieurs parties où vous pouvez interagir avec les [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [déléguer](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engager les [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), voir les [informations sur le réseau](https://thegraph.com/explorer/network?chain=arbitrum-one) et accéder à votre profil d'utilisateur. -## Inside Explorer +## À l'intérieur de l'Explorer -The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide). +Vous trouverez ci-dessous une liste de toutes les fonctionnalités clés de Graph Explorer. Pour obtenir une assistance supplémentaire, vous pouvez regarder le [guide vidéo de Graph Explorer](/subgraphs/explorer/#video-guide).
-### Subgraphs Page +### Page des subgraphs -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +Après avoir déployé et publié votre subgraph dans Subgraph Studio, allez sur [Graph Explorer](https://thegraph.com/explorer) et cliquez sur le lien "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" dans la barre de navigation pour accéder à ce qui suit : -- Vos propres subgraphs terminés +- Vos propres subgraphs terminés - Les subgraphs publiés par d'autres -- Le subgraph exact que vous voulez (basé sur la date de création, le montant du signal ou le nom). +- Le Subgraph exact que vous souhaitez (sur la base de la date de création, de la quantité de signal ou du nom). -![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Image 1 de l'Explorer](/img/Subgraphs-Explorer-Landing.png) -Lorsque vous cliquez sur un subgraph, vous pourrez faire ce qui suit : +Lorsque vous cliquez sur un subgraph, vous pouvez effectuer les opérations suivantes : - Tester des requêtes dans le l'environnement de test et utiliser les détails du réseau pour prendre des décisions éclairées. -- Signaler des GRT sur votre propre subgraph ou sur les subgraphs des autres pour informer les Indexeurs de son importance et de sa qualité. +- Signalez des GRT sur votre propre subgraph ou sur les subgraphs d'autres personnes afin de sensibiliser les Indexeurs à son importance et à sa qualité. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - Ce point est essentiel, car le fait de signaler un subgraph encourage son indexation, ce qui signifie qu'il finira par apparaître sur le réseau pour répondre aux requêtes.
-![Explorer Image 2](/img/Subgraph-Details.png) +![Image 2 de l'Explorer](/img/Subgraph-Details.png) -Sur la page dédiée de chaque subgraph, vous pouvez faire ce qui suit : +Sur la page dédiée à chaque subgraph, vous pouvez effectuer les opérations suivantes : -- Signal/Un-signal sur les subgraphs +- Signaler/Dé-signaler sur les subgraphs - Afficher plus de détails tels que des graphs, l'ID de déploiement actuel et d'autres métadonnées - Passer d'une version à l'autre pour explorer les itérations passées du subgraph - Interroger les subgraphs via GraphQL - Tester les subgraphs dans le playground -- Afficher les indexeurs qui indexent sur un certain subgraph +- Voir les Indexeurs qui indexent un certain subgraph - Statistiques du subgraph (allocations, conservateurs, etc.) - Afficher l'entité qui a publié le subgraph -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![Image 3 de l'Explorer](/img/Explorer-Signal-Unsignal.png) -### Delegate Page +### Page de délégation -On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer. +Sur la [page de délégation](https://thegraph.com/explorer/delegate?chain=arbitrum-one), vous trouverez des informations sur la délégation, l'acquisition de GRT et le choix d'un Indexeur. -On this page, you can see the following: +Sur cette page, vous pouvez voir les éléments suivants : -- Indexers who collected the most query fees -- Indexers with the highest estimated APR +- Indexeurs ayant perçu le plus de frais de requête +- Indexeurs avec l'APR estimé le plus élevé -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +En outre, vous pouvez calculer votre retour sur investissement et rechercher les meilleurs Indexeurs par nom, adresse ou subgraph.
-### Participants Page +### Page des participants -This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. +Cette page offre une vue d'ensemble de tous les "participants", c'est-à-dire de toutes les personnes qui participent au réseau, telles que les Indexeurs, les Délégateurs et les Curateurs. #### 1. Indexeurs -![Explorer Image 4](/img/Indexer-Pane.png) +![Image 4 de l'Explorer](/img/Indexer-Pane.png) -Les Indexeurs sont la colonne vertébrale du protocole. Ils stakent sur les subgraphs, les indexent et servent les requêtes à quiconque consomme les subgraphs. +Les Indexeurs constituent la colonne vertébrale du protocole. Ils stakent sur les subgraphs, les indexent et servent les requêtes à tous ceux qui consomment des subgraphs. -Dans le tableau des Indexeurs, vous pouvez voir les paramètres de délégation des Indexeurs, leur staking, combien ils ont staké sur chaque subgraph et combien de revenus ils ont généré à partir des frais de requête et des récompenses d'indexation. +Dans le tableau des Indexeurs, vous pouvez voir les paramètres de délégation d'un Indexeur, son staking, le montant qu'il a staké sur chaque subgraph et le revenu qu'il a tiré des frais de requête et des récompenses d'indexation. Spécificités @@ -74,7 +74,7 @@ Spécificités - Cooldown Remaining - le temps restant avant que l'Indexeur puisse modifier les paramètres de délégation ci-dessus. Les périodes de cooldown sont définies par les Indexeurs lorsqu'ils mettent à jour leurs paramètres de délégation. - Owned - Il s'agit du staking de l'Indexeur, qui peut être partiellement confisquée en cas de comportement malveillant ou incorrect. - Delegated - Le staking des Délégateurs qui peut être allouée par l'Indexeur, mais ne peut pas être confisquée. -- Allocated - Le staking les Indexeurs allouent activement aux subgraphs qu'ils indexent.
+- Alloué - Le staking que les Indexeurs allouent activement aux subgraphs qu'ils indexent. - Available Delegation Capacity - le staking délégué que les Indexeurs peuvent encore recevoir avant d'être sur-délégués. - Capacité de délégation maximale : montant maximum de participation déléguée que l'indexeur peut accepter de manière productive. Une mise déléguée excédentaire ne peut pas être utilisée pour le calcul des allocations ou des récompenses. - Query Fees - il s'agit du total des frais que les utilisateurs finaux ont payés pour les requêtes d'un Indexeur au fil du temps. @@ -84,16 +84,16 @@ Les Indexeurs peuvent gagner à la fois des frais de requête et des récompense - Les paramètres d'indexation peuvent être définis en cliquant sur le côté droit du tableau ou en accédant au profil d'un Indexeur et en cliquant sur le bouton "Delegate ". -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +Pour en savoir plus sur la façon de devenir Indexeur, vous pouvez consulter la [documentation officielle](/indexing/overview/) ou les [guides de l'Indexeur de The Graph Academy](https://thegraph.academy/delegators/choosing-indexers/). -![Indexing details pane](/img/Indexing-Details-Pane.png) +![Volet Détails de l'indexation](/img/Indexing-Details-Pane.png) #### 2. Curateurs -Les Curateurs analysent les subgraphs pour identifier ceux de la plus haute qualité. Une fois qu'un Curateur a trouvé un subgraph potentiellement de haute qualité, il peut le curer en le signalant sur sa courbe de liaison. Ce faisant, les Curateurs informent les Indexeurs des subgraphs de haute qualité qui doivent être indexés. +Les Curateurs analysent les subgraphs afin d'identifier ceux qui sont de la plus haute qualité.
Une fois qu'un Curateur a trouvé un subgraph potentiellement de haute qualité, il peut le curer en le signalant sur sa courbe de liaison. Ce faisant, les Curateurs indiquent aux Indexeurs quels subgraphs sont de haute qualité et devraient être indexés. - Les Curateurs peuvent être des membres de la communauté, des consommateurs de données ou même des développeurs de subgraphs qui signalent leurs propres subgraphs en déposant des jetons GRT dans une courbe de liaison. - - En déposant des GRT, les Curateurs mintent des actions de curation d'un subgraph. En conséquence, ils peuvent gagner une partie des frais de requête générés par le subgraph sur lequel ils ont signalé. + - En déposant des GRT, les Curateurs acquièrent des parts de curation d'un subgraph. Ils peuvent ainsi gagner une partie des frais de requête générés par le subgraph qu'ils ont signalé. - La courbe de liaison incite les Curateurs à curer les sources de données de la plus haute qualité. Dans le tableau des Curateurs ci-dessous, vous pouvez voir : @@ -102,9 +102,9 @@ Dans le tableau des Curateurs ci-dessous, vous pouvez voir : - Le nombre de GRT déposés - Nombre d'actions détenues par un curateur -![Explorer Image 6](/img/Curation-Overview.png) +![Image 6 de l'Explorer](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/). +Si vous souhaitez en savoir plus sur le rôle de Curateur, vous pouvez consulter la [documentation officielle](/resources/roles/curating/) ou [The Graph Academy](https://thegraph.academy/curators/). #### 3. Délégués @@ -112,24 +112,24 @@ Les Délégateurs jouent un rôle clé dans le maintien de la sécurité et de l - Sans Délégateurs, les Indexeurs sont moins susceptibles de gagner des récompenses et des frais importants.
Par conséquent, les Indexeurs attirent les Délégateurs en leur offrant une partie de leurs récompenses d'indexation et de leurs frais de requête. - Les Délégateurs sélectionnent leurs Indexeurs selon divers critères, telles que les performances passées, les taux de récompense d'indexation et le partage des frais. -- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/). +- La réputation au sein de la communauté peut également jouer un rôle dans le processus de sélection. Il est recommandé d'entrer en contact avec les Indexeurs sélectionnés via le [Discord de The Graph](https://discord.gg/graphprotocol) ou le [Forum de The Graph](https://forum.thegraph.com/). -![Explorer Image 7](/img/Delegation-Overview.png) +![Image 7 de l'Explorer](/img/Delegation-Overview.png) Dans le tableau des Délégateurs, vous pouvez voir les Délégateurs actifs dans la communauté et les métriques importantes : - Le nombre d’indexeurs auxquels un délégant délègue -- A Delegator's original delegation +- La délégation initiale d'un Délégateur - Les récompenses qu'ils ont accumulées mais qu'ils n'ont pas retirées du protocole - Les récompenses obtenues qu'ils ont retirées du protocole - Quantité totale de GRT qu'ils ont actuellement dans le protocole - La date de leur dernière délégation -If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +Si vous souhaitez en savoir plus sur la façon de devenir Délégateur, consultez la [documentation officielle](/resources/roles/delegating/delegating/) ou [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page +### Page du réseau -On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +Sur cette page, vous pouvez voir les KPIs globaux et avoir la possibilité de passer à une base par époque et d'analyser les métriques du réseau plus en détail. Ces détails vous donneront une idée des performances du réseau au fil du temps. #### Aperçu @@ -144,10 +144,10 @@ La section d'aperçu présente à la fois toutes les métriques actuelles du ré Quelques détails clés à noter : -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Les frais de requête représentent les frais générés par les consommateurs**. Ils peuvent être réclamés (ou non) par les Indexeurs après une période d'au moins 7 époques (voir ci-dessous) après que leurs allocations vers les subgraphs ont été clôturées et que les données qu'ils ont servies ont été validées par les consommateurs.
+- **Les récompenses d'indexation représentent le montant des récompenses que les Indexeurs ont réclamées de l'émission du réseau au cours de l'époque.** Bien que l'émission du protocole soit fixe, les récompenses ne sont mintées qu'une fois que les Indexeurs ont fermé leurs allocations vers les subgraphs qu'ils ont indexés. Ainsi, le nombre de récompenses par époque varie (par exemple, au cours de certaines époques, les Indexeurs peuvent avoir fermé collectivement des allocations qui étaient ouvertes depuis plusieurs jours). -![Explorer Image 8](/img/Network-Stats.png) +![Image 8 de l'Explorer](/img/Network-Stats.png) #### Époques @@ -161,7 +161,7 @@ Dans la section Époques, vous pouvez analyser, époque par époque, des métriq - Les époques de distribution sont les époques au cours desquelles les canaux d'État pour les époques sont réglés et les indexeurs peuvent réclamer leurs remises sur les frais de requête. - Les époques finalisées sont les époques qui n'ont plus de remboursements de frais de requête à réclamer par les Indexeurs. -![Explorer Image 9](/img/Epoch-Stats.png) +![Image 9 de l'Explorer](/img/Epoch-Stats.png) ## Votre profil d'utilisateur @@ -174,19 +174,19 @@ Dans cette section, vous pouvez voir ce qui suit : - Toutes les actions en cours que vous avez effectuées. - Les informations de votre profil, description et site web (si vous en avez ajouté un). -![Explorer Image 10](/img/Profile-Overview.png) +![Image 10 de l'Explorer](/img/Profile-Overview.png) ### Onglet Subgraphs -Dans l'onglet Subgraphs, vous verrez vos subgraphs publiés. +Dans l'onglet Subgraphs, vous verrez vos subgraphs publiés. -> Ceci n'inclura pas les subgraphs déployés avec la CLI à des fins de test. Les subgraphs n'apparaîtront que lorsqu'ils sont publiés sur le réseau décentralisé. +> Cela n'inclut pas les subgraphs déployés avec la CLI à des fins de test. Les subgraphs n'apparaîtront que lorsqu'ils seront publiés sur le réseau décentralisé.
-![Explorer Image 11](/img/Subgraphs-Overview.png) +![Image 11 de l'Explorer](/img/Subgraphs-Overview.png) ### Onglet Indexation -Dans l'onglet Indexation, vous trouverez un tableau avec toutes les allocations actives et historiques via-à-vis des subgraphs. Vous trouverez également des graphiques où vous pourrez voir et analyser vos performances passées en tant qu'Indexeur. +Dans l'onglet Indexation, vous trouverez un tableau avec toutes les allocations actives et historiques vers les subgraphs. Vous trouverez également des graphiques qui vous permettront de voir et d'analyser vos performances passées en tant qu'Indexeur. Cette section comprendra également des détails sur vos récompenses nettes d'indexeur et vos frais de requête nets. Vous verrez les métriques suivantes : @@ -197,7 +197,7 @@ Cette section comprendra également des détails sur vos récompenses nettes d'i - Récompenses de l'indexeur - le montant total des récompenses de l'indexeur que vous avez reçues, en GRT - Possédé : votre mise déposée, qui pourrait être réduite en cas de comportement malveillant ou incorrect -![Explorer Image 12](/img/Indexer-Stats.png) +![Image 12 de l'Explorer](/img/Indexer-Stats.png) ### Onglet Délégation @@ -219,20 +219,20 @@ Les boutons situés à droite du tableau vous permettent de gérer votre délég Gardez à l'esprit que ce graph peut être parcouru horizontalement, donc si vous le faites défiler jusqu'à la droite, vous pouvez également voir le statut de votre délégation (en cours de délégation, non-déléguée, en cours de retrait). -![Explorer Image 13](/img/Delegation-Stats.png) +![Image 13 de l'Explorer](/img/Delegation-Stats.png) ### Onglet Conservation -Dans l'onglet Curation, vous trouverez tous les subgraphs vous signalez (vous permettant ainsi de recevoir des frais de requête). La signalisation permet aux conservateurs de mettre en évidence aux indexeurs quels subgraphs sont précieux et dignes de confiance, signalant ainsi qu'ils doivent être indexés. 
+Dans l'onglet Curation, vous trouverez tous les subgraphs que vous signalez (ce qui vous permet de recevoir des frais de requête). La signalisation permet aux Curateurs d'indiquer aux Indexeurs les subgraphs qui ont de la valeur et qui sont dignes de confiance, signalant ainsi qu'ils doivent être indexés. Dans cet onglet, vous trouverez un aperçu de : -- Tous les subgraphs sur lesquels vous êtes en train de curer avec les détails du signal -- Partager les totaux par subgraph -- Récompenses de requête par subraph +- Tous les subgraphs sur lesquels vous êtes Curateur avec les détails des signaux +- Total des parts par Subgraph +- Récompenses pour les requêtes par subgraph - Détails mis à jour -![Explorer Image 14](/img/Curation-Stats.png) +![Image 14 de l'Explorer](/img/Curation-Stats.png) ### Paramètres de votre profil @@ -241,11 +241,11 @@ Dans votre profil utilisateur, vous pourrez gérer les détails de votre profil - Les opérateurs effectuent des actions limitées dans le protocole au nom de l'indexeur, telles que l'ouverture et la clôture des allocations. Les opérateurs sont généralement d'autres adresses Ethereum, distinctes de leur portefeuille de jalonnement, avec un accès sécurisé au réseau que les indexeurs peuvent définir personnellement - Les paramètres de délégation vous permettent de contrôler la répartition des GRT entre vous et vos délégués. -![Explorer Image 15](/img/Profile-Settings.png) +![Image 15 de l'Explorer](/img/Profile-Settings.png) En tant que portail officiel dans le monde des données décentralisées, Graph Explorer vous permet de prendre diverses actions, quel que soit votre rôle dans le réseau. Vous pouvez accéder aux paramètres de votre profil en ouvrant le menu déroulant à côté de votre adresse, puis en cliquant sur le bouton Paramètres. 
-![Wallet details](/img/Wallet-Details.png) +![détails du portefeuille](/img/Wallet-Details.png) ## Ressources supplémentaires diff --git a/website/src/pages/fr/subgraphs/guides/_meta.js b/website/src/pages/fr/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/fr/subgraphs/guides/_meta.js +++ b/website/src/pages/fr/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/fr/subgraphs/guides/arweave.mdx b/website/src/pages/fr/subgraphs/guides/arweave.mdx index 08e6c4257268..f888e87bd16e 100644 --- a/website/src/pages/fr/subgraphs/guides/arweave.mdx +++ b/website/src/pages/fr/subgraphs/guides/arweave.mdx @@ -1,50 +1,50 @@ --- -title: Building Subgraphs on Arweave +title: Construction de subgraphs pour Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! +> La prise en charge d'Arweave dans Graph Node et dans Subgraph Studio est en beta : n'hésitez pas à nous contacter sur [Discord](https://discord.gg/graphprotocol) pour toute question concernant la construction de subgraphs Arweave ! -In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +Dans ce guide, vous apprendrez comment créer et déployer des subgraphs pour indexer la blockchain Arweave. -## What is Arweave? +## Qu’est-ce qu’Arweave ? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +Arweave est un protocole qui permet aux développeurs de stocker des données de façon permanente. C'est cette caractéristique qui constitue la principale différence entre Arweave et IPFS. 
En effet, IPFS n'a pas la caractéristique de permanence, et les fichiers stockés sur Arweave ne peuvent pas être modifiés ou supprimés. -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check: +Arweave a déjà construit de nombreuses bibliothèques pour intégrer le protocole dans plusieurs langages de programmation différents. Pour plus d'informations, vous pouvez consulter : - [Arwiki](https://arwiki.wiki/#/en/main) -- [Arweave Resources](https://www.arweave.org/build) +- [Ressources Arweave](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## Que sont les subgraphs Arweave ? -The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). +The Graph vous permet de créer des API ouvertes personnalisées appelées "Subgraphs". Les subgraphs sont utilisés pour indiquer aux Indexeurs (opérateurs de serveur) quelles données indexer sur une blockchain et enregistrer sur leurs serveurs afin que vous puissiez les interroger à tout moment à l'aide de [GraphQL](https://graphql.org/). -[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. +[Graph Node](https://github.com/graphprotocol/graph-node) est désormais capable d'indexer les données sur le protocole Arweave. L'intégration actuelle indexe uniquement Arweave en tant que blockchain (blocs et transactions), elle n'indexe pas encore les fichiers stockés.
-## Building an Arweave Subgraph +## Construire un subgraph Arweave -To be able to build and deploy Arweave Subgraphs, you need two packages: +Pour pouvoir créer et déployer des subgraphs Arweave, vous avez besoin de deux packages : -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` au-delà de la version 0.30.2 - C'est un outil en ligne de commande pour construire et déployer des subgraphs. [Cliquez ici](https://www.npmjs.com/package/@graphprotocol/graph-cli) pour le télécharger en utilisant `npm`. +2. `@graphprotocol/graph-ts` au-delà de la version 0.27.0 - Il s'agit d'une bibliothèque de types spécifiques aux subgraphs. [Cliquez ici](https://www.npmjs.com/package/@graphprotocol/graph-ts) pour le télécharger en utilisant `npm`. -## Subgraph's components +## Composants d'un subgraph -There are three components of a Subgraph: +Un subgraph se compose de trois éléments : -### 1. Manifest - `subgraph.yaml` +### 1. Le Manifeste - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +Définit les sources de données d'intérêt et la manière dont elles doivent être traitées. Arweave est un nouveau type de source de données. -### 2. Schema - `schema.graphql` +### 2. Schéma - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +Vous définissez ici les données que vous souhaitez pouvoir interroger après avoir indexé votre subgraph à l'aide de GraphQL.
Ceci est en fait similaire à un modèle pour une API, où le modèle définit la structure d'un corps de requête. -The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +Les exigences relatives aux subgraphs Arweave sont couvertes par la [documentation existante](/developing/creating-a-subgraph/#the-graphql-schema). -### 3. AssemblyScript Mappings - `mapping.ts` +### 3. Mappages en AssemblyScript - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +Il s'agit de la logique qui détermine comment les données doivent être récupérées et stockées lorsqu'une personne interagit avec les sources de données que vous écoutez. Les données sont traduites et stockées sur la base du schéma que vous avez répertorié. During Subgraph development there are two key commands: @@ -53,9 +53,9 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## Définition du manifeste du subgraph -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: +Le manifeste du subgraph `subgraph.yaml` identifie les sources de données pour le subgraph, les déclencheurs d'intérêt et les fonctions qui doivent être exécutées en réponse à ces déclencheurs.
Voir ci-dessous un exemple de manifeste de subgraph pour un subgraph Arweave : ```yaml specVersion: 1.3.0 @@ -82,30 +82,30 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave Subgraphs introduce a new kind of data source (`arweave`) -- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Les subgraphs Arweave introduisent un nouveau type de source de données (`arweave`) +- Le réseau doit correspondre à un réseau sur le Graph Node hôte. Dans Subgraph Studio, le réseau principal d'Arweave est `arweave-mainnet` +- Les sources de données Arweave introduisent un champ source.owner facultatif, qui est la clé publique d'un portefeuille Arweave -Arweave data sources support two types of handlers: +Les sources de données Arweave prennent en charge deux types de gestionnaires : -- `blockHandlers` - Run on every new Arweave block. No source.owner is required. -- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` +- `blockHandlers` - Exécuté sur chaque nouveau bloc Arweave. Aucun source.owner n'est requis. +- `transactionHandlers` - Exécuté sur chaque transaction dont le `source.owner` de la source de données est le propriétaire. Actuellement, un propriétaire est requis pour `transactionHandlers` ; si les utilisateurs veulent traiter toutes les transactions, ils doivent fournir "" comme `source.owner` -> The source.owner can be the owner's address, or their Public Key. +> Source.owner peut être l’adresse du propriétaire ou sa clé publique. +> +> Les transactions sont les éléments constitutifs du permaweb Arweave et ce sont des objets créés par les utilisateurs finaux.
+> +> Note : Les transactions [Irys (anciennement Bundlr)](https://irys.xyz/) ne sont pas encore prises en charge. -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. +## Définition de schéma -> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. +La définition du schéma décrit la structure de la base de données Subgraph résultante et les relations entre les entités. Elle est indépendante de la source de données d'origine. Vous trouverez plus de détails sur la définition du schéma du subgraph [ici](/developing/creating-a-subgraph/#the-graphql-schema). -## Schema Definition +## Mappages AssemblyScript -Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Les gestionnaires d'événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/). -## AssemblyScript Mappings - -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). - -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +L'indexation Arweave introduit des types de données spécifiques à Arweave dans l'[API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/).
+L'écriture des mappages d'un subgraph Arweave est très similaire à l'écriture des mappages d'un subgraph Ethereum. Pour plus d'informations, cliquez [ici](/developing/creating-a-subgraph/#writing-mappings). -## Deploying an Arweave Subgraph in Subgraph Studio +## Déploiement d'un subgraph Arweave dans Subgraph Studio -Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +Une fois que votre subgraph a été créé sur le tableau de bord de Subgraph Studio, vous pouvez le déployer en utilisant la commande CLI `graph deploy`. ```bash -graph deploy --access-token +graph deploy --access-token ``` -## Querying an Arweave Subgraph +## Interroger un subgraph Arweave -The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +L'endpoint GraphQL pour les subgraphs Arweave est déterminé par la définition du schéma, avec l'interface API existante. Veuillez consulter la [documentation API GraphQL](/subgraphs/querying/graphql-api/) pour plus d'informations. -## Example Subgraphs +## Exemples de subgraphs -Here is an example Subgraph for reference: +Voici un exemple de subgraph à titre de référence : -- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Exemple de subgraph pour Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a Subgraph index Arweave and other chains? +### Un subgraph peut-il indexer Arweave et d'autres blockchains ? No, a Subgraph can only support data sources from one chain/network. -### Can I index the stored files on Arweave? +### Puis-je indexer les fichiers enregistrés sur Arweave ? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).
+Actuellement, The Graph n'indexe Arweave qu'en tant que blockchain (ses blocs et ses transactions). -### Can I identify Bundlr bundles in my Subgraph? +### Puis-je identifier les packages de Bundlr dans mon subgraph ? -This is not currently supported. +Cette fonction n'est pas prise en charge actuellement. -### How can I filter transactions to a specific account? +### Comment puis-je filtrer les transactions sur un compte spécifique ? -The source.owner can be the user's public key or account address. +La source.owner peut être la clé publique de l'utilisateur ou l'adresse de son compte. -### What is the current encryption format? +### Quel est le format de chiffrement actuel ? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Les données sont généralement passées dans les mappages sous forme de Bytes, qui, s'ils sont stockés directement, sont renvoyés dans le subgraph dans un format `hex` (par exemple, les hash de blocs et de transactions). Vous pouvez vouloir convertir en un format `base64` ou `base64 URL` dans vos mappages, afin de correspondre à ce qui est affiché dans les explorateurs de blocs comme [Arweave Explorer](https://viewblock.io/arweave/). 
-The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: +La fonction utilitaire `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` suivante peut être utilisée, et sera ajoutée à `graph-ts` : ``` const base64Alphabet = [ @@ -219,14 +219,14 @@ function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)]; result += alphabet[bytes[i] & 0x3F]; } - if (i === l + 1) { // 1 octet yet to write + if (i === l + 1) { // 1 octet restant à écrire result += alphabet[bytes[i - 2] >> 2]; result += alphabet[(bytes[i - 2] & 0x03) << 4]; if (!urlSafe) { result += "=="; } } - if (!urlSafe && i === l) { // 2 octets yet to write + if (!urlSafe && i === l) { // 2 octets restants à écrire result += alphabet[bytes[i - 2] >> 2]; result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; result += alphabet[(bytes[i - 1] & 0x0F) << 2]; diff --git a/website/src/pages/fr/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/fr/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..f6ba3015de68 100644 --- a/website/src/pages/fr/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/fr/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Aperçu -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+ +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prérequis + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. 
-### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +ou bien ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. 
diff --git a/website/src/pages/fr/subgraphs/guides/enums.mdx b/website/src/pages/fr/subgraphs/guides/enums.mdx index 9f55ae07c54b..53daa9ce4993 100644 --- a/website/src/pages/fr/subgraphs/guides/enums.mdx +++ b/website/src/pages/fr/subgraphs/guides/enums.mdx @@ -1,20 +1,20 @@ --- -title: Categorize NFT Marketplaces Using Enums +title: Catégoriser les marketplaces NFT à l’aide d’Enums --- -Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. +Utilisez des Enums pour rendre votre code plus propre et moins sujet aux erreurs. Voici un exemple complet d'utilisation des Enums sur les marketplaces NFT. -## What are Enums? +## Que sont les Enums ? -Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. +Les Enums, ou types d'énumération, sont un type de données spécifique qui vous permet de définir un ensemble de valeurs spécifiques et autorisées. -### Example of Enums in Your Schema +### Exemple d'Enums dans Votre Schéma -If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +Si vous construisez un subgraph pour suivre l'historique de la propriété des jetons sur une marketplace, chaque jeton peut passer par différents propriétaires, tels que `OriginalOwner`, `SecondOwner`, et `ThirdOwner`. En utilisant des enums, vous pouvez définir ces propriétaires spécifiques, en vous assurant que seules des valeurs prédéfinies sont assignées. -You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. 
+Vous pouvez définir des enums dans votre schéma et, une fois définis, vous pouvez utiliser la représentation en chaîne de caractères des valeurs enum pour définir un champ enum sur une entité. -Here's what an enum definition might look like in your schema, based on the example above: +Voici à quoi pourrait ressembler une définition d'enum dans votre schéma, basée sur l'exemple ci-dessus : ```graphql enum TokenStatus { @@ -24,109 +24,109 @@ enum TokenStatus { } ``` -This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. +Ceci signifie que lorsque vous utilisez le type `TokenStatus` dans votre schéma, vous vous attendez à ce qu'il soit exactement l'une des valeurs prédéfinies : `OriginalOwner`, `SecondOwner`, ou `ThirdOwner`, garantissant la cohérence et la validité des données. -To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). +Pour en savoir plus sur les enums, consultez [Création d'un Subgraph](/developing/creating-a-subgraph/#enums) et la [documentation GraphQL](https://graphql.org/learn/schema/#enumeration-types). -## Benefits of Using Enums +## Avantages de l'Utilisation des Enums -- **Clarity:** Enums provide meaningful names for values, making data easier to understand. -- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. -- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. +- **Clarté** : Les enums fournissent des noms significatifs pour les valeurs, rendant les données plus faciles à comprendre. +- **Validation** : Les enums imposent des définitions de valeurs strictes, empêchant les entrées de données invalides.
+- **Maintenabilité** : Lorsque vous avez besoin de changer ou d'ajouter de nouvelles catégories, les enums vous permettent de le faire de manière ciblée. -### Without Enums +### Sans Enums -If you choose to define the type as a string instead of using an Enum, your code might look like this: +Si vous choisissez de définir le type comme une chaîne de caractères au lieu d'utiliser un Enum, votre code pourrait ressembler à ceci : ```graphql type Token @entity { id: ID! tokenId: BigInt! - owner: Bytes! # Owner of the token - tokenStatus: String! # String field to track token status + owner: Bytes! # Propriétaire du jeton + tokenStatus: String! # Champ de type chaîne pour suivre l'état du jeton timestamp: BigInt! } ``` -In this schema, `TokenStatus` is a simple string with no specific, allowed values. +Dans ce schéma, `TokenStatus` est une simple chaîne de caractères sans valeurs spécifiques autorisées. -#### Why is this a problem? +#### Pourquoi est-ce un problème ? -- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. -- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. +- Il n'y a aucune restriction sur les valeurs de `TokenStatus` : n’importe quelle chaîne de caractères peut être affectée par inadvertance. Difficile donc de s'assurer que seules des valeurs valides comme `OriginalOwner`, `SecondOwner`, ou `ThirdOwner` soient utilisées. +- Il est facile de faire des fautes de frappe comme `Orgnalowner` au lieu de `OriginalOwner`, rendant les données et les requêtes potentielles peu fiables. -### With Enums +### Avec Enums -Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.
+Au lieu d'assigner des chaînes de caractères libres, vous pouvez définir un enum pour `TokenStatus` avec des valeurs spécifiques : `OriginalOwner`, `SecondOwner`, ou `ThirdOwner`. L'utilisation d'un enum garantit que seules les valeurs autorisées sont utilisées. -Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. +Les Enums assurent la sécurité des types, minimisent les risques de fautes de frappe et garantissent des résultats cohérents et fiables. -## Defining Enums for NFT Marketplaces +## Définition des Enums pour les Marketplaces NFT -> Note: The following guide uses the CryptoCoven NFT smart contract. +> Note : Le guide suivant utilise le smart contract CryptoCoven NFT. -To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: +Pour définir des énumérations pour les différentes marketplaces où les NFT sont échangés, utilisez ce qui suit dans votre schéma de subgraph : ```gql -# Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) +# Enum pour les Marketplaces avec lesquelles le contrat CryptoCoven a interagi (probablement une vente ou un mint) enum Marketplace { - OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the marketplace - OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace - SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace - LooksRare # Represents when a CryptoCoven NFT is traded on the LookRare marketplace - # ...and other marketplaces + OpenSeaV1 # Représente lorsqu'un NFT CryptoCoven est échangé sur la marketplace + OpenSeaV2 # Représente lorsqu'un NFT CryptoCoven est échangé sur la marketplace OpenSeaV2 + SeaPort # Représente lorsqu'un NFT CryptoCoven est échangé sur la marketplace SeaPort + LooksRare # Représente lorsqu'un NFT CryptoCoven est échangé sur la marketplace LooksRare + # ...et d'autres marketplaces } ``` -## Using Enums for NFT
Marketplaces +## Utilisation des Enums pour les Marketplaces NFT -Once defined, enums can be used throughout your Subgraph to categorize transactions or events. +Une fois définis, les enums peuvent être utilisés dans l'ensemble du subgraph pour classer les transactions ou les événements. -For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. +Par exemple, lors de la journalisation des ventes de NFT, vous pouvez spécifier la marketplace impliquée dans la transaction en utilisant l'enum. -### Implementing a Function for NFT Marketplaces +### Implémenter une Fonction pour les Marketplaces NFT -Here's how you can implement a function to retrieve the marketplace name from the enum as a string: +Voici comment vous pouvez implémenter une fonction pour récupérer le nom de la marketplace à partir de l'enum sous forme de chaîne de caractères : ```ts export function getMarketplaceName(marketplace: Marketplace): string { - // Using if-else statements to map the enum value to a string + // Utilisation des instructions if-else pour mapper la valeur de l'enum à une chaîne de caractères if (marketplace === Marketplace.OpenSeaV1) { - return 'OpenSeaV1' // If the marketplace is OpenSea, return its string representation + return 'OpenSeaV1' // Si la marketplace est OpenSea, renvoie sa représentation en chaîne de caractères } else if (marketplace === Marketplace.OpenSeaV2) { return 'OpenSeaV2' } else if (marketplace === Marketplace.SeaPort) { - return 'SeaPort' // If the marketplace is SeaPort, return its string representation + return 'SeaPort' // Si la marketplace est SeaPort, renvoie sa représentation en chaîne de caractères } else if (marketplace === Marketplace.LooksRare) { - return 'LooksRare' // If the marketplace is LooksRare, return its string representation - // ... and other market places + return 'LooksRare' // Si la marketplace est LooksRare, renvoie sa représentation en chaîne de caractères + // ...
et d'autres marketplaces } } ``` -## Best Practices for Using Enums +## Bonnes Pratiques pour l'Utilisation des Enums -- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability. -- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth. -- **Documentation:** Add comments to enum to clarify their purpose and usage. +- **Nommer avec cohérence** : Utilisez des noms clairs et descriptifs pour les valeurs d'enum pour améliorer la lisibilité. +- **Gestion Centralisée** : Gardez les enums dans un fichier unique pour plus de cohérence. Ainsi, il est plus simple de les mettre à jour et de garantir qu’ils sont votre unique source de vérité. +- **Documentation** : Ajoutez des commentaires aux enums pour clarifier leur objectif et leur utilisation. -## Using Enums in Queries +## Utilisation des Enums dans les Requêtes -Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values. +Les enums dans les requêtes aident à améliorer la qualité des données et à rendre les résultats plus faciles à interpréter. Ils fonctionnent comme des filtres et des éléments de réponse, assurant la cohérence et réduisant les erreurs dans les valeurs des marketplaces. -**Specifics** +**Spécificités** -- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces. -- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate. +- **Filtrer avec des Enums** : Les Enums offrent des filtres clairs, vous permettant d’inclure ou d’exclure facilement des marketplaces spécifiques.
+- **Enums dans les Réponses** : Les Enums garantissent que seules des valeurs de marketplace reconnues sont renvoyées, ce qui rend les résultats standardisés et précis. -### Sample Queries +### Exemples de requêtes -#### Query 1: Account With The Highest NFT Marketplace Interactions +#### Requête 1 : Compte avec le Plus d'Interactions sur les Marketplaces NFT -This query does the following: +Cette requête fait ce qui suit : -- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity. -- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. +- Elle trouve le compte avec le plus grand nombre unique d'interactions sur les marketplaces NFT, ce qui est excellent pour analyser l'activité inter-marketplaces. +- Le champ marketplaces utilise l'enum marketplace, garantissant des valeurs de marketplace cohérentes et validées dans la réponse. ```gql { @@ -137,15 +137,15 @@ This query does the following: totalSpent uniqueMarketplacesCount marketplaces { - marketplace # This field returns the enum value representing the marketplace + marketplace # Ce champ retourne la valeur enum représentant la marketplace } } } ``` -#### Returns +#### Résultats -This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: +Cette réponse fournit les détails du compte et une liste des interactions uniques sur les marketplaces avec des valeurs enum pour une clarté standardisée : ```gql { @@ -186,12 +186,12 @@ This response provides account details and a list of unique marketplace interact } ``` -#### Query 2: Most Active Marketplace for CryptoCoven transactions +#### Requête 2 : Marketplace la Plus Active pour les Transactions CryptoCoven -This query does the following: +Cette requête fait ce qui suit : -- It identifies the marketplace with the highest volume of CryptoCoven transactions.
-- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. +- Elle identifie la marketplace avec le plus grand volume de transactions CryptoCoven. +- Elle utilise l'enum marketplace pour s'assurer que seuls les types de marketplace valides apparaissent dans la réponse, ajoutant fiabilité et cohérence à vos données. ```gql { @@ -202,9 +202,9 @@ This query does the following: } ``` -#### Result 2 +#### Résultat 2 -The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: +La réponse attendue inclut la marketplace et le nombre de transactions correspondant, en utilisant l'enum pour indiquer le type de marketplace : ```gql { @@ -219,12 +219,12 @@ The expected response includes the marketplace and the corresponding transaction } ``` -#### Query 3: Marketplace Interactions with High Transaction Counts +#### Requête 3 : Interactions sur les marketplaces avec un haut volume de transactions -This query does the following: +Cette requête fait ce qui suit : -- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. -- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. +- Elle récupère les quatre principales marketplaces avec plus de 100 transactions, en excluant les marketplaces "Unknown". +- Elle utilise des enums comme filtres pour s'assurer que seuls les types de marketplace valides sont inclus, augmentant ainsi la précision.
```gql { @@ -240,9 +240,9 @@ This query does the following: } ``` -#### Result 3 +#### Résultat 3 -Expected output includes the marketplaces that meet the criteria, each represented by an enum value: +La sortie attendue inclut les marketplaces qui répondent aux critères, chacune représentée par une valeur enum : ```gql { @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## Ressources supplémentaires -For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). +Pour des informations supplémentaires, consultez le [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums) de ce guide. diff --git a/website/src/pages/fr/subgraphs/guides/grafting.mdx b/website/src/pages/fr/subgraphs/guides/grafting.mdx index d9abe0e70d2a..9a0dd2d5ca80 100644 --- a/website/src/pages/fr/subgraphs/guides/grafting.mdx +++ b/website/src/pages/fr/subgraphs/guides/grafting.mdx @@ -1,56 +1,56 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: Remplacer un contrat et conserver son historique grâce au « greffage » --- -In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. +Dans ce guide, vous apprendrez à construire et à déployer de nouveaux subgraphs en greffant des subgraphs existants. -## What is Grafting? +## Qu'est-ce que le greffage ? -Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. +Le greffage permet de réutiliser les données d'un subgraph existant et de commencer à les indexer à partir d'un bloc ultérieur.
Cette méthode est utile au cours du développement pour surmonter rapidement de simples erreurs dans les mappages ou pour rétablir temporairement le fonctionnement d'un subgraph existant après une défaillance. Elle peut également être utilisée lors de l'ajout d'une fonctionnalité à un subgraph dont l'indexation à partir de zéro prend beaucoup de temps. The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Il ajoute ou supprime des types d'entité +- Il supprime les attributs des types d'entité +- Il ajoute des attributs nullables aux types d'entités +- Il transforme les attributs non nullables en attributs nullables +- Il ajoute des valeurs aux enums +- Il ajoute ou supprime des interfaces +- Il modifie les types d'entités pour lesquels une interface est implémentée -For more information, you can check: +Pour plus d’informations, vous pouvez consulter : -- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) +- [Greffage](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. +Dans ce tutoriel, nous allons couvrir un cas d'utilisation de base. Nous remplacerons un contrat existant par un contrat identique (avec une nouvelle adresse, mais le même code).
Ensuite, nous grefferons le subgraph existant sur le subgraph "de base" qui suit le nouveau contrat. -## Important Note on Grafting When Upgrading to the Network +## Remarque importante sur le greffage lors de la mise à niveau vers le réseau -> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network +> **Attention** : Il est recommandé de ne pas utiliser le greffage pour les subgraphs publiés sur The Graph Network -### Why Is This Important? +### Pourquoi est-ce important ? -Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. +Le greffage est une fonctionnalité puissante qui vous permet de "greffer" un subgraph sur un autre, en transférant efficacement les données historiques du subgraph existant vers une nouvelle version. Il n'est pas possible de greffer un subgraph provenant de The Graph Network vers Subgraph Studio. -### Best Practices +### Les meilleures pratiques -**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. +**Migration initiale** : lorsque vous déployez pour la première fois votre subgraph sur le réseau décentralisé, faites-le sans greffage. Assurez-vous que le subgraph est stable et fonctionne comme prévu. -**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Mises à jour ultérieures** : une fois que votre subgraph est en ligne et stable sur le réseau décentralisé, vous pouvez utiliser le greffage pour les versions ultérieures afin de faciliter la transition et de préserver les données historiques.
-By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +En respectant ces lignes directrices, vous minimisez les risques et vous vous assurez que le processus de migration se déroule sans heurts. -## Building an Existing Subgraph +## Création d'un subgraph existant -Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: +La construction de subgraphs est une partie essentielle de The Graph, décrite plus en profondeur [ici](/subgraphs/quick-start/). Pour pouvoir construire et déployer le subgraph existant utilisé dans ce tutoriel, le dépôt suivant est fourni : -- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) +- [Dépôt d'exemples de subgraphs](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Remarque : le contrat utilisé dans le subgraph est tiré du [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit) suivant. -## Subgraph Manifest Definition +## Définition du manifeste du subgraph -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: +Le manifeste du subgraph `subgraph.yaml` identifie les sources de données pour le subgraph, les déclencheurs intéressants et les fonctions qui doivent être exécutées en réponse à ces déclencheurs.
Vous trouverez ci-dessous un exemple de manifeste de subgraph que vous utiliserez : ```yaml specVersion: 1.3.0 @@ -79,32 +79,32 @@ dataSources: file: ./src/lock.ts ``` -- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` -- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. +- La source de données `Lock` correspond à l'ABI et à l'adresse du contrat que nous obtiendrons lorsque nous compilerons et déploierons le contrat +- Le réseau doit correspondre à un réseau indexé qui est interrogé. Comme nous fonctionnons sur le réseau de test Sepolia, le réseau est `sepolia` +- La section `mapping` définit les déclencheurs intéressants et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Dans ce cas, nous écoutons l'événement `Withdrawal` et appelons la fonction `handleWithdrawal` lorsqu'il est émis. -## Grafting Manifest Definition +## Définition de manifeste de greffage -Grafting requires adding two new items to the original Subgraph manifest: +Le greffage consiste à ajouter deux nouveaux éléments au manifeste original du subgraph : ```yaml --- features: - - grafting # feature name + - grafting # nom de la fonctionnalité graft: base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` -- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from.
The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. +- `features:` est une liste de tous les [noms de fonctionnalités](/developing/creating-a-subgraph/#experimental-features) utilisés. +- `graft:` indique le subgraph `base` et le bloc sur lequel se greffer. Le `block` est le numéro du bloc à partir duquel l'indexation doit commencer. The Graph copiera les données du subgraph de base jusqu'au bloc donné inclus, puis continuera à indexer le nouveau subgraph à partir de ce bloc. -The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting +Les valeurs `base` et `block` peuvent être trouvées en déployant deux subgraphs : l'un pour l'indexation de base et l'autre avec le greffage -## Deploying the Base Subgraph +## Déploiement du subgraph de base -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +1. Allez sur [Subgraph Studio](https://thegraph.com/studio/) et créez un subgraph sur le réseau de test Sepolia appelé `graft-example` +2. Suivez les instructions dans la section `AUTH & DEPLOY` sur votre page Subgraph dans le dossier `graft-example` du dépôt 3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +Cela renvoie quelque chose comme ceci : ``` { @@ -138,15 +138,15 @@ It returns something like this: } ``` -Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+Une fois que vous avez vérifié que le subgraph est correctement indexé, vous pouvez rapidement le mettre à jour par greffage. -## Deploying the Grafting Subgraph +## Déploiement du subgraph greffé -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +Le `subgraph.yaml` de remplacement pour le greffage aura une nouvelle adresse de contrat. Cela peut arriver lorsque vous mettez à jour votre dapp, redéployez un contrat, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +1. Allez sur [Subgraph Studio](https://thegraph.com/studio/) et créez un subgraph sur le réseau test de Sepolia appelé `graft-replacement` +2. Créez un nouveau manifeste. Le `subgraph.yaml` de `graph-replacement` contient une adresse de contrat différente et de nouvelles informations sur la façon dont il devrait se greffer. Il s'agit du `block` du [dernier événement émis](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) qui vous intéresse dans l'ancien contrat et de la `base` de l'ancien subgraph. L'ID du subgraph `base` est l'ID de déploiement de votre subgraph original `graph-example`. Vous pouvez le trouver dans Subgraph Studio. +3.
Suivez les instructions de la section `AUTH & DEPLOY` sur votre page Subgraph dans le dossier `graft-replacement` du dépôt 4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +Le résultat devrait être le suivant : ``` { @@ -185,18 +185,18 @@ It should return the following: } ``` -You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. +Vous pouvez voir que le subgraph `graft-replacement` indexe les anciennes données du `graph-example` et les nouvelles données de la nouvelle adresse du contrat. Le contrat original a émis deux événements `Withdrawal`, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) et [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). Le nouveau contrat a ensuite émis un événement `Withdrawal`, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af).
Les deux transactions précédemment indexées (événements 1 et 2) et la nouvelle transaction (événement 3) ont été combinées dans le subgraph `graft-replacement`. -Congrats! You have successfully grafted a Subgraph onto another Subgraph. +Félicitations ! Vous avez réussi à greffer un subgraph sur un autre subgraph. -## Additional Resources +## Ressources supplémentaires -If you want more experience with grafting, here are a few examples for popular contracts: +Si vous souhaitez acquérir plus d'expérience avec le greffage, voici quelques exemples pour des contrats populaires : - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) - [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), -To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results +Pour devenir encore plus expert sur The Graph, vous pouvez vous familiariser avec d'autres méthodes de gestion des modifications apportées aux sources de données sous-jacentes.
Des alternatives comme les [Modèles de sources de données](/developing/creating-a-subgraph/#data-source-templates) permettent d'obtenir des résultats similaires -> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) +> Note : De nombreux éléments de cet article ont été repris de l'article [Arweave](/subgraphs/cookbook/arweave/) publié précédemment diff --git a/website/src/pages/fr/subgraphs/guides/near.mdx b/website/src/pages/fr/subgraphs/guides/near.mdx index e78a69eb7fa2..71baadc8ba82 100644 --- a/website/src/pages/fr/subgraphs/guides/near.mdx +++ b/website/src/pages/fr/subgraphs/guides/near.mdx @@ -1,41 +1,41 @@ --- -title: Building Subgraphs on NEAR +title: Construction de subgraphs sur NEAR --- -This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +Ce guide est une introduction à la construction de subgraphs indexant des contrats intelligents sur la [blockchain NEAR](https://docs.near.org/). -## What is NEAR? +## Qu'est-ce que NEAR ? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## What are NEAR Subgraphs? +## Que sont les subgraphs NEAR ? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. +The Graph fournit aux développeurs des outils pour traiter les événements de la blockchain et rendre les données résultantes facilement accessibles via une API GraphQL, connue individuellement sous le nom de subgraph.
Le [Graph Node](https://github.com/graphprotocol/graph-node) est désormais capable de traiter les événements NEAR, ce qui signifie que les développeurs NEAR peuvent désormais créer des subgraphs pour indexer leurs contrats intelligents. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: +Les subgraphs sont basés sur les événements, ce qui signifie qu'ils écoutent et traitent les événements de la blockchain. Il existe actuellement deux types de gestionnaires pour les subgraphs NEAR : -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- Gestionnaires de blocs : ceux-ci sont exécutés à chaque nouveau bloc +- Gestionnaires de reçus : exécutés à chaque fois qu'un message est exécuté sur un compte spécifié [From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> Un reçu est le seul objet actionnable dans le système. Lorsque nous parlons de "traitement d'une transaction" sur la plateforme NEAR, cela signifie en fin de compte "appliquer des reçus" à un moment ou à un autre. -## Building a NEAR Subgraph +## Construction d'un subgraph NEAR -`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. +`@graphprotocol/graph-cli` est un outil en ligne de commande pour construire et déployer des subgraphs. -`@graphprotocol/graph-ts` is a library of Subgraph-specific types. +`@graphprotocol/graph-ts` est une bibliothèque de types spécifiques aux subgraphs. -NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. 
+Le développement de subgraphs NEAR nécessite une version de `graph-cli` supérieure à `0.23.0`, ainsi qu'une version de `graph-ts` supérieure à `0.23.0`. -> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. +> La construction d'un subgraph NEAR est très similaire à la construction d'un subgraph qui indexe Ethereum. -There are three aspects of Subgraph definition: +La définition d'un subgraph comporte trois aspects : -**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** le manifeste du subgraph, définissant les sources de données pertinentes et la manière dont elles doivent être traitées. NEAR est un nouveau `kind` (type) de source de données. -**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** un fichier de schéma qui définit les données stockées dans votre subgraph et la manière de les interroger via GraphQL. Les exigences pour les subgraphs NEAR sont couvertes par [la documentation existante](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
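Pour illustrer le rôle des mappings évoqués ci-dessus, voici une esquisse en JavaScript simple (hors graph-ts) de la traduction des données d'un reçu en entité du schéma ; les noms de champs (`signerId`, `logs`, `amount`) sont hypothétiques, et `JSON.parse` tient ici lieu du `json.fromString(...)` de graph-ts :

```javascript
// Esquisse illustrative (JavaScript simple, hors graph-ts) : un mapping
// traduit les données d'un reçu NEAR en une entité conforme au schéma.
// Les noms de champs utilisés ici sont hypothétiques.
function handleReceipt(receipt) {
  // Les logs NEAR sont souvent émis sous forme de JSON sérialisé ; dans
  // graph-ts, json.fromString(...) joue ce rôle. Ici, JSON.parse en tient lieu.
  const log = JSON.parse(receipt.logs[0])
  return {
    id: receipt.id,
    signerId: receipt.signerId,
    amount: log.amount,
  }
}

const entity = handleReceipt({
  id: 'receipt-1',
  signerId: 'app.good-morning.near',
  logs: ['{"amount":"42"}'],
})
console.log(entity.amount) // "42"
```

Dans un vrai mapping graph-ts, l'entité serait ensuite sauvegardée via le store de Graph Node ; cette esquisse ne montre que la transformation des données.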
@@ -46,19 +46,19 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -### Subgraph Manifest Definition +### Définition du manifeste du subgraph -The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: +Le manifeste du subgraph (`subgraph.yaml`) identifie les sources de données pour le subgraph, les déclencheurs pertinents et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Voir ci-dessous un exemple de manifeste de subgraph pour un subgraph NEAR : ```yaml specVersion: 1.3.0 schema: - file: ./src/schema.graphql # link to the schema file + file: ./src/schema.graphql # lien vers le fichier de schéma dataSources: - kind: near network: near-mainnet source: - account: app.good-morning.near # This data source will monitor this account + account: app.good-morning.near # Cette source de données surveillera ce compte startBlock: 10662188 # Required for NEAR mapping: apiVersion: 0.0.9 @@ -70,33 +70,33 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR Subgraphs introduce a new `kind` of data source (`near`) +- Les subgraphs NEAR introduisent un nouveau `kind` de source de données (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. ```yaml -accounts: - prefixes: - - app - - good +accounts: + prefixes: + - app + - good suffixes: - - morning.near - - morning.testnet + - morning.near + - morning.testnet ``` -NEAR data sources support two types of handlers: +Les sources de données NEAR prennent en charge deux types de gestionnaires : - `blockHandlers`: run on every new NEAR block. No `source.account` is required. - `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). -### Schema Definition +### Définition du schéma -Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +La définition du schéma décrit la structure de la base de données du subgraph résultant et les relations entre les entités. Elle est indépendante de la source de données d'origine. Vous trouverez plus de détails sur la définition du schéma du subgraph [ici](/developing/creating-a-subgraph/#the-graphql-schema). -### AssemblyScript Mappings +### Mappings AssemblyScript -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+Les gestionnaires d'événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/). NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. +Sinon, le reste de l'[AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) est à la disposition des développeurs de subgraphs NEAR pendant l'exécution du mappage. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. -## Deploying a NEAR Subgraph +## Déploiement d'un subgraph NEAR -Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Une fois que vous avez construit un subgraph, il est temps de le déployer sur Graph Node pour l'indexation. Les subgraphs NEAR peuvent être déployés sur n'importe quel Graph Node `>=v0.26.x` (cette version n'a pas encore été étiquetée et publiée). 
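Pour fixer les idées sur la correspondance de comptes du champ `source.accounts` décrite plus haut, voici une esquisse en JavaScript simple de la logique suggérée par l'exemple `[app|good].*[morning.near|morning.testnet]` ; la véritable implémentation se trouve dans Graph Node, ceci n'en est qu'une lecture illustrative :

```javascript
// Esquisse : un compte correspond s'il commence par l'un des préfixes ET se
// termine par l'un des suffixes ; une liste absente ou vide est considérée
// comme toujours satisfaite (lecture illustrative, non normative).
function matchesAccounts(account, { prefixes = [], suffixes = [] }) {
  const prefixOk = prefixes.length === 0 || prefixes.some((p) => account.startsWith(p))
  const suffixOk = suffixes.length === 0 || suffixes.some((s) => account.endsWith(s))
  return prefixOk && suffixOk
}

const spec = { prefixes: ['app', 'good'], suffixes: ['morning.near', 'morning.testnet'] }
console.log(matchesAccounts('app.good-morning.near', spec)) // true
console.log(matchesAccounts('evening.near', spec)) // false
```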
-Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: +Subgraph Studio et l'Indexeur de mise à niveau sur The Graph Network prennent actuellement en charge l'indexation du mainnet et du testnet NEAR en bêta, avec les noms de réseau suivants : - `near-mainnet` - `near-testnet` -More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +De plus amples informations sur la création et le déploiement de subgraphs sur Subgraph Studio sont disponibles [ici](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". +Pour commencer, la première étape consiste à "créer" votre subgraph, ce qui ne doit être fait qu'une seule fois. Sur Subgraph Studio, vous pouvez le faire à partir de [votre tableau de bord](https://thegraph.com/studio/) : "Créer un Subgraph".
-Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: +Une fois votre subgraph créé, vous pouvez le déployer en utilisant la commande CLI `graph deploy` : ```sh -$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # crée un subgraph sur un Graph Node local (sur Subgraph Studio, cela se fait via l'interface utilisateur) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # upload les fichiers de build vers un endpoint IPFS spécifié, puis déploie le subgraph vers un Graph Node spécifié sur la base du hash IPFS du manifeste ``` -The node configuration will depend on where the Subgraph is being deployed. +La configuration du nœud dépend de l'endroit où le subgraph est déployé. ### Subgraph Studio @@ -198,13 +198,13 @@ graph auth graph deploy ``` -### Local Graph Node (based on default configuration) +### Graph Node local (selon la configuration par défaut) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: +Une fois votre subgraph déployé, il sera indexé par Graph Node. Vous pouvez vérifier sa progression en interrogeant le subgraph lui-même : ```graphql { @@ -216,23 +216,23 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node.
You can } ``` -### Indexing NEAR with a Local Graph Node +### Indexation de NEAR avec un Graph Node local -Running a Graph Node that indexes NEAR has the following operational requirements: +L'exécution d'un Graph Node qui indexe NEAR nécessite les éléments opérationnels suivants : -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- NEAR Indexer Framework avec instrumentation Firehose +- Composant(s) NEAR Firehose +- Graph Node avec endpoint Firehose configuré -We will provide more information on running the above components soon. +Nous fournirons bientôt plus d'informations sur l'exécution des composants ci-dessus. -## Querying a NEAR Subgraph +## Interrogation d'un subgraph NEAR -The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +L'endpoint GraphQL pour les subgraphs NEAR est déterminé par la définition du schéma, avec l'interface API existante. Veuillez consulter la [documentation API GraphQL](/subgraphs/querying/graphql-api/) pour plus d'informations. -## Example Subgraphs +## Exemples de subgraphs -Here are some example Subgraphs for reference: +Voici quelques exemples de subgraphs pour référence : [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -240,44 +240,44 @@ Here are some example Subgraphs for reference: ## FAQ -### How does the beta work? +### Comment fonctionne la bêta ? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
+La prise en charge de NEAR est en version bêta, ce qui signifie qu'il peut y avoir des changements dans l'API alors que nous continuons à travailler sur l'amélioration de l'intégration. Veuillez envoyer un email à near@thegraph.com afin que nous puissions vous aider à construire des subgraphs NEAR et vous tenir au courant des derniers développements ! -### Can a Subgraph index both NEAR and EVM chains? +### Un subgraph peut-il indexer simultanément les blockchains NEAR et EVM ? No, a Subgraph can only support data sources from one chain/network. -### Can Subgraphs react to more specific triggers? +### Les subgraphs peuvent-ils réagir à des déclencheurs plus spécifiques ? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +Actuellement, seuls les déclencheurs de type Block et Receipt sont pris en charge. Nous étudions les déclencheurs pour les appels de fonction à un compte spécifié. Nous souhaitons également prendre en charge les déclencheurs d'événements, une fois que NEAR disposera d'un support natif pour les événements. -### Will receipt handlers trigger for accounts and their sub-accounts? +### Les gestionnaires de reçus se déclencheront-ils pour les comptes et leurs sous-comptes ? If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: ```yaml -accounts: +accounts: suffixes: - mintbase1.near ``` -### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? +### Les subgraphs NEAR peuvent-ils faire des appels de vue aux comptes NEAR pendant les mappages ? -This is not supported.
We are evaluating whether this functionality is required for indexing. +Cette fonction n'est pas prise en charge. Nous sommes en train d'évaluer si cette fonctionnalité est nécessaire pour l'indexation. -### Can I use data source templates in my NEAR Subgraph? +### Puis-je utiliser des modèles de sources de données dans mon subgraph NEAR ? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +Ceci n'est actuellement pas pris en charge. Nous évaluons si cette fonctionnalité est requise pour l'indexation. -### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? +### Les subgraphs Ethereum prennent en charge les versions "en attente" (pending) et "actuelles" (current), comment puis-je déployer une version "en attente" d'un subgraph NEAR ? -Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. +La fonctionnalité "en attente" n'est pas encore prise en charge pour les subgraphs NEAR. Dans l'intervalle, vous pouvez déployer une nouvelle version vers un autre subgraph "nommé", puis, lorsque celui-ci est synchronisé avec la tête de chaîne, redéployer vers votre subgraph "nommé" principal, qui utilisera le même ID de déploiement sous-jacent, de sorte que le subgraph principal sera instantanément synchronisé. -### My question hasn't been answered, where can I get more help building NEAR Subgraphs? +### Ma question n'a pas reçu de réponse, où puis-je obtenir plus d'aide pour construire des subgraphs NEAR ? -If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/).
Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +S'il s'agit d'une question générale sur le développement de Subgraph, il y a beaucoup plus d'informations dans le reste de la [Documentation du développeur](/subgraphs/quick-start/). Sinon, rejoignez le [Discord de The Graph Protocol](https://discord.gg/graphprotocol) et posez votre question dans le canal #near ou envoyez un email à near@thegraph.com. -## References +## Références - [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/fr/subgraphs/guides/polymarket.mdx b/website/src/pages/fr/subgraphs/guides/polymarket.mdx index 74efe387b0d7..f19f6c7aef53 100644 --- a/website/src/pages/fr/subgraphs/guides/polymarket.mdx +++ b/website/src/pages/fr/subgraphs/guides/polymarket.mdx @@ -1,23 +1,23 @@ --- -title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph -sidebarTitle: Query Polymarket Data +title: Interroger les données de la blockchain à partir de Polymarket avec des subgraphs sur The Graph +sidebarTitle: Interroger les données Polymarket --- -Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Interrogez les données onchain de Polymarket en utilisant GraphQL via des subgraphs sur The Graph Network. Les subgraphs sont des API décentralisées alimentées par The Graph, un protocole d'indexation et d'interrogation des données des blockchains. -## Polymarket Subgraph on Graph Explorer +## Subgraph Polymarket sur Graph Explorer -You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+Vous pouvez voir un terrain de jeu (playground) interactif pour les requêtes sur la [page du subgraph Polymarket sur The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), où vous pouvez tester n'importe quelle requête. -![Polymarket Playground](/img/Polymarket-playground.png) +![Terrain de jeux Polymarket](/img/Polymarket-playground.png) -## How to use the Visual Query Editor +## Comment utiliser l'éditeur visuel de requêtes -The visual query editor helps you test sample queries from your Subgraph. +L'éditeur visuel de requêtes vous aide à tester des exemples de requêtes à partir de votre subgraph. -You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. +Vous pouvez utiliser l'explorateur GraphiQL pour composer vos requêtes GraphQL en cliquant sur les champs souhaités. -### Example Query: Get the top 5 highest payouts from Polymarket +### Exemple de requête : Obtenir les 5 paiements les plus élevés de Polymarket ``` { @@ -30,7 +30,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on } ``` -### Example output +### Exemple de sortie ``` { @@ -71,41 +71,41 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on } ``` -## Polymarket's GraphQL Schema +## Schéma GraphQL de Polymarket -The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +Le schéma de ce subgraph est défini [ici dans le GitHub de Polymarket](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). -### Polymarket Subgraph Endpoint +### Endpoint du Subgraph Polymarket https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp -The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). 
+L'endpoint du subgraph Polymarket est disponible sur [Graph Explorer](https://thegraph.com/explorer). -![Polymarket Endpoint](/img/Polymarket-endpoint.png) +![Endpoint Polymarket](/img/Polymarket-endpoint.png) -## How to Get your own API Key +## Comment obtenir votre propre clé API -1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet -2. Go to https://thegraph.com/studio/apikeys/ to create an API key +1. Allez sur [https://thegraph.com/studio](http://thegraph.com/studio) et connectez votre portefeuille +2. Rendez-vous sur https://thegraph.com/studio/apikeys/ pour créer une clé API -You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +Vous pouvez utiliser cette clé API sur n'importe quel subgraph dans [Graph Explorer](https://thegraph.com/explorer), et elle n'est pas limitée à Polymarket. -100k queries per month are free which is perfect for your side project! +100k requêtes par mois sont gratuites, ce qui est parfait pour votre projet secondaire !
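À titre d'esquisse, voici comment construire une requête vers la passerelle avec votre clé API en JavaScript natif ; la clé (`VOTRE_CLE_API`) et la requête GraphQL sont des exemples fictifs, seul l'identifiant du subgraph Polymarket cité plus haut est réel :

```javascript
// Esquisse : construction d'une requête GraphQL vers l'endpoint de la
// passerelle. La clé API et la requête ci-dessous sont des exemples fictifs.
function buildSubgraphRequest(apiKey, subgraphId, query) {
  return {
    url: `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    },
  }
}

const { url, options } = buildSubgraphRequest(
  'VOTRE_CLE_API',
  'Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp',
  '{ exampleEntities(first: 5) { id } }', // requête fictive, à adapter au schéma
)
// L'appel réseau réel se ferait ensuite avec : fetch(url, options)
console.log(url)
```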
-## Additional Polymarket Subgraphs +## Subgraphs Polymarket additionnels - [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one) -- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) -- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one) -- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one) +- [Activité Polymarket Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) +- [Profits & Pertes Polymarket](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one) +- [Intérêts ouverts Polymarket](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one) -## How to Query with the API +## Comment interroger l'API -You can pass any GraphQL query to the Polymarket endpoint and receive data in json format. +Vous pouvez passer n'importe quelle requête GraphQL à l'endpoint Polymarket et recevoir des données au format JSON. -This following code example will return the exact same output as above. +L'exemple de code suivant renvoie exactement le même résultat que ci-dessus.
-### Sample Code from node.js +### Exemple de code node.js ``` const axios = require('axios'); @@ -127,22 +127,22 @@ const graphQLRequest = { }, }; -// Send the GraphQL query +// Envoi de la requête GraphQL axios(graphQLRequest) .then((response) => { - // Handle the response here + // Traiter la réponse ici const data = response.data.data console.log(data) }) .catch((error) => { - // Handle any errors + // Traiter les erreurs éventuelles console.error(error); }); ``` -### Additional resources +### Ressources complémentaires -For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). +Pour plus d'informations sur l'interrogation des données de votre subgraph, lisez [ici](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +Pour découvrir toutes les façons d'optimiser et de personnaliser votre subgraph pour obtenir de meilleures performances, lisez davantage sur [la création d'un subgraph ici](/developing/creating-a-subgraph/). diff --git a/website/src/pages/fr/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/fr/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..965146218bef 100644 --- a/website/src/pages/fr/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/fr/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -1,48 +1,48 @@ --- -title: How to Secure API Keys Using Next.js Server Components +title: Comment sécuriser les clés d'API en utilisant les composants serveur de Next.js --- -## Overview +## Aperçu -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp.
To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +Nous pouvons utiliser les [composants serveur de Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components) pour sécuriser correctement notre clé API et éviter qu'elle ne soit exposée dans le frontend de notre dapp. Pour renforcer la sécurité de notre clé API, nous pouvons également [restreindre notre clé API à certains subgraphs ou domaines dans Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. +Dans ce cookbook, nous allons voir comment créer un composant serveur Next.js qui interroge un subgraph tout en cachant la clé API du frontend. -### Caveats +### Mises en garde -- Next.js server components do not protect API keys from being drained using denial of service attacks. -- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. -- Next.js server components introduce centralization risks as the server can go down. +- Les composants serveur de Next.js ne protègent pas les clés API contre leur épuisement par des attaques de déni de service. +- Les passerelles de The Graph Network disposent de stratégies de détection et d'atténuation des attaques de déni de service, cependant, l'utilisation des composants serveur peut affaiblir ces protections. +- Les composants serveur de Next.js introduisent des risques de centralisation car le serveur peut tomber en panne. -### Why It's Needed +### Pourquoi est-ce nécessaire -In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk.
While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. +Dans une application React standard, les clés API incluses dans le code frontend peuvent être exposées du côté client, posant un risque de sécurité. Bien que les fichiers `.env` soient couramment utilisés, ils ne protègent pas complètement les clés car le code de React est exécuté côté client, exposant ainsi la clé API dans les headers. Les composants serveur Next.js résolvent ce problème en gérant les opérations sensibles côté serveur. -### Using client-side rendering to query a Subgraph +### Utiliser le rendu côté client pour interroger un subgraph -![Client-side rendering](/img/api-key-client-side-rendering.png) +![Rendu côté client](/img/api-key-client-side-rendering.png) -### Prerequisites +### Prérequis -- An API key from [Subgraph Studio](https://thegraph.com/studio) -- Basic knowledge of Next.js and React. -- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). +- Une clé API provenant de [Subgraph Studio](https://thegraph.com/studio) +- Une connaissance de base de Next.js et React. +- Un projet Next.js existant qui utilise l'[App Router](https://nextjs.org/docs/app). -## Step-by-Step Cookbook +## Guide étape par étape -### Step 1: Set Up Environment Variables +### Étape 1 : Configurer les variables d'environnement -1. In our Next.js project root, create a `.env.local` file. -2. Add our API key: `API_KEY=`. +1. À la racine de notre projet Next.js, créer un fichier `.env.local`. +2. Ajouter notre clé API : `API_KEY=`. -### Step 2: Create a Server Component +### Étape 2 : Créer un composant serveur -1. In our `components` directory, create a new file, `ServerComponent.js`. -2. Use the provided example code to set up the server component. +1.
Dans notre répertoire `components`, créer un nouveau fichier, `ServerComponent.js`. +2. Utiliser le code exemple fourni pour configurer le composant serveur. -### Step 3: Implement Server-Side API Request +### Étape 3 : Implémenter la requête API côté serveur -In `ServerComponent.js`, add the following code: +Dans `ServerComponent.js`, ajouter le code suivant : ```javascript const API_KEY = process.env.API_KEY @@ -95,10 +95,10 @@ export default async function ServerComponent() { } ``` -### Step 4: Use the Server Component +### Étape 4 : Utiliser le composant serveur -1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. -2. Render the component: +1. Dans notre fichier de page (par exemple, `pages/index.js`), importer `ServerComponent`. +2. Rendre le composant : ```javascript import ServerComponent from './components/ServerComponent' @@ -112,12 +112,12 @@ export default function Home() { } ``` -### Step 5: Run and Test Our Dapp +### Étape 5 : Lancer et tester notre Dapp -Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. +Démarrez notre application Next.js en utilisant `npm run dev`. Vérifiez que le composant serveur récupère les données sans exposer la clé API. -![Server-side rendering](/img/api-key-server-side-rendering.png) +![Rendu côté serveur](/img/api-key-server-side-rendering.png) ### Conclusion -By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. +En utilisant les composants serveur de Next.js, nous avons effectivement caché la clé API du côté client, améliorant ainsi la sécurité de notre application.
Cette méthode garantit que les opérations sensibles sont traitées côté serveur, à l'abri des vulnérabilités potentielles côté client. Enfin, n'oubliez pas d'explorer [d'autres mesures de sécurité des clés d'API](/subgraphs/querying/managing-api-keys/) pour renforcer encore davantage la sécurité de vos clés d'API. diff --git a/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..ccf1f043fcb1 --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Agrégation de données par composition de subgraphs +sidebarTitle: Construire un subgraph composable avec plusieurs subgraphs +--- + +Tirez parti de la composition de subgraphs pour accélérer le temps de développement. Créez un subgraph de base avec les données essentielles, puis construisez d'autres subgraphs par-dessus. + +Optimisez votre subgraph en fusionnant les données de subgraphs sources indépendants en un seul subgraph composable afin d'améliorer l'agrégation des données. + +## Présentation + +Les subgraphs composables vous permettent de combiner les sources de données de plusieurs subgraphs dans un nouveau subgraph, ce qui rend le développement plus rapide et plus flexible. La composition de subgraphs vous permet de créer et de maintenir des subgraphs plus petits et ciblés, qui forment collectivement un ensemble de données plus vaste et interconnecté. + +### Avantages de la composition + +La composition de subgraphs est une fonctionnalité puissante pour la mise à l'échelle, qui vous permet de : + +- Réutiliser, mélanger et combiner les données existantes +- Rationaliser le développement et les requêtes +- Utiliser plusieurs sources de données (jusqu'à cinq subgraphs sources) +- Accélérer la vitesse de synchronisation de votre subgraph +- Gérer les erreurs et optimiser la resynchronisation + +## Aperçu de l'architecture + +La configuration de cet exemple implique deux subgraphs : + +1.
**Subgraph source** : Suit les données d'événements en tant qu'entités. +2. **Subgraph dépendant** : Utilise le subgraph source comme source de données. + +Vous pouvez les trouver dans les répertoires `source` et `dependent`. + +- Le **Subgraph Source** est un subgraph de base de suivi des événements qui enregistre les événements émis par les contrats concernés. +- Le **subgraph dépendant** fait référence au subgraph source en tant que source de données, en utilisant les entités de la source comme déclencheurs. + +Alors que le subgraph source est un subgraph standard, le subgraph dépendant utilise la fonction de composition de subgraphs. + +## Prérequis + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e.
you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## Commencer + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Spécificités + +- Pour que cet exemple reste simple, tous les subgraphs sources n'utilisent que des gestionnaires de blocs. Cependant, dans un environnement réel, chaque subgraph source utilisera des données provenant de différents contrats intelligents. +- Les exemples ci-dessous montrent comment importer et étendre le schéma d'un autre subgraph afin d'en améliorer les fonctionnalités. +- Chaque subgraph source est optimisé avec une entité spécifique. +- Toutes les commandes listées installent les dépendances nécessaires, génèrent du code basé sur le schéma GraphQL, construisent le subgraph et le déploient sur votre instance locale de Graph Node. + +### Étape 1. Déployer le subgraph source de temps de bloc + +Ce premier subgraph source calcule le temps de bloc pour chaque bloc. + +- Il importe des schémas d'autres subgraphs et ajoute une entité `block` avec un champ `timestamp`, représentant l'heure à laquelle chaque bloc a été extrait. +- Il écoute les événements de la blockchain liés au temps (par exemple, les horodatages des blocs) et traite ces données pour mettre à jour les entités du subgraph en conséquence. + +Pour déployer ce subgraph localement, exécutez les commandes suivantes : + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Étape 2. Déployer le subgraph source de coût de bloc + +Ce deuxième subgraph source indexe le coût de chaque bloc.
+ +#### Principales fonctions + +- Il importe des schémas d'autres subgraphs et ajoute une entité `block` avec des champs liés aux coûts. +- Il écoute les événements de la blockchain liés aux coûts (par exemple, les frais de gaz, les coûts de transaction) et traite ces données pour mettre à jour les entités du subgraph en conséquence. + +Pour déployer ce subgraph localement, exécutez les mêmes commandes que ci-dessus. + +### Étape 3. Définition de la taille des blocs dans le subgraph source + +Ce troisième subgraph source indexe la taille de chaque bloc. Pour déployer ce subgraph localement, exécutez les mêmes commandes que ci-dessus. + +#### Principales fonctions + +- Il importe les schémas existants des autres subgraphs et ajoute une entité `block` avec un champ `size` représentant la taille de chaque bloc. +- Il écoute les événements de la blockchain liés à la taille des blocs (par exemple, le stockage ou le volume) et traite ces données pour mettre à jour les entités du subgraph en conséquence. + +### Étape 4. Combinaison en Subgraph Block Stats + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Toute modification apportée à un subgraph source est susceptible de générer un nouvel ID de déploiement. +> - Veillez à mettre à jour l'ID de déploiement dans l'adresse de la source de données du manifeste Subgraph pour bénéficier des dernières modifications. +> - Tous les subgraphs sources doivent être déployés avant le déploiement du subgraph composé. + +#### Principales fonctions + +- Il fournit un modèle de données consolidé qui englobe toutes les mesures de bloc pertinentes. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. 
+ +## Principaux points à retenir + +- Cet outil puissant vous permettra de développer vos subgraphs et de combiner plusieurs subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- Cette fonctionnalité débloque l'évolutivité et améliore ainsi l'efficacité du développement et de la maintenance. + +## Ressources supplémentaires + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- Pour ajouter des fonctionnalités avancées à votre subgraph, consultez [Fonctionnalités avancées du subgraph](/developing/creating/advanced/). +- Pour en savoir plus sur les agrégations, consultez [Séries chronologiques et agrégations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/fr/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/fr/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..37a9815532d3 100644 --- a/website/src/pages/fr/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/fr/subgraphs/guides/subgraph-debug-forking.mdx @@ -1,26 +1,26 @@ --- -title: Quick and Easy Subgraph Debugging Using Forks +title: Débogage rapide et facile des subgraphs à l'aide de Forks --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
+Comme pour de nombreux systèmes traitant de grandes quantités de données, les Indexeurs de The Graph (Graph Nodes) peuvent prendre un certain temps pour synchroniser votre subgraph avec la blockchain cible. L'écart entre les changements rapides dans le but de déboguer et les longs temps d'attente nécessaires à l'indexation est extrêmement contre-productif et nous en sommes bien conscients. C'est pourquoi nous introduisons **Subgraph forking**, développé par [LimeChain](https://limechain.tech/), et dans cet article je vous montrerai comment cette fonctionnalité peut être utilisée pour accélérer considérablement le débogage du subgraph ! -## Ok, what is it? +## D'accord, qu'est-ce que c'est ? -**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). +**Le Subgraph forking** est le processus de récupération paresseuse d'entités à partir du store d'un autre subgraph (généralement un store distant). -In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. +Dans le contexte du débogage, **Subgraph forking** vous permet de déboguer votre subgraph défaillant au bloc _X_ sans avoir besoin d'attendre la synchronisation au bloc _X_. -## What?! How? +## Quoi ? Comment ? -When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +Lorsque vous déployez un subgraph vers un Graph Node distant pour l'indexation et qu'il échoue au bloc _X_, la bonne nouvelle est que le Graph Node servira toujours les requêtes GraphQL à l'aide de son store, qui est synchronisé avec le bloc _X_. C'est formidable ! 
Cela signifie que nous pouvons tirer parti de ce store "à jour" pour corriger les bugs survenant lors de l'indexation du bloc _X_. -In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. +En bref, nous allons _forker le subgraph défaillant_ à partir d'un Graph Node distant qui est garanti d'avoir le subgraph indexé jusqu'au bloc _X_ afin de fournir au subgraph déployé localement et débogué au bloc _X_ une vue à jour de l'état de l'indexation. -## Please, show me some code! +## S'il vous plaît, montrez-moi du code ! -To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +Pour rester concentré sur le débogage des subgraphs, gardons les choses simples et exécutons le [Subgraph d'exemple](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexant le contrat intelligent Ethereum Gravity. 
-Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: +Voici les gestionnaires définis pour indexer `Gravatar`s, sans aucun bug : ```tsx export function handleNewGravatar(event: NewGravatar): void { @@ -34,7 +34,7 @@ export function handleNewGravatar(event: NewGravatar): void { export function handleUpdatedGravatar(event: UpdatedGravatar): void { let gravatar = Gravatar.load(event.params.id.toI32().toString()) if (gravatar == null) { - log.critical('Gravatar not found!', []) + log.critical('Gravatar not found!', []) return } gravatar.owner = event.params.owner @@ -44,58 +44,58 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oups, comme c'est malheureux, quand je déploie mon subgraph parfait dans [Subgraph Studio](https://thegraph.com/studio/), il échoue avec l'erreur _"Gravatar not found!"_. -The usual way to attempt a fix is: +La méthode habituelle pour tenter de résoudre le problème est la suivante : -1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). -2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +1. Apportez une modification à la source des mappages, ce qui, selon vous, résoudra le problème (même si je sais que ce ne sera pas le cas). +2. Redéployer le subgraph vers [Subgraph Studio](https://thegraph.com/studio/) (ou un autre Graph Node distant). +3. Attendez qu’il soit synchronisé. +4. S'il se casse à nouveau, revenez au point 1, sinon : Hourra ! -It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3.
Wait for it to sync-up._ +Il s'agit en fait d'un processus de débogage assez ordinaire, mais il y a une étape qui le ralentit terriblement : _3. Attendez qu'il se synchronise._ -Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: +L'utilisation du **Subgraph forking** permet d'éliminer cette étape. Voici à quoi cela ressemble : -0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +0. Démarrer un Graph Node local avec la **_fork-base appropriée_** configurée. +1. Apportez une modification à la source des mappings qui, selon vous, résoudra le problème. +2. Déployer sur le Graph Node local, en **_forkant le Subgraph défaillant_** et **_en partant du bloc problématique_**. +3. S'il casse à nouveau, revenez à 1, sinon : Hourra ! -Now, you may have 2 questions: +Maintenant, vous pouvez avoir 2 questions : -1. fork-base what??? -2. Forking who?! +1. fork-base quoi ??? +2. Forker qui ?! -And I answer: +Je réponds : -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +1. `fork-base` est l'URL "de base", de sorte que lorsque l'_id_ du subgraph est ajouté, l'URL résultante (`/`) est un endpoint GraphQL valide pour le store du subgraph. +2. Forker est facile, pas besoin de transpirer : ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+N'oubliez pas non plus de définir le champ `dataSources.source.startBlock` dans le manifeste Subgraph au numéro du bloc problématique, afin d'éviter d'indexer des blocs inutiles et de profiter du fork ! -So, here is what I do: +Voici donc ce que je fais : -1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. Je démarre un Graph Node local ([voici comment faire](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) avec l'option `fork-base` fixée à : `https://api.thegraph.com/subgraphs/id/`, puisque je vais forker un subgraph, le subgraph bogué que j'ai déployé plus tôt, à partir de [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ - --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \ - --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \ - --ipfs 127.0.0.1:5001 - --fork-base https://api.thegraph.com/subgraphs/id/ + --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \ + --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \ + --ipfs 127.0.0.1:5001 + --fork-base https://api.thegraph.com/subgraphs/id/ ``` -2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +2.
Après une inspection minutieuse, j'ai remarqué qu'il y avait un décalage dans les représentations `id` utilisées lors de l'indexation des `Gravatar`s dans mes deux handlers. Alors que `handleNewGravatar` le convertit en hexadécimal (`event.params.id.toHex()`), `handleUpdatedGravatar` utilise un int32 (`event.params.id.toI32()`) ce qui fait paniquer `handleUpdatedGravatar` avec "Gravatar not found!". Je fais en sorte qu'ils convertissent tous les deux l'`id` en hexadécimal. +3. Après avoir fait les changements, je déploie mon Subgraph sur le Graph Node local, en **_forkant le Subgraph défaillant_** et en configurant `dataSources.source.startBlock` à `6190343` dans `subgraph.yaml` : ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` -4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +4. J'inspecte les logs générés par le Graph Node local et, Hourra!, tout semble fonctionner. +5. Je déploie mon subgraph maintenant exempt de bugs vers un Graph Node distant et je vis heureux jusqu'à la fin des temps ! diff --git a/website/src/pages/fr/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/fr/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..60cb89d52da1 100644 --- a/website/src/pages/fr/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/fr/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,29 +1,29 @@ --- -title: Safe Subgraph Code Generator +title: Générateur de code de subgraph sécurisé --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) est un outil de génération de code qui génère un ensemble de fonctions d'aide à partir du schéma graphql d'un projet. Il garantit que toutes les interactions avec les entités de votre subgraph sont totalement sûres et cohérentes. -## Why integrate with Subgraph Uncrashable? +## Pourquoi intégrer Subgraph Uncrashable ? -- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. +- **Temps de fonctionnement continu**. Les entités mal gérées peuvent entraîner le plantage des subgraphs, ce qui peut perturber les projets qui dépendent de The Graph. Mettez en place des fonctions d'aide pour rendre vos subgraphs “incrashables” et assurer la continuité des activités. -- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Tout à fait sûr**. Les problèmes courants rencontrés dans le développement de subgraphs sont le chargement d'entités non définies, l'absence de définition ou d'initialisation de toutes les valeurs des entités et les conditions de course lors du chargement et de l'enregistrement des entités. Assurez-vous que toutes les interactions avec les entités sont complètement atomiques. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+- **Configurable par l'utilisateur** Définissez les valeurs par défaut et configurez le niveau des contrôles de sécurité en fonction des besoins de votre projet. Des logs d'avertissement sont enregistrés, indiquant les cas de violation de la logique du subgraph, afin d'aider à résoudre le problème et de garantir l'exactitude des données. -**Key Features** +**Caractéristiques principales** -- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- L'outil de génération de code prend en charge **tous** les types de subgraphs et est configurable pour que les utilisateurs puissent définir des valeurs par défaut saines. La génération de code utilisera cette configuration pour générer des fonctions d'aide conformes aux spécifications de l'utilisateur. -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. +- Le cadre comprend également un moyen (via le fichier de configuration) de créer des fonctions de définition personnalisées, mais sûres, pour des groupes de variables d'entité. De cette façon, il est impossible pour l'utilisateur de charger/utiliser une entité de graph obsolète et il est également impossible d'oublier de sauvegarder ou de définir une variable requise par la fonction. -- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
-Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. +Subgraph Uncrashable peut être exécuté en tant qu'option facultative à l'aide de la commande Graph CLI codegen. ```sh graph codegen -u [options] [] ``` -Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. +Visitez la [documentation sur les subgraphs incrashable](https://float-capital.github.io/float-subgraph-uncrashable/docs/) ou regardez ce [tutoriel vidéo](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) pour en savoir plus et commencer à développer des subgraphs plus sûrs. diff --git a/website/src/pages/fr/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/fr/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..0d51588d5ad4 100644 --- a/website/src/pages/fr/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/fr/subgraphs/guides/transfer-to-the-graph.mdx @@ -1,104 +1,104 @@ --- -title: Transfer to The Graph +title: Transfert à The Graph --- -Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Mettez rapidement à niveau vos subgraphs de n'importe quelle plate-forme vers [le réseau décentralisé de The Graph](https://thegraph.com/networks/). -## Benefits of Switching to The Graph +## Avantages du passage à The Graph -- Use the same Subgraph that your apps already use with zero-downtime migration. -- Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. +- Utilisez le même subgraph que vos applications utilisent déjà, grâce à une migration sans interruption.
+- Améliorez la fiabilité grâce à un réseau mondial pris en charge par plus de 100 Indexers. +- Bénéficiez d'une assistance rapide pour vos subgraphs 24h/24 et 7j/7, avec une équipe d'ingénieurs d'astreinte. -## Upgrade Your Subgraph to The Graph in 3 Easy Steps +## Mettez à niveau votre Subgraph vers The Graph en 3 étapes simples -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) -## 1. Set Up Your Studio Environment +## 1. Configurer votre environnement Studio -### Create a Subgraph in Subgraph Studio +### Créer un subgraph dans Subgraph Studio -- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Accédez à [Subgraph Studio](https://thegraph.com/studio/) et connectez votre portefeuille. - Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. +> Remarque : après la publication, le nom du subgraph sera modifiable, mais il nécessitera à chaque fois une action onchain, c'est pourquoi il faut le nommer correctement. -### Install the Graph CLI +### Installer Graph CLI -You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI.
Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +Vous devez avoir [Node.js](https://nodejs.org/) et un gestionnaire de paquets de votre choix (`npm` ou `pnpm`) installés pour utiliser Graph CLI. Vérifiez la version la [plus récente](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) de CLI. -On your local machine, run the following command: +Sur votre machine locale, exécutez la commande suivante : -Using [npm](https://www.npmjs.com/): +Utilisation de [npm](https://www.npmjs.com/) : ```sh npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a Subgraph in Studio using the CLI: +Utilisez la commande suivante pour créer un subgraph dans Studio à l'aide de la CLI : ```sh graph init --product subgraph-studio ``` -### Authenticate Your Subgraph +### Authentifiez votre subgraph -In The Graph CLI, use the auth command seen in Subgraph Studio: +Dans Graph CLI, utilisez la commande auth vue dans Subgraph Studio : ```sh graph auth ``` -## 2. Deploy Your Subgraph to Studio +## 2. Déployez votre Subgraph sur Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. +Si vous avez votre code source, vous pouvez facilement le déployer dans Studio. Si vous ne l'avez pas, voici un moyen rapide de déployer votre subgraph. -In The Graph CLI, run the following command: +Dans Graph CLI, exécutez la commande suivante : ```sh graph deploy --ipfs-hash ``` -> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Chaque subgraph a un hash IPFS (Deployment ID), qui ressemble à ceci : "Qmasdfad...". Pour déployer, il suffit d'utiliser cet **IPFS hash**.
Vous serez invité à entrer une version (par exemple, v0.0.1). -## 3. Publish Your Subgraph to The Graph Network +## 3. Publier votre Subgraph sur The Graph Network -![publish button](/img/publish-sub-transfer.png) +![bouton de publication](/img/publish-sub-transfer.png) -### Query Your Subgraph +### Interroger votre Subgraph -> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> Pour inciter environ 3 Indexeurs à interroger votre subgraph, il est recommandé de curer au moins 3 000 GRT. Pour en savoir plus sur la curation, consultez [Curation](/resources/roles/curating/) sur The Graph. -You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +Vous pouvez commencer à [interroger](/subgraphs/querying/introduction/) n'importe quel subgraph en envoyant une requête GraphQL dans l'endpoint URL de requête du subgraph, qui se trouve en haut de sa page d'exploration dans Subgraph Studio. 
-#### Example +#### Exemple -[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[Subgraph Ethereum CryptoPunks](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) par Messari : -![Query URL](/img/cryptopunks-screenshot-transfer.png) +![L'URL de requête](/img/cryptopunks-screenshot-transfer.png) -The query URL for this Subgraph is: +L'URL de la requête pour ce subgraph est la suivante : ```sh -https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK +https://gateway-arbitrum.network.thegraph.com/api/`**votre-propre-clé-Api**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK ``` -Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint. +Maintenant, il vous suffit de remplir **votre propre clé API** pour commencer à envoyer des requêtes GraphQL à ce point de terminaison. -### Getting your own API Key +### Obtenir votre propre clé API -You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page: +Vous pouvez créer des clés API dans Subgraph Studio sous le menu "API Keys" en haut de la page : -![API keys](/img/Api-keys-screenshot.png) +![clés API](/img/Api-keys-screenshot.png) -### Monitor Subgraph Status +### Surveiller l'état du Subgraph -Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Une fois la mise à niveau effectuée, vous pouvez accéder à vos subgraphs et les gérer dans [Subgraph Studio](https://thegraph.com/studio/) et explorer tous les subgraphs dans [The Graph Explorer](https://thegraph.com/networks/).
-### Additional Resources +### Ressources supplémentaires -- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +- Pour créer et publier rapidement un nouveau subgraph, consultez le [Démarrage Rapide](/subgraphs/quick-start/). +- Pour découvrir toutes les façons d'optimiser et de personnaliser votre subgraph pour obtenir de meilleures performances, lisez davantage sur [la création d'un subgraph ici](/developing/creating-a-subgraph/). diff --git a/website/src/pages/fr/subgraphs/querying/best-practices.mdx b/website/src/pages/fr/subgraphs/querying/best-practices.mdx index 7840723ca03d..9b5bddd7d439 100644 --- a/website/src/pages/fr/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/fr/subgraphs/querying/best-practices.mdx @@ -2,19 +2,19 @@ title: Bonnes pratiques d'interrogation --- -The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. +The Graph offre un moyen décentralisé d'interroger les données des blockchains. Ses données sont exposées par le biais d'une API GraphQL, ce qui facilite l'interrogation avec le langage GraphQL. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Apprenez les règles essentielles du langage GraphQL et les meilleures pratiques pour optimiser votre subgraph. --- ## Interroger une API GraphQL -### The Anatomy of a GraphQL Query +### Anatomie d'une requête GraphQL Contrairement à l'API REST, une API GraphQL repose sur un schéma qui définit les requêtes qui peuvent être effectuées. 
-For example, a query to get a token using the `token` query will look as follows: +Par exemple, une requête pour obtenir un jeton en utilisant la requête `token` ressemblera à ce qui suit : ```graphql query GetToken($id: ID!) { @@ -25,7 +25,7 @@ query GetToken($id: ID!) { } ``` -which will return the following predictable JSON response (_when passing the proper `$id` variable value_): +qui retournera la réponse JSON prévisible suivante (_en passant la bonne valeur de la variable `$id`_) : ```json { @@ -36,9 +36,9 @@ which will return the following predictable JSON response (_when passing the pro } ``` -GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/). +Les requêtes GraphQL utilisent le langage GraphQL, qui est défini dans [une spécification](https://spec.graphql.org/). -The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders): +La requête `GetToken` ci-dessus est composée de plusieurs parties de langage (remplacées ci-dessous par des espaces réservés `[...]`) : ```graphql query [operationName]([variableName]: [variableType]) { @@ -50,33 +50,33 @@ query [operationName]([variableName]: [variableType]) { } ``` -## Rules for Writing GraphQL Queries +## Règles d'écriture des requêtes GraphQL -- Each `queryName` must only be used once per operation. -- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`) -- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/). +- Chaque `queryName` ne doit être utilisé qu'une seule fois par opération.
+- Chaque `champ` ne doit être utilisé qu'une seule fois dans une sélection (nous ne pouvons pas interroger `id` deux fois sous `token`) +- Certains `champs` ou certaines requêtes (comme `tokens`) renvoient des types complexes qui nécessitent une sélection de sous-champs. Ne pas fournir de sélection quand cela est attendu (ou en fournir une quand cela n'est pas attendu - par exemple, sur `id`) lèvera une erreur. Pour connaître un type de champ, veuillez vous référer à [Graph Explorer](/subgraphs/explorer/). - Toute variable affectée à un argument doit correspondre à son type. - Dans une liste de variables donnée, chacune d’elles doit être unique. - Toutes les variables définies doivent être utilisées. -> Note: Failing to follow these rules will result in an error from The Graph API. +> Remarque : le non-respect de ces règles entraînera une erreur de la part de l'API de The Graph. -For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/). +Pour une liste complète des règles avec des exemples de code, consultez le [Guide des validations GraphQL](/resources/migration-guides/graphql-validations-migration-guide/). ### Envoi d'une requête à une API GraphQL -GraphQL is a language and set of conventions that transport over HTTP. +GraphQL est un langage et un ensemble de conventions qui se transportent sur HTTP. -It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). +Cela signifie que vous pouvez interroger une API GraphQL en utilisant `fetch` standard (nativement ou via `@whatwg-node/fetch` ou `isomorphic-fetch`).
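Pour illustrer ce transport HTTP, voici une esquisse minimale (l'endpoint passé en paramètre est un exemple fictif à remplacer par le vôtre) qui envoie la requête `GetToken` vue plus haut avec `fetch` standard : la requête et ses variables sont simplement sérialisées en JSON dans le corps d'une requête POST.

```javascript
// Esquisse minimale : interroger une API GraphQL avec fetch standard.
const GET_TOKEN_QUERY = `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

// GraphQL sur HTTP : la requête et ses variables voyagent
// dans un corps JSON envoyé en POST.
function buildGraphQLRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  }
}

// endpoint : l'URL de votre subgraph (exemple fictif ici)
async function fetchToken(endpoint, id) {
  const response = await fetch(endpoint, buildGraphQLRequest(GET_TOKEN_QUERY, { id }))
  const { data, errors } = await response.json()
  if (errors) throw new Error(errors[0].message)
  return data.token
}
```

Ce schéma requête/variables en JSON est exactement ce que les clients plus complets (ci-dessous) construisent pour vous.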
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: +Cependant, comme mentionné dans ["Interrogation à partir d'une application"](/subgraphs/querying/from-an-application/), il est recommandé d'utiliser `graph-client`, qui prend en charge les fonctionnalités uniques suivantes : -- Gestion des subgraphs inter-chaînes : interrogation à partir de plusieurs subgraphs en une seule requête -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Traitement des subgraphs multi-chaînes : Interrogation de plusieurs subgraphs en une seule requête +- [Suivi automatique des blocs](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [Pagination automatique](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Résultat entièrement typé -Here's how to query The Graph with `graph-client`: +Voici comment interroger The Graph avec `graph-client` : ```tsx import { execute } from '../.graphclient' @@ -100,7 +100,7 @@ async function main() { main() ``` -More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/). +D'autres alternatives de clients GraphQL sont abordées dans ["Requête à partir d'une application"](/subgraphs/querying/from-an-application/).
--- @@ -108,7 +108,7 @@ More GraphQL client alternatives are covered in ["Querying from an Application"] ### Écrivez toujours des requêtes statiques -A common (bad) practice is to dynamically build query strings as follows: +Une (mauvaise) pratique courante consiste à construire dynamiquement des chaînes de requête comme suit : ```tsx const id = params.id @@ -124,14 +124,14 @@ query GetToken { // Execute query... ``` -While the above snippet produces a valid GraphQL query, **it has many drawbacks**: +Bien que l'extrait ci-dessus produise une requête GraphQL valide, **il présente de nombreux inconvénients** : -- it makes it **harder to understand** the query as a whole -- developers are **responsible for safely sanitizing the string interpolation** -- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side** -- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools) +- cela rend **plus difficile la compréhension** de la requête dans son ensemble +- les développeurs sont **responsables de l'assainissement de l'interpolation de la chaîne de caractères** +- ne pas envoyer les valeurs des variables dans le cadre des paramètres de la requête **empêche une éventuelle mise en cache côté serveur** +- il **empêche les outils d'analyser statiquement la requête** (ex : Linter, ou les outils de génération de types) -For this reason, it is recommended to always write queries as static strings: +C'est pourquoi il est recommandé de toujours écrire les requêtes sous forme de chaînes de caractères statiques : ```tsx import { execute } from 'your-favorite-graphql-client' @@ -153,18 +153,18 @@ const result = await execute(query, { }) ``` -Doing so brings **many advantages**: +Cela présente de **nombreux avantages** : -- **Easy to read and maintain** queries -- The GraphQL **server handles variables sanitization** -- **Variables can be cached** at server-level -- **Queries can be
statically analyzed by tools** (more on this in the following sections) +- Des requêtes **faciles à lire et à maintenir** +- Le **serveur GraphQL s’occupe de la validation des variables** +- **Les variables peuvent être mises en cache** au niveau du serveur +- **Les requêtes peuvent être analysées statiquement par des outils** (plus d'informations à ce sujet dans les sections suivantes) -### How to include fields conditionally in static queries +### Comment inclure des champs de manière conditionnelle dans des requêtes statiques -You might want to include the `owner` field only on a particular condition. +Il se peut que vous souhaitiez inclure le champ `owner` uniquement sous une condition particulière. -For this, you can leverage the `@include(if:...)` directive as follows: +Pour cela, vous pouvez utiliser la directive `@include(if:...)` comme suit : ```tsx import { execute } from 'your-favorite-graphql-client' @@ -187,18 +187,18 @@ const result = await execute(query, { }) ``` -> Note: The opposite directive is `@skip(if: ...)`. +> Note : La directive opposée est `@skip(if: ...)`. -### Ask for what you want +### Demandez ce que vous voulez -GraphQL became famous for its "Ask for what you want" tagline. +GraphQL est devenu célèbre grâce à son slogan "Ask for what you want" (demandez ce que vous voulez). -For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. +Pour cette raison, il n'existe aucun moyen, dans GraphQL, d'obtenir tous les champs disponibles sans avoir à les lister individuellement. - Lorsque vous interrogez les API GraphQL, pensez toujours à interroger uniquement les champs qui seront réellement utilisés. -- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user.
This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- Assurez-vous que les requêtes ne récupèrent que le nombre d'entités dont vous avez réellement besoin. Par défaut, les requêtes récupèrent 100 entités dans une collection, ce qui est généralement beaucoup plus que ce qui sera réellement utilisé, par exemple pour l'affichage à l'utilisateur. Cela s'applique non seulement aux collections de premier niveau d'une requête, mais plus encore aux collections imbriquées d'entités. -For example, in the following query: +Par exemple, dans la requête suivante : ```graphql query listTokens { @@ -213,15 +213,15 @@ query listTokens { } ``` -The response could contain 100 transactions for each of the 100 tokens. +La réponse pourrait contenir 100 transactions pour chacun des 100 jetons. -If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field. +Si l'application n'a besoin que de 10 transactions, la requête doit explicitement définir `first: 10` dans le champ transactions. -### Use a single query to request multiple records +### Utiliser une seule requête pour demander plusieurs enregistrements -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +Par défaut, les subgraphs ont une entité singulière pour un enregistrement. 
Pour plusieurs enregistrements, utilisez les entités plurielles et le filtre : `where: {id_in:[X,Y,Z]}` ou `where: {volume_gt:100000}` -Example of inefficient querying: +Exemple de requête inefficace : ```graphql query SingleRecord { @@ -238,7 +238,7 @@ query SingleRecord { } ``` -Example of optimized querying: +Exemple de requête optimisée : ```graphql query ManyRecords { @@ -249,9 +249,9 @@ query ManyRecords { } ``` -### Combine multiple queries in a single request +### Combiner plusieurs requêtes en une seule -Your application might require querying multiple types of data as follows: +Votre application peut nécessiter l'interrogation de plusieurs types de données, comme suit : ```graphql import { execute } from "your-favorite-graphql-client" @@ -281,9 +281,9 @@ const [tokens, counters] = Promise.all( ) ``` -While this implementation is totally valid, it will require two round trips with the GraphQL API. +Bien que cette mise en œuvre soit tout à fait valable, elle nécessitera deux allers-retours avec l'API GraphQL. -Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows: +Heureusement, il est également possible d'envoyer plusieurs requêtes dans la même requête GraphQL, comme suit : ```graphql import { execute } from "your-favorite-graphql-client" @@ -304,13 +304,13 @@ query GetTokensandCounters { const { result: { tokens, counters } } = execute(query) ``` -This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**. +Cette approche **améliore les performances globales** en réduisant le temps passé sur le réseau (vous évite un aller-retour vers l'API) et fournit une **mise en œuvre plus concise**. ### Tirer parti des fragments GraphQL -A helpful feature to write GraphQL queries is GraphQL Fragment. +Une fonctionnalité utile pour écrire des requêtes GraphQL est GraphQL Fragment. 
-Looking at the following query, you will notice that some fields are repeated across multiple Selection-Sets (`{ ... }`): +En regardant la requête suivante, vous remarquerez que certains champs sont répétés dans plusieurs Ensembles de sélection (`{ ... }`) : ```graphql query { @@ -330,12 +330,12 @@ query { } ``` -Such repeated fields (`id`, `active`, `status`) bring many issues: +Ces champs répétés (`id`, `active`, `status`) posent de nombreux problèmes : -- More extensive queries become harder to read. -- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- Les requêtes plus longues deviennent plus difficiles à lire. +- Lorsque l'on utilise des outils qui génèrent des types TypeScript basés sur des requêtes (_plus d'informations à ce sujet dans la dernière section_), `newDelegate` et `oldDelegate` donneront lieu à deux interfaces inline distinctes. -A refactored version of the query would be the following: +Une version remaniée de la requête serait la suivante : ```graphql query { @@ -359,15 +359,15 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. +L'utilisation de GraphQL `fragment` améliorera la lisibilité (en particulier à grande échelle) et permettra une meilleure génération de types TypeScript. -When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). +Lorsque l'on utilise l'outil de génération de types, la requête ci-dessus génère un type `DelegateItemFragment` approprié (_voir la dernière section "Outils"_).
### Bonnes pratiques et erreurs à éviter avec les fragments GraphQL ### La base du fragment doit être un type -A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: +Un fragment ne peut pas être basé sur un type non applicable, en bref, **sur un type n'ayant pas de champs** : ```graphql fragment MyFragment on BigInt { @@ -375,11 +375,11 @@ fragment MyFragment on BigInt { } ``` -`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. +`BigInt` est un **scalaire** (type natif "plain") qui ne peut pas être utilisé comme base d'un fragment. #### Comment diffuser un fragment -Fragments are defined on specific types and should be used accordingly in queries. +Les fragments sont définis pour des types spécifiques et doivent être utilisés en conséquence dans les requêtes. L'exemple: @@ -402,20 +402,20 @@ fragment VoteItem on Vote { } ``` -`newDelegate` and `oldDelegate` are of type `Transcoder`. +`newDelegate` et `oldDelegate` sont de type `Transcoder`. -It is not possible to spread a fragment of type `Vote` here. +Il n'est pas possible de diffuser un fragment de type `Vote` ici. #### Définir le Fragment comme une unité métier atomique de données -GraphQL `Fragment`s must be defined based on their usage. +Les `Fragment` GraphQL doivent être définis en fonction de leur utilisation. -For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. +Pour la plupart des cas d'utilisation, la définition d'un fragment par type (dans le cas de l'utilisation répétée de champs ou de la génération de types) est suffisante. -Here is a rule of thumb for using fragments: +Voici une règle empirique pour l'utilisation des fragments : -- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance: +- Lorsque des champs de même type sont répétés dans une requête, regroupez-les dans un `Fragment`. +- Lorsque des champs similaires mais différents se répètent, créez plusieurs fragments, par exemple : ```graphql # fragment de base (utilisé principalement pour les listes) @@ -438,51 +438,51 @@ fragment VoteWithPoll on Vote { --- -## The Essential Tools +## Les outils essentiels ### Explorateurs Web GraphQL -Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries. +Itérer sur des requêtes en les exécutant dans votre application peut s'avérer fastidieux. Pour cette raison, n'hésitez pas à utiliser [Graph Explorer](https://thegraph.com/explorer) pour tester vos requêtes avant de les ajouter à votre application. Graph Explorer vous fournira un terrain de jeu GraphQL préconfiguré pour tester vos requêtes. -If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql). +Si vous recherchez un moyen plus souple de déboguer/tester vos requêtes, d'autres outils web similaires sont disponibles, tels que [Altair](https://altairgraphql.dev/) et [GraphiQL](https://graphiql-online.com/graphiql). ### Linting GraphQL -In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools. +Afin de respecter les meilleures pratiques et les règles syntaxiques mentionnées ci-dessus, il est fortement recommandé d'utiliser les outils de workflow et d'IDE suivants.
**GraphQL ESLint** -[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort. +[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) vous aidera à rester au fait des meilleures pratiques GraphQL sans effort. -[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as: +[La configuration "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) permet d'appliquer des règles essentielles telles que : -- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type? -- `@graphql-eslint/no-unused variables`: should a given variable stay unused? +- `@graphql-eslint/fields-on-correct-type` : un champ est-il utilisé sur un type correct ? +- `@graphql-eslint/no-unused-variables` : une variable donnée doit-elle rester inutilisée ? - et plus ! -This will allow you to **catch errors without even testing queries** on the playground or running them in production! +Cela vous permettra de **détecter les erreurs sans même tester les requêtes** sur le terrain de jeu ou les exécuter en production !
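À titre indicatif, voici une esquisse de configuration ESLint (format `.eslintrc` hérité, à adapter à votre projet) qui applique cette config aux fichiers `.graphql` :

```json
{
  "overrides": [
    {
      "files": ["*.graphql"],
      "parser": "@graphql-eslint/eslint-plugin",
      "plugins": ["@graphql-eslint"],
      "extends": "plugin:@graphql-eslint/operations-recommended"
    }
  ]
}
```

Les documents GraphQL de votre projet sont alors lintés comme le reste de votre code, y compris en CI.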
### Plugins IDE -**VSCode and GraphQL** +**VSCode et GraphQL** -The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: +L'[extension GraphQL VSCode](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) est un excellent complément à votre workflow de développement pour obtenir : -- Syntax highlighting -- Autocomplete suggestions -- Validation against schema +- Coloration syntaxique +- Suggestions d'auto-complétion +- Validation par rapport au schéma - Snippets -- Go to definition for fragments and input types +- Aller à la définition des fragments et des types d'entrée -If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. +Si vous utilisez `graphql-eslint`, l'[extension ESLint VSCode](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) est indispensable pour visualiser correctement les erreurs et les avertissements dans votre code. -**WebStorm/Intellij and GraphQL** +**WebStorm/Intellij et GraphQL** -The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: +Le [plugin JS GraphQL](https://plugins.jetbrains.com/plugin/8097-graphql/) améliorera considérablement votre expérience lorsque vous travaillez avec GraphQL en fournissant : -- Syntax highlighting -- Autocomplete suggestions -- Validation against schema +- Coloration syntaxique +- Suggestions d'auto-complétion +- Validation par rapport au schéma - Snippets -For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features.
+Pour plus d'informations sur ce sujet, consultez l'[article WebStorm](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) qui présente toutes les principales fonctionnalités du plugin. diff --git a/website/src/pages/fr/subgraphs/querying/distributed-systems.mdx b/website/src/pages/fr/subgraphs/querying/distributed-systems.mdx index 7c1f4526f7dc..0aae8f7731eb 100644 --- a/website/src/pages/fr/subgraphs/querying/distributed-systems.mdx +++ b/website/src/pages/fr/subgraphs/querying/distributed-systems.mdx @@ -29,22 +29,22 @@ Il est difficile de raisonner sur les implications des systèmes distribués, ma ## Demande de données actualisées -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph fournit l'API `block: { number_gte: $minBlock }` qui assure que la réponse est pour un seul bloc égal ou supérieur à `$minBlock`. Si la requête est faite à une instance de `graph-node` et que le bloc min n'est pas encore synchronisé, `graph-node` retournera une erreur. Si `graph-node` a synchronisé le bloc min, il exécutera la réponse pour le dernier bloc. Si la requête est faite à une passerelle Edge & Node, la passerelle filtrera tous les Indexeurs qui n'ont pas encore synchronisé le bloc min et fera la requête pour le dernier bloc que l'Indexeur a synchronisé. -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. 
Here is an example: +Nous pouvons utiliser `number_gte` pour nous assurer que le temps ne recule jamais lors de l'interrogation des données dans une boucle. Voici un exemple : ```javascript -/// Updates the protocol.paused variable to the latest -/// known value in a loop by fetching it using The Graph. +/// Met à jour la variable protocol.paused avec la dernière valeur +/// connue dans une boucle en la récupérant à l'aide de The Graph. async function updateProtocolPaused() { - // It's ok to start with minBlock at 0. The query will be served - // using the latest block available. Setting minBlock to 0 is the - // same as leaving out that argument. + // Il n'y a pas de problème à commencer avec minBlock à 0. La requête sera servie + // en utilisant le dernier bloc disponible. Définir minBlock à 0 + // revient à ne pas utiliser cet argument. let minBlock = 0 for (;;) { - // Schedule a promise that will be ready once - // the next Ethereum block will likely be available. + // Programmer une promesse qui sera prête une fois que + // le prochain bloc Ethereum sera probablement disponible. const nextBlock = new Promise((f) => { setTimeout(f, 14000) }) @@ -65,10 +65,10 @@ async function updateProtocolPaused() { const response = await graphql(query, variables) minBlock = response._meta.block.number - // TODO: Do something with the response data here instead of logging it. + // TODO : Faire quelque chose avec les données de réponse ici au lieu de les journaliser. console.log(response.protocol.paused) - // Sleep to wait for the next block + // Mettre en pause en attendant le bloc suivant await nextBlock } } @@ -78,17 +78,17 @@ async function updateProtocolPaused() { Un autre cas d'utilisation est la récupération d'un grand ensemble ou, plus généralement, la récupération d'éléments liés entre plusieurs requêtes. Contrairement au cas des sondages (où la cohérence souhaitée était d'avancer dans le temps), la cohérence souhaitée est pour un seul point dans le temps.
-Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. +Ici, nous utiliserons l'argument `block: { hash: $blockHash }` afin de rattacher tous nos résultats au même bloc. ```javascript -/// Gets a list of domain names from a single block using pagination +/// Obtient une liste de noms de domaine à partir d'un seul bloc en utilisant la pagination async function getDomainNames() { - // Set a cap on the maximum number of items to pull. + // Fixer un plafond sur le nombre maximum d'éléments à récupérer. let pages = 5 const perPage = 1000 - // The first query will get the first page of results and also get the block - // hash so that the remainder of the queries are consistent with the first. + // La première requête obtiendra la première page de résultats ainsi que le hash du bloc + // afin que les autres requêtes soient cohérentes avec la première. const listDomainsQuery = ` query ListDomains($perPage: Int!) { domains(first: $perPage) { @@ -107,9 +107,9 @@ async function getDomainNames() { let blockHash = data._meta.block.hash let query - // Continue fetching additional pages until either we run into the limit of - // 5 pages total (specified above) or we know we have reached the last page - // because the page has fewer entities than a full page. + // Continuer à rechercher des pages supplémentaires jusqu'à ce que nous atteignions la limite de + // 5 pages au total (spécifiée ci-dessus) ou jusqu'à ce que nous sachions que nous avons atteint la dernière page + // parce que la page contient moins d'entités qu'une page complète.
while (data.domains.length == perPage && --pages) { let lastID = data.domains[data.domains.length - 1].id query = ` @@ -122,7 +122,7 @@ async function getDomainNames() { data = await graphql(query, { perPage, lastID, blockHash }) - // Accumulate domain names into the result + // Accumuler les noms de domaine dans le résultat for (domain of data.domains) { result.push(domain.name) } diff --git a/website/src/pages/fr/subgraphs/querying/from-an-application.mdx b/website/src/pages/fr/subgraphs/querying/from-an-application.mdx index d86768f27d33..d778cec92320 100644 --- a/website/src/pages/fr/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/fr/subgraphs/querying/from-an-application.mdx @@ -1,53 +1,54 @@ --- title: Interrogation à partir d'une application +sidebarTitle: Interroger à partir d'une application --- -Learn how to query The Graph from your application. +Apprenez à interroger The Graph à partir de votre application. -## Getting GraphQL Endpoints +## Obtenir des endpoints GraphQL -During the development process, you will receive a GraphQL API endpoint at two different stages: one for testing in Subgraph Studio, and another for making queries to The Graph Network in production. +Au cours du processus de développement, vous recevrez un endpoint d'API GraphQL à deux étapes différentes : l'un pour les tests dans Subgraph Studio, et l'autre pour effectuer des requêtes sur The Graph Network en production.
-### Subgraph Studio Endpoint +### Endpoint Subgraph Studio -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +Après avoir déployé votre subgraph dans [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), vous recevrez un endpoint qui ressemble à ceci : ``` https://api.studio.thegraph.com/query/// ``` -> This endpoint is intended for testing purposes **only** and is rate-limited. +> Cet endpoint est destiné à des fins de test **uniquement** et son débit est limité. -### The Graph Network Endpoint +### Endpoint de The Graph Network -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +Après avoir publié votre subgraph sur le réseau, vous recevrez un endpoint qui ressemble à ceci : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> Cet endpoint est destiné à une utilisation active sur le réseau. Il vous permet d'utiliser diverses bibliothèques client GraphQL pour interroger le subgraph et alimenter votre application en données indexées.
-## Using Popular GraphQL Clients +## Utilisation de clients GraphQL populaires ### Graph Client -The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: +The Graph fournit son propre client GraphQL, `graph-client`, qui prend en charge des fonctionnalités uniques telles que : -- Gestion des subgraphs inter-chaînes : interrogation à partir de plusieurs subgraphs en une seule requête -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Traitement des subgraphs multi-chaînes : Interrogation de plusieurs subgraphs en une seule requête +- [Suivi automatique des blocs](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [Pagination automatique](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Résultat entièrement typé -> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native. As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph. +> Remarque : `graph-client` est intégré à d'autres clients GraphQL populaires tels qu'Apollo et URQL, qui sont compatibles avec des environnements tels que React, Angular, Node.js et React Native. Par conséquent, l'utilisation de `graph-client` vous fournira une expérience améliorée pour travailler avec The Graph. 
-### Fetch Data with Graph Client +### Récupérer des données avec Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Voyons comment récupérer les données d'un subgraph avec `graph-client` : #### Étape 1 -Install The Graph Client CLI in your project: +Installez The Graph Client CLI dans votre projet : ```sh yarn add -D @graphprotocol/client-cli @@ -57,7 +58,7 @@ npm install --save-dev @graphprotocol/client-cli #### Étape 2 -Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file): +Définissez votre requête dans un fichier `.graphql` (ou en ligne dans votre fichier `.js` ou `.ts`) : ```graphql query ExampleQuery { @@ -86,7 +87,7 @@ query ExampleQuery { #### Étape 3 -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +Créez un fichier de configuration (appelé `.graphclientrc.yml`) et pointez vers vos endpoints GraphQL fournis par The Graph, par exemple : ```yaml # .graphclientrc.yml @@ -104,22 +105,22 @@ documents: - ./src/example-query.graphql ``` -#### Step 4 +#### Étape 4 -Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: +Exécutez la commande CLI suivante de The Graph Client pour générer un code JavaScript typé et prêt à l'emploi : ```sh graphclient build ``` -#### Step 5 +#### Étape 5 -Update your `.ts` file to use the generated typed GraphQL documents: +Mettez à jour votre fichier `.ts` pour utiliser les documents GraphQL typés générés : ```tsx import React, { useEffect } from 'react' // ... -// we import types and typed-graphql document from the generated code (`..graphclient/`) +// nous importons les types et le document typed-graphql du code généré (`..graphclient/`) import { ExampleQueryDocument, ExampleQueryQuery, execute } from '../.graphclient' function App() {
logo -

Graph Client Example

+

Exemple de Graph Client

{data && (
@@ -152,27 +153,27 @@ function App() { export default App ``` -> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +> **Note importante:** `graph-client` est parfaitement intégré avec d'autres clients GraphQL tels que Apollo client, URQL, ou React Query ; vous pouvez [trouver des exemples dans le dépôt officiel](https://github.com/graphprotocol/graph-client/tree/main/examples). Cependant, si vous choisissez d'aller avec un autre client, gardez à l'esprit que **vous ne serez pas en mesure d'utiliser Cross-chain Subgraph Handling (La manipulation cross-chain des subgraphs) ou Automatic Pagination (La pagination automatique), qui sont des fonctionnalités essentielles pour interroger The Graph**. -### Apollo Client +### Le client Apollo -[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android. +[Apollo client] (https://www.apollographql.com/docs/) est un client GraphQL commun sur les écosystèmes front-end. Il est disponible pour React, Angular, Vue, Ember, iOS et Android. 
-Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL:
+Bien qu'il s'agisse du client le plus lourd, il possède de nombreuses fonctionnalités permettant de construire des interfaces utilisateur avancées sur GraphQL :

-- Advanced error handling
+- Gestion avancée des erreurs
- Pagination
-- Data prefetching
-- Optimistic UI
-- Local state management
+- Pré-récupération des données
+- UI optimiste
+- Gestion de l'état local

-### Fetch Data with Apollo Client
+### Récupérer des données avec Apollo Client

-Let's look at how to fetch data from a subgraph with Apollo client:
+Voyons comment récupérer les données d'un subgraph avec le client Apollo :

#### Étape 1

-Install `@apollo/client` and `graphql`:
+Installez `@apollo/client` et `graphql` :

```sh
npm install @apollo/client graphql
@@ -180,7 +181,7 @@ npm install @apollo/client graphql

#### Étape 2

-Query the API with the following code:
+Interrogez l'API avec le code suivant :

```javascript
import { ApolloClient, InMemoryCache, gql } from '@apollo/client'
@@ -215,7 +216,7 @@ client

#### Étape 3

-To use variables, you can pass in a `variables` argument to the query:
+Pour utiliser des variables, vous pouvez passer un argument `variables` à la requête :

```javascript
const tokensQuery = `
@@ -246,22 +247,22 @@ client
})
```

-### URQL Overview
+### Vue d'ensemble d'URQL

-[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features:
+[URQL](https://formidable.com/open-source/urql/) est disponible dans les environnements Node.js, React/Preact, Vue et Svelte, avec des fonctionnalités plus avancées :

- Système de cache flexible
- Conception extensible (facilitant l’ajout de nouvelles fonctionnalités par-dessus)
- Offre légère (~ 5 fois plus légère que Apollo Client)
- Prise en charge des téléchargements de fichiers et du mode hors ligne

-### Fetch data with URQL
+### Récupérer des données 
avec URQL

-Let's look at how to fetch data from a subgraph with URQL:
+Voyons comment récupérer des données d'un subgraph avec URQL :

#### Étape 1

-Install `urql` and `graphql`:
+Installez `urql` et `graphql` :

```sh
npm install urql graphql
@@ -269,7 +270,7 @@ npm install urql graphql

#### Étape 2

-Query the API with the following code:
+Interrogez l'API avec le code suivant :

```javascript
import { createClient } from 'urql'
diff --git a/website/src/pages/fr/subgraphs/querying/graph-client/README.md b/website/src/pages/fr/subgraphs/querying/graph-client/README.md
index 416cadc13c6f..394465ec1712 100644
--- a/website/src/pages/fr/subgraphs/querying/graph-client/README.md
+++ b/website/src/pages/fr/subgraphs/querying/graph-client/README.md
@@ -1,44 +1,44 @@
-# The Graph Client Tools
+# Les outils de The Graph Client

-This repo is the home for [The Graph](https://thegraph.com) consumer-side tools (for both browser and NodeJS environments).
+Ce dépôt abrite les outils côté consommateur de [The Graph](https://thegraph.com) (pour les environnements navigateur et NodeJS).

-## Background
+## Contexte

-The tools provided in this repo are intended to enrich and extend the DX, and add the additional layer required for dApps in order to implement distributed applications.
+Les outils fournis dans ce dépôt sont destinés à enrichir et à étendre la DX, et à ajouter la couche supplémentaire requise pour les dApps afin de mettre en œuvre des applications distribuées.

-Developers who consume data from [The Graph](https://thegraph.com) GraphQL API often need peripherals for making data consumption easier, and also tools that allow using multiple indexers at the same time.
+Les développeurs qui consomment des données à partir de l'API GraphQL de [The Graph](https://thegraph.com) ont souvent besoin d'outils périphériques pour faciliter la consommation des données, ainsi que d'outils permettant d'utiliser plusieurs Indexeurs en même temps. 
-## Features and Goals +## Fonctionnalités et objectifs -This library is intended to simplify the network aspect of data consumption for dApps. The tools provided within this repository are intended to run at build time, in order to make execution faster and performant at runtime. +Cette bibliothèque est destinée à simplifier l'aspect réseau de la consommation de données pour les dApps. Les outils fournis dans ce dépôt sont destinés à être exécutés au moment de la construction, afin de rendre l'exécution plus rapide et plus performante au moment de l'exécution. -> The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! +> Les outils fournis dans ce repo peuvent être utilisés de manière autonome, mais vous pouvez également les utiliser avec n'importe quel client GraphQL existant ! -| Status | Feature | Notes | -| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| 
✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
-| ✅ | [`@live` queries](./live.md) | Based on polling |
+| Statut | Fonctionnalité | Notes |
+| :----: | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
+| ✅ | Indexeurs multiples | sur la base de stratégies d'extraction |
+| ✅ | Stratégies d'extraction | timeout, retry, fallback, race, highestValue |
+| ✅ | Validations et optimisations au moment de la construction | |
+| ✅ | Composition côté client | avec un planificateur d'exécution amélioré (basé sur GraphQL-Mesh) |
+| ✅ | Gestion des subgraphs multi-chaînes | Utiliser des subgraphs similaires comme source unique |
+| ✅ | Exécution brute (mode autonome) | sans client GraphQL enveloppant |
+| ✅ | Mutations locales (côté client) | |
+| ✅ | [Suivi automatique des blocs](../packages/block-tracking/README.md) | suivi des numéros de blocs [comme décrit ici](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
+| ✅ | [Pagination automatique](../packages/auto-pagination/README.md) | effectuer plusieurs requêtes en un seul appel pour récupérer plus que la limite de l'Indexeur |
+| ✅ | Intégration avec `@apollo/client` | |
+| ✅ | Intégration avec `urql` | |
+| ✅ | Prise en charge de TypeScript | avec GraphQL Codegen et `TypedDocumentNode` intégrés |
+| ✅ | [Requêtes `@live`](./live.md) | Basées sur le polling |

-> You can find an [extended architecture design here](./architecture.md)
+> Vous pouvez trouver un [modèle d'architecture étendu ici](./architecture.md)

-## Getting Started
+## Introduction

-You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client:
+Vous pouvez suivre [l'épisode 45 de `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) pour en savoir plus 
sur Graph Client :

[![GraphQL.wtf Episode 45](https://img.youtube.com/vi/ZsRAmyUtvwg/0.jpg)](https://graphql.wtf/episodes/45-the-graph-client)

-To get started, make sure to install [The Graph Client CLI] in your project:
+Pour commencer, assurez-vous d'installer [The Graph Client CLI] dans votre projet :

```sh
yarn add -D @graphprotocol/client-cli
@@ -46,9 +46,9 @@ yarn add -D @graphprotocol/client-cli
npm install --save-dev @graphprotocol/client-cli
```

-> The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app!
+> La CLI est installée en tant que dépendance de développement puisque nous l'utilisons pour produire des artefacts d'exécution optimisés qui peuvent être chargés directement à partir de votre application !

-Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example:
+Créez un fichier de configuration (appelé `.graphclientrc.yml`) et pointez vers vos endpoints GraphQL fournis par The Graph, par exemple :

```yml
# .graphclientrc.yml
@@ -59,15 +59,15 @@ sources:
endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
```

-Now, create a runtime artifact by running The Graph Client CLI:
+Maintenant, créez un artefact d'exécution en exécutant The Graph Client CLI :

```sh
graphclient build
```

-> Note: you need to run this with `yarn` prefix, or add that as a script in your `package.json`.
+> Note : vous devez exécuter ceci avec le préfixe `yarn`, ou l'ajouter comme script dans votre `package.json`. 
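Par exemple, un script de `package.json` pourrait ressembler à ceci (esquisse, nom de script hypothétique) :

```json
{
  "scripts": {
    "graphclient:build": "graphclient build"
  }
}
```

Vous lancez ensuite `yarn graphclient:build` (ou `npm run graphclient:build`) sans vous soucier du préfixe.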
-This should produce a ready-to-use standalone `execute` function, that you can use for running your application GraphQL operations, you should have an output similar to the following:
+Cela devrait produire une fonction autonome `execute` prête à l'emploi, que vous pouvez utiliser pour exécuter les opérations GraphQL de votre application ; vous devriez obtenir une sortie similaire à la suivante :

```sh
GraphClient: Cleaning existing artifacts
@@ -80,7 +80,7 @@ GraphClient: Reading the configuration
🕸️: Done! => .graphclient
```

-Now, the `.graphclient` artifact is generated for you, and you can import it directly from your code, and run your queries:
+Maintenant, l'artefact `.graphclient` est généré pour vous, et vous pouvez l'importer directement depuis votre code et lancer vos requêtes :

```ts
import { execute } from '../.graphclient'
@@ -111,54 +111,54 @@ async function main() {
main()
```

-### Using Vanilla JavaScript Instead of TypeScript
+### Utiliser Vanilla JavaScript au lieu de TypeScript

-GraphClient CLI generates the client artifacts as TypeScript files by default, but you can configure CLI to generate JavaScript and JSON files together with additional TypeScript definition files by using `--fileType js` or `--fileType json`.
+GraphClient CLI génère par défaut les artefacts du client sous forme de fichiers TypeScript, mais vous pouvez configurer la CLI pour générer des fichiers JavaScript et JSON ainsi que des fichiers de définition TypeScript supplémentaires en utilisant `--fileType js` ou `--fileType json`.

-`js` flag generates all files as JavaScript files with ESM Syntax and `json` flag generates source artifacts as JSON files while entrypoint JavaScript file with old CommonJS syntax because only CommonJS supports JSON files as modules. 
+L'option `js` génère tous les fichiers en tant que fichiers JavaScript avec la syntaxe ESM, et l'option `json` génère les artefacts source en tant que fichiers JSON tandis que le fichier JavaScript du point d'entrée utilise l'ancienne syntaxe CommonJS, car seul CommonJS prend en charge les fichiers JSON en tant que modules.

-Unless you use CommonJS(`require`) specifically, we'd recommend you to use `js` flag.
+À moins que vous n'utilisiez spécifiquement CommonJS (`require`), nous vous recommandons d'utiliser l'option `js`.

`graphclient --fileType js`

-- [An example for JavaScript usage in CommonJS syntax with JSON files](../examples/javascript-cjs)
-- [An example for JavaScript usage in ESM syntax](../examples/javascript-esm)
+- [Un exemple d'utilisation de JavaScript dans la syntaxe CommonJS avec des fichiers JSON](../examples/javascript-cjs)
+- [Un exemple d'utilisation de JavaScript dans la syntaxe ESM](../examples/javascript-esm)

-#### The Graph Client DevTools
+#### Les DevTools de The Graph Client

-The Graph Client CLI comes with a built-in GraphiQL, so you can experiment with queries in real-time.
+La CLI de The Graph Client est dotée d'une interface GraphiQL intégrée, ce qui vous permet d'expérimenter des requêtes en temps réel.

-The GraphQL schema served in that environment, is the eventual schema based on all composed Subgraphs and transformations you applied.
+Le schéma GraphQL servi dans cet environnement est le schéma final basé sur tous les subgraphs composés et les transformations que vous avez appliquées.

-To start the DevTool GraphiQL, run the following command:
+Pour lancer le DevTool GraphiQL, exécutez la commande suivante :

```sh
graphclient serve-dev
```

-And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳
+Et ouvrez http://localhost:4000/ pour utiliser GraphiQL. Vous pouvez maintenant expérimenter votre schéma GraphQL côté client localement ! 
🥳

-#### Examples
+#### Exemples

-You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples:
+Vous pouvez également vous référer au [répertoire examples de ce dépôt](../examples) pour des exemples plus avancés et des exemples d'intégration :

-- [TypeScript & React example with raw `execute` and built-in GraphQL-Codegen](../examples/execute)
+- [Exemple TypeScript & React avec un `execute` brut et GraphQL-Codegen intégré](../examples/execute)
- [TS/JS NodeJS standalone mode](../examples/node)
-- [Client-Side GraphQL Composition](../examples/composition)
-- [Integration with Urql and React](../examples/urql)
-- [Integration with NextJS and TypeScript](../examples/nextjs)
-- [Integration with Apollo-Client and React](../examples/apollo)
-- [Integration with React-Query](../examples/react-query)
-- _Cross-chain merging (same Subgraph, different chains)_
-- - [Parallel SDK calls](../examples/cross-chain-sdk)
-- - [Parallel internal calls with schema extensions](../examples/cross-chain-extension)
-- [Customize execution with Transforms (auto-pagination and auto-block-tracking)](../examples/transforms)
+- [Composition GraphQL côté client](../examples/composition)
+- [Intégration avec Urql et React](../examples/urql)
+- [Intégration avec NextJS et TypeScript](../examples/nextjs)
+- [Intégration avec Apollo-Client et React](../examples/apollo)
+- [Intégration avec React-Query](../examples/react-query)
+- _Fusion inter-chaînes (même subgraph, chaînes différentes)_
+- - [Appels SDK parallèles](../examples/cross-chain-sdk)
+- - [Appels internes parallèles avec les extensions de schéma](../examples/cross-chain-extension)
+- [Personnaliser l'exécution avec Transforms (auto-pagination et auto-block-tracking)](../examples/transforms)

-### Advanced Examples/Features
+### Exemples et fonctionnalités avancés

-#### Customize Network Calls
+#### Personnaliser les appels réseau

-You can customize the network execution (for 
example, to add authentication headers) by using `operationHeaders`: +Vous pouvez personnaliser l'exécution du réseau (par exemple, pour ajouter des en-têtes d'authentification) en utilisant `operationHeaders` : ```yaml sources: @@ -170,7 +170,7 @@ sources: Authorization: Bearer MY_TOKEN ``` -You can also use runtime variables if you wish, and specify it in a declarative way: +Vous pouvez également utiliser des variables d'exécution si vous le souhaitez, et les spécifier de manière déclarative : ```yaml sources: @@ -182,7 +182,7 @@ sources: Authorization: Bearer {context.config.apiToken} ``` -Then, you can specify that when you execute operations: +Vous pouvez ensuite le spécifier lorsque vous exécutez des opérations : ```ts execute(myQuery, myVariables, { @@ -192,11 +192,11 @@ execute(myQuery, myVariables, { }) ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> Vous pouvez trouver la [documentation complète du gestionnaire `graphql` ici](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). -#### Environment Variables Interpolation +#### Interpolation des Variables d'environnement -If you wish to use environment variables in your Graph Client configuration file, you can use interpolation with `env` helper: +Si vous souhaitez utiliser des variables d'environnement dans votre fichier de configuration Graph Client, vous pouvez utiliser l'interpolation avec l'assistant `env` : ```yaml sources: @@ -208,9 +208,9 @@ sources: Authorization: Bearer {env.MY_API_TOKEN} # runtime ``` -Then, make sure to have `MY_API_TOKEN` defined when you run `process.env` at runtime. +Ensuite, assurez-vous que `MY_API_TOKEN` est défini lorsque vous lancez `process.env` au moment de l'exécution. 
-You can also specify environment variables to be filled at build time (during `graphclient build` run) by using the env-var name directly:
+Vous pouvez également spécifier des variables d'environnement à remplir au moment de la construction (pendant l'exécution de `graphclient build`) en utilisant directement le nom de la variable d'environnement :

```yaml
sources:
@@ -219,21 +219,21 @@ sources:
graphql:
endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
operationHeaders:
- Authorization: Bearer ${MY_API_TOKEN} # build time
+ Authorization: Bearer ${MY_API_TOKEN} # temps de construction
```

-> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference).
+> Vous pouvez trouver la [documentation complète du gestionnaire `graphql` ici](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference).

-#### Fetch Strategies and Multiple Graph Indexers
+#### Stratégies de `fetch` et Indexeurs multiples de The Graph

-It's a common practice to use more than one indexer in dApps, so to achieve the ideal experience with The Graph, you can specify several `fetch` strategies in order to make it more smooth and simple.
+C'est une pratique courante d'utiliser plus d'un Indexeur dans les dApps ; pour obtenir l'expérience idéale avec The Graph, vous pouvez donc spécifier plusieurs stratégies `fetch` afin de rendre les choses plus fluides et plus simples.

-All `fetch` strategies can be combined to create the ultimate execution flow.
+Toutes les stratégies `fetch` peuvent être combinées pour créer le flux d'exécution ultime.
`retry`

-The `retry` mechanism allow you to specify the retry attempts for a single GraphQL endpoint/source.
+Le mécanisme `retry` (réessai) vous permet de spécifier le nombre de tentatives pour un seul endpoint/source GraphQL.

The retry flow will execute in both conditions: a netword error, or due to a runtime error (indexing issue/inavailability of the indexer).

@@ -243,7 +243,7 @@ sources:
handler:
graphql:
endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
- retry: 2 # specify here, if you have an unstable/error prone indexer
+ retry: 2 # spécifiez ici si vous avez un Indexeur instable ou sujet aux erreurs
```
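Conceptuellement, le flux `retry` peut être esquissé ainsi (code hypothétique, indépendant de l'implémentation réelle de graph-client) :

```javascript
// Esquisse conceptuelle de la stratégie `retry` : ré-exécuter la requête
// jusqu'à épuisement du nombre de tentatives, puis propager la dernière erreur.
async function executeWithRetry(execute, attempts) {
  let lastError
  for (let i = 0; i < attempts; i++) {
    try {
      return await execute()
    } catch (e) {
      lastError = e // erreur réseau ou erreur d'exécution : on réessaie
    }
  }
  throw lastError
}

// Exemple : une source (hypothétique) qui échoue une fois puis répond.
let calls = 0
const flakySource = async () => {
  calls += 1
  if (calls === 1) throw new Error('Indexeur indisponible')
  return { pairs: [] }
}

executeWithRetry(flakySource, 2).then(() => console.log('réussi après', calls, 'appels'))
```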
@@ -251,7 +251,7 @@ sources:
`timeout` -The `timeout` mechanism allow you to specify the `timeout` for a given GraphQL endpoint. +Le mécanisme `timeout` vous permet de spécifier le `timeout` pour un endpoint GraphQL donné. ```yaml sources: @@ -259,7 +259,7 @@ sources: handler: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 - timeout: 5000 # 5 seconds + timeout: 5000 # 5 secondes ```
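De même, `timeout` revient conceptuellement à mettre la requête en concurrence avec un minuteur (esquisse hypothétique, indépendante de l'implémentation réelle de graph-client) :

```javascript
// Esquisse conceptuelle de la stratégie `timeout` : la requête est coursée
// contre un minuteur ; le premier des deux qui se termine l'emporte.
function executeWithTimeout(execute, ms) {
  let timer
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timeout après ${ms} ms`)), ms)
  })
  return Promise.race([execute(), deadline]).finally(() => clearTimeout(timer))
}

// Exemple : une source (hypothétique) qui répond en 50 ms passe avec un délai de 5000 ms…
const slowSource = () => new Promise((resolve) => setTimeout(() => resolve('data'), 50))
executeWithTimeout(slowSource, 5000).then((r) => console.log(r)) // 'data'

// …mais échoue avec un délai de 10 ms.
executeWithTimeout(slowSource, 10).catch((e) => console.log(e.message)) // 'timeout après 10 ms'
```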
@@ -267,9 +267,9 @@ sources:
`fallback` -The `fallback` mechanism allow you to specify use more than one GraphQL endpoint, for the same source. +Le mécanisme `fallback` vous permet de spécifier l'utilisation de plus d'un endpoint GraphQL, pour la même source. -This is useful if you want to use more than one indexer for the same Subgraph, and fallback when an error/timeout happens. You can also use this strategy in order to use a custom indexer, but allow it to fallback to [The Graph Hosted Service](https://thegraph.com/hosted-service). +Ceci est utile si vous voulez utiliser plus d'un Indexeur pour le même subgraph, et vous replier en cas d'erreur ou de dépassement de délai. Vous pouvez également utiliser cette stratégie pour utiliser un Indexeur personnalisé, mais lui permettre de se replier sur [Le Service Hébergé de The Graph](https://thegraph.com/hosted-service). ```yaml sources: @@ -289,9 +289,9 @@ sources:
`race` -The `race` mechanism allow you to specify use more than one GraphQL endpoint, for the same source, and race on every execution. +Le mécanisme `race` permet d'utiliser plusieurs endpoints GraphQL simultanément pour une même source et de prendre la réponse la plus rapide. -This is useful if you want to use more than one indexer for the same Subgraph, and allow both sources to race and get the fastest response from all specified indexers. +Cette option est utile si vous souhaitez utiliser plus d'un Indexeur pour le même subgraph, et permettre aux deux sources de faire la course et d'obtenir la réponse la plus rapide de tous les Indexeurs spécifiés. ```yaml sources: @@ -308,10 +308,10 @@ sources:
`highestValue`

- - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
-This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources.
+Cette stratégie vous permet d'envoyer des requêtes parallèles à différents endpoints pour la même source et de choisir la réponse la plus à jour.
+
+Cette option est utile si vous souhaitez choisir les données les plus synchronisées pour le même subgraph parmi différents Indexeurs/sources.

```yaml
sources:
@@ -349,9 +349,9 @@ graph LR;
-#### Block Tracking +#### Suivi des blocs -The Graph Client can track block numbers and do the following queries by following [this pattern](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) with `blockTracking` transform; +The Graph Client peut suivre les numéros de blocs et effectuer les requêtes suivantes en suivant [ce schéma](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) avec la transformation `blockTracking` ; ```yaml sources: @@ -361,23 +361,23 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 transforms: - blockTracking: - # You might want to disable schema validation for faster startup + # Vous pouvez désactiver la validation des schémas pour un démarrage plus rapide validateSchema: true - # Ignore the fields that you don't want to be tracked + # Ignorer les champs qui ne doivent pas être suivis ignoreFieldNames: [users, prices] - # Exclude the operation with the following names + # Exclure les opérations avec les noms suivants ignoreOperationNames: [NotFollowed] ``` -[You can try a working example here](../examples/transforms) +[Vous pouvez essayer un exemple pratique ici](../examples/transforms) -#### Automatic Pagination +#### Pagination automatique -With most subgraphs, the number of records you can fetch is limited. In this case, you have to send multiple requests with pagination. +Dans la plupart des subgraphs, le nombre d'enregistrements que vous pouvez récupérer est limité. Dans ce cas, vous devez envoyer plusieurs requêtes avec pagination. 
```graphql
query {
- # Will throw an error if the limit is 1000
+ # Lance une erreur si la limite est de 1000
users(first: 2000) {
id
name
@@ -385,11 +385,11 @@ query {
}
```

-So you have to send the following operations one after the other:
+Vous devez donc envoyer les opérations suivantes l'une après l'autre :

```graphql
query {
- # Will throw an error if the limit is 1000
+ # Lance une erreur si la limite est de 1000
users(first: 1000) {
id
name
@@ -397,11 +397,11 @@ query {
}
```

-Then after the first response:
+Ensuite, après la première réponse :

```graphql
query {
- # Will throw an error if the limit is 1000
+ # Lance une erreur si la limite est de 1000
users(first: 1000, skip: 1000) {
id
name
@@ -409,9 +409,9 @@ query {
}
```

-After the second response, you have to merge the results manually. But instead The Graph Client allows you to do the first one and automatically does those multiple requests for you under the hood.
+Après la deuxième réponse, vous devez fusionner les résultats manuellement. The Graph Client, en revanche, vous permet d'envoyer uniquement la première requête et effectue automatiquement ces requêtes multiples pour vous en arrière-plan.

-All you have to do is:
+Tout ce que vous avez à faire, c'est :

```yaml
sources:
@@ -421,21 +421,21 @@ sources:
endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
transforms:
- autoPagination:
- # You might want to disable schema validation for faster startup
+ # Vous pouvez désactiver la validation des schémas pour accélérer le démarrage.
validateSchema: true
```

-[You can try a working example here](../examples/transforms)
+[Vous pouvez essayer un exemple pratique ici](../examples/transforms)

-#### Client-side Composition
+#### Composition côté client

-The Graph Client has built-in support for client-side GraphQL Composition (powered by [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). 
+The Graph Client est doté d'une prise en charge intégrée de la composition GraphQL côté client (assurée par [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)).

-You can leverage this feature in order to create a single GraphQL layer from multiple Subgraphs, deployed on multiple indexers.
+Vous pouvez tirer parti de cette fonctionnalité pour créer une seule couche GraphQL à partir de plusieurs subgraphs, déployés sur plusieurs Indexeurs.

-> 💡 Tip: You can compose any GraphQL sources, and not only Subgraphs!
+> 💡 Astuce : Vous pouvez composer n'importe quelle source GraphQL, et pas seulement des subgraphs !

-Trivial composition can be done by adding more than one GraphQL source to your `.graphclientrc.yml` file, here's an example:
+Une composition triviale peut être faite en ajoutant plus d'une source GraphQL à votre fichier `.graphclientrc.yml` ; voici un exemple :

```yaml
sources:
@@ -449,15 +449,15 @@ sources:
endpoint: https://api.thegraph.com/subgraphs/name/graphprotocol/compound-v2
```

-As long as there a no conflicts across the composed schemas, you can compose it, and then run a single query to both Subgraphs:
+Tant qu'il n'y a pas de conflit entre les schémas composés, vous pouvez les composer, puis exécuter une seule requête sur les deux subgraphs :

```graphql
query myQuery {
- # this one is coming from compound-v2
+ # Celui-ci provient de compound-v2
markets(first: 7) {
borrowRate
}
- # this one is coming from uniswap-v2
+ # Celui-ci provient d'uniswap-v2
pair(id: "0x00004ee988665cdda9a1080d5792cecd16dc1220") {
id
token0 {
@@ -470,33 +470,33 @@ query myQuery {
}
}
```

-You can also resolve conflicts, rename parts of the schema, add custom GraphQL fields, and modify the entire execution phase.
+Vous pouvez également résoudre des conflits, renommer des parties du schéma, ajouter des champs GraphQL personnalisés et modifier l'ensemble de la phase d'exécution. 
-For advanced use-cases with composition, please refer to the following resources:
+Pour les cas d'utilisation avancés de la composition, veuillez vous référer aux ressources suivantes :

-- [Advanced Composition Example](../examples/composition)
-- [GraphQL-Mesh Schema transformations](https://graphql-mesh.com/docs/transforms/transforms-introduction)
-- [GraphQL-Tools Schema-Stitching documentation](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)
+- [Exemple de composition avancée](../examples/composition)
+- [Transformations de schémas GraphQL-Mesh](https://graphql-mesh.com/docs/transforms/transforms-introduction)
+- [Documentation GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)

-#### TypeScript Support
+#### Prise en charge de TypeScript

-If your project is written in TypeScript, you can leverage the power of [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) and have a fully-typed GraphQL client experience.
+Si votre projet est écrit en TypeScript, vous pouvez exploiter la puissance de [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) et avoir une expérience GraphQL client entièrement typée.

-The standalone mode of The GraphQL, and popular GraphQL client libraries like Apollo-Client and urql has built-in support for `TypedDocumentNode`!
+Le mode autonome de The Graph Client, ainsi que les bibliothèques client GraphQL populaires comme Apollo-Client et urql, prennent en charge nativement `TypedDocumentNode` !

-The Graph Client CLI comes with a ready-to-use configuration for [GraphQL Code Generator](https://graphql-code-generator.com), and it can generate `TypedDocumentNode` based on your GraphQL operations.
+La CLI The Graph Client est livrée avec une configuration prête à l'emploi pour [GraphQL Code Generator](https://graphql-code-generator.com), et elle peut générer des `TypedDocumentNode` sur la base de vos opérations GraphQL. 
-To get started, define your GraphQL operations in your application code, and point to those files using the `documents` section of `.graphclientrc.yml`:
+Pour commencer, définissez vos opérations GraphQL dans le code de votre application, et pointez vers ces fichiers en utilisant la section `documents` de `.graphclientrc.yml` :

```yaml
sources:
- - # ... your Subgraphs/GQL sources here
+ - # ... vos sources Subgraphs/GQL ici

documents:
- ./src/example-query.graphql
```

-You can also use Glob expressions, or even point to code files, and the CLI will find your GraphQL queries automatically:
+Vous pouvez également utiliser des expressions Glob, ou même pointer vers des fichiers de code, et la CLI trouvera automatiquement vos requêtes GraphQL :

```yaml
documents:
@@ -504,37 +504,37 @@ documents:
- './src/**/*.{ts,tsx,js,jsx}'
```

-Now, run the GraphQL CLI `build` command again, the CLI will generate a `TypedDocumentNode` object under `.graphclient` for every operation found.
+Maintenant, lancez à nouveau la commande `build` de la CLI GraphQL ; la CLI générera un objet `TypedDocumentNode` sous `.graphclient` pour chaque opération trouvée.

-> Make sure to name your GraphQL operations, otherwise it will be ignored!
+> Veillez à nommer vos opérations GraphQL, sinon elles seront ignorées !

-For example, a query called `query ExampleQuery` will have the corresponding `ExampleQueryDocument` generated in `.graphclient`. You can now import it and use that for your GraphQL calls, and you'll have a fully typed experience without writing or specifying any TypeScript manually:
+Par exemple, une requête appelée `query ExampleQuery` aura le `ExampleQueryDocument` correspondant généré dans `.graphclient`. 
Vous pouvez maintenant l'importer et l'utiliser pour vos appels GraphQL, et vous aurez une expérience entièrement typée sans écrire ou spécifier manuellement du TypeScript : ```ts import { ExampleQueryDocument, execute } from '../.graphclient' async function main() { - // "result" variable is fully typed, and represents the exact structure of the fields you selected in your query. + // La variable "result" est entièrement typée et représente la structure exacte des champs que vous avez sélectionnés dans votre requête. const result = await execute(ExampleQueryDocument, {}) console.log(result) } ``` -> You can find a [TypeScript project example here](../examples/urql). +> Vous pouvez trouver un [exemple de projet TypeScript ici](../examples/urql). -#### Client-Side Mutations +#### Mutations côté client -Due to the nature of Graph-Client setup, it is possible to add client-side schema, that you can later bridge to run any arbitrary code. +En raison de la nature de la configuration de Graph-Client, il est possible d'ajouter un schéma côté client, que vous pouvez ensuite relier pour exécuter n'importe quel code arbitraire. -This is helpful since you can implement custom code as part of your GraphQL schema, and have it as unified application schema that is easier to track and develop. +Cela est utile car vous pouvez implémenter du code personnalisé dans le cadre de votre schéma GraphQL et en faire un schéma d'application unifié qui est plus facile à suivre et à développer. -> This document explains how to add custom mutations, but in fact you can add any GraphQL operation (query/mutation/subscriptions). See [Extending the unified schema article](https://graphql-mesh.com/docs/guides/extending-unified-schema) for more information about this feature. +> Ce document explique comment ajouter des mutations personnalisées, mais en fait vous pouvez ajouter n'importe quelle opération GraphQL (requête/mutation/abonnements).
Voir l'article [Étendre le schéma unifié](https://graphql-mesh.com/docs/guides/extending-unified-schema) pour plus d'informations sur cette fonctionnalité. -To get started, define a `additionalTypeDefs` section in your config file: +Pour commencer, définissez une section `additionalTypeDefs` dans votre fichier de configuration : ```yaml additionalTypeDefs: | - # We should define the missing `Mutation` type + # Nous devrions définir le type `Mutation` manquant extend schema { mutation: Mutation } @@ -548,21 +548,21 @@ additionalTypeDefs: | } ``` -Then, add a pointer to a custom GraphQL resolvers file: +Ensuite, ajoutez un pointeur vers un fichier de résolveurs GraphQL personnalisés : ```yaml additionalResolvers: - './resolvers' ``` -Now, create `resolver.js` (or, `resolvers.ts`) in your project, and implement your custom mutation: +Maintenant, créez `resolver.js` (ou `resolvers.ts`) dans votre projet, et implémentez votre mutation personnalisée : ```js module.exports = { Mutation: { async doSomething(root, args, context, info) { - // Here, you can run anything you wish. - // For example, use `web3` lib, connect a wallet and so on. + // Ici, vous pouvez exécuter tout ce que vous voulez. + // Par exemple, utiliser la bibliothèque `web3`, connecter un portefeuille et ainsi de suite. return true }, @@ -570,17 +570,17 @@ module.exports = { } ``` -If you are using TypeScript, you can also get fully type-safe signature by doing: +Si vous utilisez TypeScript, vous pouvez également obtenir une signature entièrement typée en procédant ainsi : ```ts import { Resolvers } from './.graphclient' -// Now it's fully typed! +// Maintenant, c'est entièrement typé ! const resolvers: Resolvers = { Mutation: { async doSomething(root, args, context, info) { - // Here, you can run anything you wish. - // For example, use `web3` lib, connect a wallet and so on. + // Ici, vous pouvez exécuter tout ce que vous voulez.
+ // Par exemple, utiliser la bibliothèque `web3`, connecter un portefeuille et ainsi de suite. return true }, @@ -590,22 +590,22 @@ const resolvers: Resolvers = { export default resolvers ``` -If you need to inject runtime variables into your GraphQL execution `context`, you can use the following snippet: +Si vous avez besoin d'injecter des variables d'exécution dans votre `context` d'exécution GraphQL, vous pouvez utiliser l'extrait suivant : ```ts execute( MY_QUERY, {}, { - myHelper: {}, // this will be available in your Mutation resolver as `context.myHelper` + myHelper: {}, // Ceci sera disponible dans votre résolveur de Mutation en tant que `context.myHelper` }, ) ``` -> [You can read more about client-side schema extensions here](https://graphql-mesh.com/docs/guides/extending-unified-schema) +> [Pour en savoir plus sur les extensions de schéma côté client, cliquez ici](https://graphql-mesh.com/docs/guides/extending-unified-schema) -> [You can also delegate and call Query fields as part of your mutation](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources) +> [Vous pouvez également déléguer et appeler des champs de requête dans le cadre de votre mutation](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources) -## License +## Licence -Released under the [MIT license](../LICENSE). +Publié sous la [licence MIT](../LICENSE).
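Pour illustrer l'injection de contexte décrite plus haut, voici une esquisse minimale et indépendante de Graph Client (le helper `myHelper` et son contenu sont hypothétiques) montrant comment la valeur passée en troisième argument de `execute` devient visible dans un résolveur :

```typescript
// Esquisse : un résolveur de mutation hypothétique qui lit une valeur
// injectée dans le `context` d'exécution (ici simulé sans Graph Client).
type Context = { myHelper: { log: (message: string) => void } }

const resolvers = {
  Mutation: {
    // `context` correspond au troisième argument de `execute(MY_QUERY, {}, { myHelper })`.
    async doSomething(_root: unknown, _args: unknown, context: Context): Promise<boolean> {
      context.myHelper.log('mutation exécutée')
      return true
    },
  },
}

// Simulation de l'appel : on injecte `myHelper`, puis on invoque le résolveur.
const messages: string[] = []
resolvers.Mutation.doSomething(null, {}, {
  myHelper: { log: (m) => messages.push(m) },
}).then((ok) => console.log(ok, messages))
```

Le même principe s'applique à tout objet injecté : connexion de portefeuille, client HTTP, logger, etc.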
diff --git a/website/src/pages/fr/subgraphs/querying/graph-client/architecture.md b/website/src/pages/fr/subgraphs/querying/graph-client/architecture.md index 99098cd77b95..f14ce931aecc 100644 --- a/website/src/pages/fr/subgraphs/querying/graph-client/architecture.md +++ b/website/src/pages/fr/subgraphs/querying/graph-client/architecture.md @@ -1,13 +1,13 @@ -# The Graph Client Architecture +# L'architecture The Graph Client -To address the need to support a distributed network, we plan to take several actions to ensure The Graph client provides everything app needs: +Pour répondre à la nécessité de prendre en charge un réseau distribué, nous prévoyons de prendre plusieurs mesures pour faire en sorte que The Graph Client fournisse tout ce dont l'application a besoin : -1. Compose multiple Subgraphs (on the client-side) -2. Fallback to multiple indexers/sources/hosted services -3. Automatic/Manual source picking strategy -4. Agnostic core, with the ability to run integrate with any GraphQL client +1. Composer plusieurs subgraphs (côté client) +2. Repli sur plusieurs Indexeurs/sources/services hébergés +3. Stratégie de sélection automatique/manuelle de la source +4. Un noyau agnostique, capable de s'intégrer à n'importe quel client GraphQL -## Standalone mode +## Mode Standalone ```mermaid graph LR; @@ -17,7 +17,7 @@ graph LR; op-->sB[Subgraph B]; ``` -## With any GraphQL client +## Avec n'importe quel client GraphQL ```mermaid graph LR; @@ -28,11 +28,11 @@ graph LR; op-->sB[Subgraph B]; ``` -## Subgraph Composition +## Composition de subgraphs -To allow simple and efficient client-side composition, we'll use [`graphql-tools`](https://graphql-tools.com) to create a remote schema / Executor, then can be hooked into the GraphQL client.
+Pour permettre une composition simple et efficace côté client, nous allons utiliser [`graphql-tools`](https://graphql-tools.com) pour créer un schéma / Executor distant, qui peut ensuite être branché sur le client GraphQL. -API could be either raw `graphql-tools` transformers, or using [GraphQL-Mesh declarative API](https://graphql-mesh.com/docs/transforms/transforms-introduction) for composing the schema. +L'API peut s'appuyer soit sur des transformateurs `graphql-tools` bruts, soit sur l'[API déclarative GraphQL-Mesh](https://graphql-mesh.com/docs/transforms/transforms-introduction) pour composer le schéma. ```mermaid graph LR; @@ -42,9 +42,9 @@ graph LR; m-->s3[Subgraph C GraphQL schema]; ``` -## Subgraph Execution Strategies +## Stratégies d'exécution des subgraphs -Within every Subgraph defined as source, there will be a way to define it's source(s) indexer and the querying strategy, here are a few options: +Dans chaque subgraph défini comme source, il sera possible de définir l'Indexeur de la (des) source(s) et la stratégie d'interrogation ; voici quelques options : ```mermaid graph LR; @@ -85,9 +85,9 @@ graph LR; end ``` -> We can ship a several built-in strategies, along with a simple interfaces to allow developers to write their own. +> Nous pouvons proposer plusieurs stratégies intégrées, ainsi qu'une interface simple permettant aux développeurs d'écrire leurs propres stratégies.
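Pour donner une idée de ce à quoi une telle interface pourrait ressembler, voici une esquisse hypothétique en TypeScript (les types `Executor` et `SourceStrategy` ne sont pas définis dans ce document ; ce sont des noms d'illustration) d'une stratégie de repli entre plusieurs sources :

```typescript
// Esquisse hypothétique : une "stratégie" reçoit un exécuteur par source
// (Indexeur, service hébergé, etc.) et renvoie un exécuteur unique
// qui décide du routage de chaque requête.
type Executor = (query: string) => Promise<unknown>
type SourceStrategy = (executors: Executor[]) => Executor

// Exemple : stratégie de repli ("fallback") — essaie chaque source dans l'ordre.
const fallback: SourceStrategy = (executors) => async (query) => {
  let lastError: unknown
  for (const exec of executors) {
    try {
      return await exec(query)
    } catch (e) {
      lastError = e // cette source a échoué, on passe à la suivante
    }
  }
  throw lastError
}
```

Une stratégie « race » ou « bloc le plus récent » suivrait la même forme, en ne changeant que la logique de routage.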
-To take the concept of strategies to the extreme, we can even build a magical layer that does subscription-as-query, with any hook, and provide a smooth DX for dapps: +Pour pousser le concept de stratégies à l'extrême, nous pouvons même construire une couche magique qui transforme l'abonnement en requête (subscription-as-query), avec n'importe quel hook, et fournit un DX fluide pour les dapps : ```mermaid graph LR; @@ -99,5 +99,5 @@ graph LR; sc[Smart Contract]-->|change event|op; ``` -With this mechanism, developers can write and execute GraphQL `subscription`, but under the hood we'll execute a GraphQL `query` to The Graph indexers, and allow to connect any external hook/probe for re-running the operation. -This way, we can watch for changes on the Smart Contract itself, and the GraphQL client will fill the gap on the need to real-time changes from The Graph. +Avec ce mécanisme, les développeurs peuvent écrire et exécuter des `subscription` GraphQL, mais sous le capot, nous exécuterons une `query` GraphQL vers les Indexeurs de The Graph, et nous permettrons de connecter n'importe quel hook/probe externe pour ré-exécuter l'opération. +De cette façon, nous pouvons surveiller les changements sur le Smart Contract lui-même, et le client GraphQL comblera le besoin de changements en temps réel depuis The Graph. diff --git a/website/src/pages/fr/subgraphs/querying/graph-client/live.md b/website/src/pages/fr/subgraphs/querying/graph-client/live.md index e6f726cb4352..4337c6eb2d0a 100644 --- a/website/src/pages/fr/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/fr/subgraphs/querying/graph-client/live.md @@ -1,10 +1,10 @@ -# `@live` queries in `graph-client` +# Requêtes `@live` dans `graph-client` -Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. +Graph-Client implémente une directive personnalisée `@live` qui permet à chaque requête GraphQL de fonctionner avec des données en temps réel.
-## Getting Started +## Introduction -Start by adding the following configuration to your `.graphclientrc.yml` file: +Commencez par ajouter la configuration suivante à votre fichier `.graphclientrc.yml` : ```yaml plugins: @@ -14,7 +14,7 @@ plugins: ## Usage -Set the default update interval you wish to use, and then you can apply the following GraphQL `@directive` over your GraphQL queries: +Définissez l'intervalle de mise à jour par défaut que vous souhaitez utiliser, puis vous pouvez appliquer la `@directive` GraphQL suivante à vos requêtes GraphQL : ```graphql query ExampleQuery @live { @@ -26,7 +26,7 @@ query ExampleQuery @live { } ``` -Or, you can specify a per-query interval: +Vous pouvez également spécifier un intervalle par requête : ```graphql query ExampleQuery @live(interval: 5000) { @@ -36,8 +36,8 @@ query ExampleQuery @live(interval: 5000) { } ``` -## Integrations +## Intégrations Since the entire network layer (along with the `@live` mechanism) is implemented inside `graph-client` core, you can use Live queries with every GraphQL client (such as Urql or Apollo-Client), as long as it supports streame responses (`AsyncIterable`). -No additional setup is required for GraphQL clients cache updates. +Aucune configuration supplémentaire n'est requise pour les mises à jour du cache des clients GraphQL. diff --git a/website/src/pages/fr/subgraphs/querying/graphql-api.mdx b/website/src/pages/fr/subgraphs/querying/graphql-api.mdx index 204fae24a5a5..ae81b6e5427c 100644 --- a/website/src/pages/fr/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/fr/subgraphs/querying/graphql-api.mdx @@ -2,23 +2,23 @@ title: API GraphQL --- -Learn about the GraphQL Query API used in The Graph. +Découvrez l'API de requête GraphQL utilisée dans The Graph. ## Qu'est-ce que GraphQL ? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. 
+[GraphQL](https://graphql.org/learn/) est un langage de requête pour les API et un environnement d'exécution permettant d'exécuter ces requêtes avec vos données existantes. The Graph utilise GraphQL pour interroger les subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +Pour comprendre le rôle plus large joué par GraphQL, consultez [développer](/subgraphs/developing/introduction/) et [créer un subgraph](/developing/creating-a-subgraph/). -## Queries with GraphQL +## Requêtes avec GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +Dans votre schéma de subgraph, vous définissez des types appelés `Entities`. Pour chaque type `Entity`, les champs `entity` et `entities` seront générés sur le type `Query` de premier niveau. -> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. +> Note : `query` n'a pas besoin d'être inclus au début de la requête `graphql` lors de l'utilisation de The Graph. ### Exemples -Query for a single `Token` entity defined in your schema: +Requête pour une seule entité `Token` définie dans votre schéma : ```graphql { @@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> Note: When querying for a single entity, the `id` field is required, and it must be written as a string. +> Note : Lors de l'interrogation d'une seule entité, le champ `id` est obligatoire et doit être écrit sous forme de chaîne de caractères. -Query all `Token` entities: +Requête de toutes les entités `Token` : ```graphql { @@ -44,10 +44,10 @@ Query all `Token` entities: ### Tri -When querying a collection, you may: +Lors de l'interrogation d'une collection, vous pouvez :
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. +- Utiliser le paramètre `orderBy` pour trier les données en fonction d'un attribut spécifique. +- Utiliser `orderDirection` pour spécifier la direction du tri, `asc` pour ascendant ou `desc` pour descendant. #### Exemple @@ -62,9 +62,9 @@ When querying a collection, you may: #### Exemple de tri d'entités imbriquées -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +Depuis Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), les entités peuvent être triées sur la base des entités imbriquées. -The following example shows tokens sorted by the name of their owner: +L'exemple suivant montre des jetons triés par le nom de leur propriétaire : ```graphql { @@ -79,18 +79,18 @@ The following example shows tokens sorted by the name of their owner: } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> Actuellement, vous pouvez trier par des types `String` ou `ID` à un niveau de profondeur sur les champs `@entity` et `@derivedFrom`. Malheureusement, le [tri par interfaces sur des entités d'un niveau de profondeur](https://github.com/graphprotocol/graph-node/pull/4058), ainsi que le tri par des champs qui sont des tableaux ou des entités imbriquées, n'est pas encore pris en charge. ### Pagination -When querying a collection, it's best to: +Lors de l'interrogation d'une collection, il est préférable de :
-- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. -- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. +- Utiliser le paramètre `first` pour paginer à partir du début de la collection. + - L'ordre de tri par défaut est le tri par `ID` dans l'ordre alphanumérique croissant, et **non** par heure de création. +- Utiliser le paramètre `skip` pour sauter des entités et paginer. Par exemple, `first:100` affiche les 100 premières entités et `first:100, skip:100` affiche les 100 entités suivantes. +- Éviter d'utiliser les valeurs `skip` dans les requêtes, car elles sont généralement peu performantes. Pour récupérer un grand nombre d'éléments, il est préférable de parcourir les entités en fonction d'un attribut, comme indiqué dans l'exemple précédent. -#### Example using `first` +#### Exemple d'utilisation de `first` Interroger les 10 premiers tokens : @@ -103,11 +103,11 @@ Interroger les 10 premiers tokens : } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +Pour rechercher des groupes d'entités au milieu d'une collection, le paramètre `skip` peut être utilisé en conjonction avec le paramètre `first` pour sauter un nombre spécifié d'entités en commençant par le début de la collection.
-#### Example using `first` and `skip` +#### Exemple utilisant `first` et `skip` -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +Interroger 10 entités `Token`, décalées de 10 positions par rapport au début de la collection : ```graphql { @@ -118,9 +118,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example using `first` and `id_ge` +#### Exemple utilisant `first` et `id_ge` -If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: +Si un client a besoin de récupérer un grand nombre d'entités, il est plus performant de baser les requêtes sur un attribut et de filtrer par cet attribut. Par exemple, un client pourrait récupérer un grand nombre de jetons en utilisant cette requête : ```graphql query manyTokens($lastID: String) { @@ -131,16 +131,16 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +La première fois, il enverra la requête avec `lastID = ""`, et pour les requêtes suivantes, il fixera `lastID` à l'attribut `id` de la dernière entité de la requête précédente. Cette approche est nettement plus performante que l'utilisation de valeurs `skip` croissantes. ### Filtration -- You can use the `where` parameter in your queries to filter for different properties. -- You can filter on multiple values within the `where` parameter. +- Vous pouvez utiliser le paramètre `where` dans vos requêtes pour filtrer selon différentes propriétés. +- Vous pouvez filtrer sur plusieurs valeurs dans le paramètre `where`.
-#### Example using `where` +#### Exemple d'utilisation de `where` -Query challenges with `failed` outcome: +Requête des challenges dont le résultat est `failed` : ```graphql { @@ -154,7 +154,7 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +Vous pouvez utiliser des suffixes comme `_gt`, `_lte` pour comparer les valeurs : #### Exemple de filtrage de plage @@ -170,9 +170,9 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Exemple de filtrage par bloc -You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. +Vous pouvez également filtrer les entités qui ont été mises à jour dans ou après un bloc spécifié avec `_change_block(number_gte: Int)`. -Cela peut être utile si vous cherchez à récupérer uniquement les entités qui ont changé, par exemple depuis la dernière fois que vous avez interrogé. Ou bien, il peut être utile d'étudier ou de déboguer la façon dont les entités changent dans votre subgraph (si combiné avec un filtre de bloc, vous pouvez isoler uniquement les entités qui ont changé dans un bloc spécifique). +Cela peut être utile si vous cherchez à récupérer uniquement les entités qui ont changé, par exemple depuis la dernière fois que vous avez interrogé. Ou encore, elle peut être utile pour étudier ou déboguer la façon dont les entités changent dans votre subgraph (si elle est combinée à un filtre de bloc, vous pouvez isoler uniquement les entités qui ont changé dans un bloc spécifique). ```graphql { @@ -186,7 +186,7 @@ Cela peut être utile si vous cherchez à récupérer uniquement les entités qu #### Exemple de filtrage d'entités imbriquées -Filtering on the basis of nested entities is possible in the fields with the `_` suffix. +Le filtrage sur la base d'entités imbriquées est possible dans les champs avec le suffixe `_`.
Cela peut être utile si vous souhaitez récupérer uniquement les entités dont les entités au niveau enfant remplissent les conditions fournies. @@ -204,11 +204,11 @@ Cela peut être utile si vous souhaitez récupérer uniquement les entités dont #### Opérateurs logiques -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria. +Depuis Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), vous pouvez regrouper plusieurs paramètres dans le même argument `where` en utilisant les opérateurs `and` ou `or` pour filtrer les résultats en fonction de plusieurs critères. -##### `AND` Operator +##### L'opérateur `AND` -The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +L'exemple suivant filtre les défis avec `outcome` `succeeded` et `number` supérieur ou égal à `100`. ```graphql { @@ -222,7 +222,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num } ``` -> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. +> **Sucre syntaxique :** Vous pouvez simplifier la requête ci-dessus en supprimant l'opérateur `and` et en passant des sous-expressions séparées par des virgules. > > ```graphql > { @@ -236,9 +236,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num > } > ``` -##### `OR` Operator +##### L'opérateur `OR` -The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +L'exemple suivant filtre les défis avec `outcome` `succeeded` ou `number` supérieur ou égal à `100`.
```graphql { @@ -252,7 +252,7 @@ The following example filters for challenges with `outcome` `succeeded` or `numb } ``` -> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries. +> **Note** : Lors de l'élaboration des requêtes, il est important de prendre en compte l'impact sur les performances de l'utilisation de l'opérateur `or`. Si `or` peut être un outil utile pour élargir les résultats d'une recherche, il peut aussi avoir des coûts importants. L'un des principaux problèmes de l'opérateur `or` est qu'il peut ralentir les requêtes. En effet, `or` oblige la base de données à parcourir plusieurs index, ce qui peut prendre beaucoup de temps. Pour éviter ces problèmes, il est recommandé aux développeurs d'utiliser l'opérateur `and` au lieu de `or` chaque fois que cela est possible. Cela permet un filtrage plus précis et peut conduire à des requêtes plus rapides et plus précises. #### Tous les filtres @@ -281,9 +281,9 @@ _not_ends_with _not_ends_with_nocase ``` -> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types. +> Veuillez noter que certains suffixes ne sont supportés que pour des types spécifiques. Par exemple, `Boolean` ne supporte que `_not`, `_in`, et `_not_in`, mais `_` n'est disponible que pour les types objet et interface.
-In addition, the following global filters are available as part of `where` argument: +En outre, les filtres globaux suivants sont disponibles dans l'argument `where` : ```graphql _change_block(number_gte: Int) @@ -291,11 +291,11 @@ _change_block(number_gte: Int) ### Interrogation des états précédents -You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +Vous pouvez interroger l'état de vos entités non seulement pour le dernier bloc, ce qui est le cas par défaut, mais aussi pour un bloc arbitraire dans le passé. Le bloc auquel une requête doit se produire peut être spécifié soit par son numéro de bloc, soit par son hash de bloc, en incluant un argument `block` dans les champs de niveau supérieur des requêtes. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +Le résultat d'une telle requête ne changera pas au fil du temps, c'est-à-dire qu'une requête portant sur un certain bloc passé renverra le même résultat quel que soit le moment où elle est exécutée, à l'exception d'une requête portant sur un bloc très proche de la tête de la chaîne, dont le résultat pourrait changer s'il s'avérait que ce bloc ne figurait **pas** sur la chaîne principale et que la chaîne était réorganisée. Une fois qu'un bloc peut être considéré comme définitif, le résultat de la requête ne changera pas.
-> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Remarque : l'implémentation actuelle est encore sujette à certaines limitations qui pourraient violer ces garanties. L'implémentation ne permet pas toujours de déterminer si un hash de bloc donné n'est pas du tout sur la chaîne principale, ou si le résultat d'une requête par hash de bloc pour un bloc qui n'est pas encore considéré comme final peut être influencé par une réorganisation de blocs qui a lieu en même temps que la requête. Ces limitations n'affectent pas les résultats des requêtes par hash de bloc lorsque le bloc est final et que l'on sait qu'il se trouve sur la chaîne principale. [Cette issue](https://github.com/graphprotocol/graph-node/issues/1405) explique ces limitations en détail. #### Exemple @@ -311,7 +311,7 @@ The result of such a query will not change over time, i.e., querying at a certai } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +Cette requête renverra les entités `Challenge` et les entités `Application` qui leur sont associées, telles qu'elles existaient directement après le traitement du bloc numéro 8 000 000.
#### Exemple @@ -327,13 +327,13 @@ This query will return `Challenge` entities, and their associated `Application` } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. +Cette requête renverra les entités `Challenge`, et leurs entités `Application` associées, telles qu'elles existaient directement après le traitement du bloc avec le hash donné. ### Requêtes de recherche en texte intégral -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Les champs de requête de recherche en texte intégral fournissent une API de recherche textuelle expressive qui peut être ajoutée au schéma du subgraph et personnalisée. Reportez-vous à [Définir des champs de recherche en texte intégral](/developing/creating-a-subgraph/#defining-fulltext-search-fields) pour ajouter la recherche en texte intégral à votre subgraph. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +Les requêtes de recherche en texte intégral comportent un champ obligatoire, `text`, pour fournir les termes de la recherche. Plusieurs opérateurs spéciaux de texte intégral peuvent être utilisés dans ce champ de recherche `text`. Opérateurs de recherche en texte intégral : @@ -346,7 +346,7 @@ Opérateurs de recherche en texte intégral : #### Exemples -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. +En utilisant l'opérateur `or`, cette requête filtrera les entités de blog ayant des variations de "anarchism" ou "crumpet" dans leurs champs de texte intégral.
```graphql { @@ -359,7 +359,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +L'opérateur `follow by` spécifie des mots séparés par une distance donnée dans les documents en texte intégral. La requête suivante renverra tous les blogs contenant des variations de "decentralize" suivies de "philosophy" ```graphql { @@ -387,25 +387,25 @@ Combinez des opérateurs de texte intégral pour créer des filtres plus complex ### Validation -Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more. +Graph Node met en œuvre une validation [basée sur les spécifications](https://spec.graphql.org/October2021/#sec-Validation) des requêtes GraphQL qu'il reçoit à l'aide de [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), qui est basée sur l'implémentation de référence [graphql-js](https://github.com/graphql/graphql-js/tree/main/src/validation). Les requêtes qui échouent à une règle de validation sont accompagnées d'une erreur standard - consultez les [spécifications GraphQL](https://spec.graphql.org/October2021/#sec-Validation) pour en savoir plus. ## Schema -The schema of your dataSources, i.e.
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +Le schéma de vos sources de données, c'est-à-dire les types d'entités, les valeurs et les relations qui peuvent être interrogés, est défini dans le [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +Les schémas GraphQL définissent généralement des types racines pour les `queries`, les `subscriptions` et les `mutations`. The Graph ne prend en charge que les `queries`. Le type racine `Query` pour votre subgraph est automatiquement généré à partir du schéma GraphQL qui est inclus dans votre [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> Remarque : notre API n'expose pas les mutations car les développeurs sont censés émettre des transactions directement sur la blockchain sous-jacente à partir de leurs applications. ### Entities -All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. +Tous les types GraphQL avec des directives `@entity` dans votre schéma seront traités comme des entités et doivent avoir un champ `ID`. -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+> **Note:** Actuellement, tous les types de votre schéma doivent avoir une directive `@entity`. Dans le futur, nous traiterons les types n'ayant pas la directive `@entity` comme des objets de valeur, mais cela n'est pas encore pris en charge. ### Métadonnées du Subgraph -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +Tous les subgraphs ont un objet `_Meta_` auto-généré, qui permet d'accéder aux métadonnées du subgraph. Cet objet peut être interrogé comme suit : ```graphQL { @@ -421,14 +421,14 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -Si un bloc est fourni, les métadonnées sont celles de ce bloc, sinon le dernier bloc indexé est utilisé. S'il est fourni, le bloc doit être postérieur au bloc de départ du subgraph et inférieur ou égal au bloc indexé le plus récent. +Si un bloc est fourni, les métadonnées sont celles de ce bloc, sinon le dernier bloc indexé est utilisé. S'il est fourni, le bloc doit être postérieur au bloc de départ du subgraph et inférieur ou égal au dernier bloc indexé. -`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. +`deployment` est un ID unique, correspondant au CID IPFS du fichier `subgraph.yaml`.
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`): +`block` fournit des informations sur le dernier bloc (en tenant compte des contraintes de bloc passées à `_meta`) : - hash : le hash du bloc - number: the block number -- timestamp : l'horodatage du bloc, si disponible (ceci n'est actuellement disponible que pour les subgraphs indexant les réseaux EVM) +- horodatage : l'horodatage du bloc, s'il est disponible (pour l'instant, cette information n'est disponible que pour les subgraphs indexant les réseaux EVM) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` est un booléen indiquant si le subgraph a rencontré des erreurs d'indexation à un moment donné diff --git a/website/src/pages/fr/subgraphs/querying/introduction.mdx b/website/src/pages/fr/subgraphs/querying/introduction.mdx index 38a2f3d528d7..75088fa635a9 100644 --- a/website/src/pages/fr/subgraphs/querying/introduction.mdx +++ b/website/src/pages/fr/subgraphs/querying/introduction.mdx @@ -3,30 +3,30 @@ title: Interroger The Graph sidebarTitle: Présentation --- -To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer). +Pour commencer à interroger immédiatement, visitez [The Graph Explorer](https://thegraph.com/explorer). ## Aperçu -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +Lorsqu'un subgraph est publié sur The Graph Network, vous pouvez visiter sa page de détails sur Graph Explorer et utiliser l'onglet "Query" pour explorer l'API GraphQL déployée pour chaque subgraph. ## Spécificités⁠ -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. 
You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Chaque subgraph publié dans The Graph Network possède une URL de requête unique dans Graph Explorer, qui permet d'effectuer des requêtes directes. Vous pouvez la trouver en naviguant vers la page de détails du subgraph et en cliquant sur le bouton "Requête" dans le coin supérieur droit. -![Query Subgraph Button](/img/query-button-screenshot.png) +![Bouton d'interrogation de subgraphs](/img/query-button-screenshot.png) -![Query Subgraph URL](/img/query-url-screenshot.png) +![URL d'interrogation de subgraphs](/img/query-url-screenshot.png) -You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/). +Vous remarquerez que cette URL de requête doit utiliser une clé API unique. Vous pouvez créer et gérer vos clés API dans [Subgraph Studio](https://thegraph.com/studio), dans la section "API Keys". Pour en savoir plus sur l'utilisation de Subgraph Studio, cliquez [ici](/deploying/subgraph-studio/). -Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). +Les utilisateurs de Subgraph Studio commencent avec un plan gratuit, qui leur permet d'effectuer 100 000 requêtes par mois. Des requêtes supplémentaires sont disponibles sur le plan de croissance, qui offre une tarification basée sur l'utilisation pour les requêtes supplémentaires, payables par carte de crédit ou en GRT sur Arbitrum. Vous pouvez en savoir plus sur la facturation [ici](/subgraphs/billing/).
-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Veuillez consulter l'[API de requête](/subgraphs/querying/graphql-api/) pour une référence complète sur la manière d'interroger les entités du Subgraph. > -> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. +> Remarque : si vous rencontrez des erreurs 405 lors d'une requête GET vers l'URL de Graph Explorer, veuillez passer à une requête POST. ### Ressources supplémentaires -- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/). -- To query from an application, click [here](/subgraphs/querying/from-an-application/). -- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main). +- Utilisez les [meilleures pratiques d'interrogation GraphQL](/subgraphs/querying/best-practices/). +- Pour effectuer une requête à partir d'une application, cliquez [ici](/subgraphs/querying/from-an-application/). +- Consultez les [exemples de requêtes](https://github.com/graphprotocol/query-examples/tree/main). diff --git a/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx index d44a65306dc1..644b58ccf482 100644 --- a/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx @@ -1,34 +1,34 @@ --- -title: Gérer vos clés API +title: Gestion des clés API --- ## Aperçu -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +Les clés API sont nécessaires pour interroger les subgraphs.
Elles garantissent que les connexions entre les services d'application sont valides et autorisées, y compris l'authentification de l'utilisateur final et de l'appareil utilisant l'application. -### Create and Manage API Keys +### Créer et gérer des clés API -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Allez sur [Subgraph Studio](https://thegraph.com/studio/) et cliquez sur l'onglet **API Keys** pour créer et gérer vos clés API pour des subgraphs spécifiques. -The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. +Le tableau "Clés API" répertorie les clés API existantes et vous permet de les gérer ou de les supprimer. Pour chaque clé, vous pouvez voir son statut, le coût pour la période en cours, la limite de dépenses pour la période en cours et le nombre total de requêtes. -You can click the "three dots" menu to the right of a given API key to: +Vous pouvez cliquer sur le "menu à trois points" à droite d'une clé API donnée pour : -- Rename API key -- Regenerate API key -- Delete API key -- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +- Renommer la clé API +- Régénérer la clé API +- Supprimer la clé API +- Gérer la limite de dépenses : il s'agit d'une limite de dépenses mensuelle facultative pour une clé API donnée, en USD. Cette limite s'applique à chaque période de facturation (mois civil). -### API Key Details +### Détails de la clé API -You can click on an individual API key to view the Details page: +Vous pouvez cliquer sur une clé API individuelle pour afficher la page des détails : -1. Under the **Overview** section, you can: +1. 
Dans la section **Aperçu**, vous pouvez : - Modifiez le nom de votre clé - Régénérer les clés API - Affichez l'utilisation actuelle de la clé API avec les statistiques : - Nombre de requêtes - Montant de GRT dépensé -2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: +2. Dans la section **Sécurité**, vous pouvez choisir des paramètres de sécurité en fonction du niveau de contrôle que vous souhaitez avoir. Plus précisément, vous pouvez : - Visualisez et gérez les noms de domaine autorisés à utiliser votre clé API - - Attribuez des subgraphs qui peuvent être interrogés avec votre clé API + - Attribuer des subgraphs qui peuvent être interrogés avec votre clé API diff --git a/website/src/pages/fr/subgraphs/querying/python.mdx b/website/src/pages/fr/subgraphs/querying/python.mdx index f8d2b0741c18..3e172e324351 100644 --- a/website/src/pages/fr/subgraphs/querying/python.mdx +++ b/website/src/pages/fr/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Interroger The Graph avec Python et Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds est une librairie Python utilisée pour les requêtes Subgraph. Cette librairie a été conçue par [Playgrounds](https://playgrounds.network/). Subgrounds permet de connecter directement les données d'un Subgraph à un environnement de données Python, permettant l'utilisation de librairies comme [pandas](https://pandas.pydata.org/) afin de faire de l'analyse de données! +Subgrounds est une bibliothèque Python intuitive pour l'interrogation des subgraphs, créée par [Playgrounds](https://playgrounds.network/). Elle vous permet de connecter directement les données des subgraphs à un environnement de données Python, ce qui vous permet d'utiliser des bibliothèques comme [pandas](https://pandas.pydata.org/) pour effectuer des analyses de données ! Subgrounds propose une API Python simplifiée afin de construire des requêtes GraphQL. 
Subgrounds automatise les workflows fastidieux comme la pagination, et donne aux utilisateurs avancés plus de pouvoir grâce à des transformations de schéma contrôlées. @@ -17,24 +17,24 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Une fois installé, vous pouvez tester Subgrounds avec la requête suivante. La requête ci-dessous récupère un Subgraph pour le protocole Aave v2 et interroge les 5 principaux marchés par TVL (Total Value Locked - Valeur Totale Verouillée), sélectionne leur nom et leur TVL (en USD) et renvoie les données sous forme de DataFrame Panda [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Une fois installé, vous pouvez tester Subgrounds avec la requête suivante. L'exemple suivant récupère un subgraph pour le protocole Aave v2 et interroge les 5 premiers marchés classés par TVL (Total Value Locked), sélectionne leur nom et leur TVL (en USD) et renvoie les données sous la forme d'un [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) pandas. ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Charge le Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# Construct the query +# Construit la requête latest_markets = aave_v2.Query.markets( orderBy=aave_v2.Market.totalValueLockedUSD, orderDirection='desc', first=5, ) -# Return query to a dataframe +# Renvoie la requête sous forme de dataframe sg.query_df([ latest_markets.name, latest_markets.totalValueLockedUSD, @@ -54,4 +54,4 @@ Subgrounds est développé et maintenu par l'équipe de [Playgrounds](https://pl - [Requêtes concurrentes](https://docs.playgrounds.network/subgrounds/getting_started/async/) - Améliorez vos requêtes en les parallélisant.
- [Export de données en CSVs](https://docs.playgrounds.network/subgrounds/faq/exporting/) - - A quick article on how to seamlessly save your data as CSVs for further analysis. + - Un court article sur la manière de sauvegarder facilement vos données au format CSV en vue d'une analyse ultérieure. diff --git a/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 91eb7ec02307..acd40aface24 100644 --- a/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Identifiant du Subgraph VS. Identifiant de déploiement --- -Un Subgraph est identifié par un identifiant Subgraph (Subpgraph ID), et chaque version de ce subgraph est identifiée par un identifiant de déploiement (Deployment ID). +Un subgraph est identifié par un ID de subgraph, et chaque version du subgraph est identifiée par un ID de déploiement. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +Lors de l'interrogation d'un subgraph, l'un ou l'autre ID peut être utilisé, bien qu'il soit généralement suggéré d'utiliser l'ID de déploiement, car il permet de cibler une version précise d'un subgraph. -Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) +Voici les principales différences entre les deux ID : ![](/img/subgraph-id-vs-deployment-id.png) ## Identifiant de déploiement -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`.
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +L'ID de déploiement est le hash IPFS du fichier manifeste compilé, qui fait référence à d'autres fichiers sur IPFS au lieu d'URL relatives sur l'ordinateur. Par exemple, le manifeste compilé est accessible via : `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. Pour modifier l'ID de déploiement, il suffit de mettre à jour le fichier de manifeste, en modifiant par exemple le champ de description comme décrit dans la [documentation du manifeste du subgraph](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +Lorsque des requêtes sont effectuées à l'aide de l'ID de déploiement d'un subgraph, nous spécifions une version de ce subgraph à interroger. L'utilisation de l'ID de déploiement pour interroger une version spécifique du subgraph donne lieu à une configuration plus sophistiquée et plus robuste, car il y a un contrôle total sur la version du subgraph interrogée. Toutefois, cela implique la nécessité de mettre à jour manuellement le code d'interrogation chaque fois qu'une nouvelle version du subgraph est publiée. 
Exemple d'endpoint utilisant l'identifiant de déploiement: @@ -20,8 +20,8 @@ Exemple d'endpoint utilisant l'identifiant de déploiement: ## Identifiant du Subgraph -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +L'ID du subgraph est un identifiant unique pour un subgraph. Il reste constant dans toutes les versions d'un subgraph. Il est recommandé d'utiliser l'ID du subgraph pour interroger la dernière version d'un subgraph, bien qu'il y ait quelques mises en garde. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Sachez que l'interrogation à l'aide de l'ID du Subgraph peut entraîner la réponse à des requêtes par une version plus ancienne du Subgraph, la nouvelle version ayant besoin d'un certain temps pour se synchroniser. De plus, les nouvelles versions peuvent introduire des changements de schéma incompatibles (breaking changes). -Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` +Exemple d'endpoint utilisant l'ID du subgraph : `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/fr/subgraphs/quick-start.mdx b/website/src/pages/fr/subgraphs/quick-start.mdx index 7f5b41aa8eaf..c227ec40ccc7 100644 --- a/website/src/pages/fr/subgraphs/quick-start.mdx +++ b/website/src/pages/fr/subgraphs/quick-start.mdx @@ -2,24 +2,24 @@ title: Démarrage rapide --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Apprenez à construire, publier et interroger facilement un [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) sur The Graph. -## Prerequisites +## Prérequis - Un portefeuille crypto -- A smart contract address on a [supported network](/supported-networks/) -- [Node.js](https://nodejs.org/) installed -- A package manager of your choice (`npm`, `yarn` or `pnpm`) +- Une adresse de contrat intelligent sur un [réseau pris en charge](/supported-networks/) +- [Node.js](https://nodejs.org/) installé +- Un gestionnaire de package de votre choix (`npm`, `yarn` ou `pnpm`) -## How to Build a Subgraph +## Comment construire un subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Créer un subgraph dans Subgraph Studio Accédez à [Subgraph Studio](https://thegraph.com/studio/) et connectez votre portefeuille. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio vous permet de créer, de gérer, de déployer et de publier des subgraphs, ainsi que de créer et de gérer des clés API. -Cliquez sur « Créer un subgraph ». Il est recommandé de nommer le subgraph en majuscule : « Nom du subgraph Nom de la chaîne ». +Cliquez sur « Créer un subgraph ». Il est recommandé de nommer le subgraph avec une majuscule à chaque mot : « Nom du subgraph Nom de la chaîne ». ### 2. Installez la CLI Graph @@ -37,56 +37,56 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialiser votre subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Vous trouverez les commandes pour votre subgraph spécifique sur la page du subgraph dans [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events.
+La commande `graph init` créera automatiquement un échafaudage de subgraph basé sur les événements de votre contrat. -The following command initializes your subgraph from an existing contract: +La commande suivante initialise votre subgraph à partir d'un contrat existant : ```sh graph init ``` -If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. +Si votre contrat est vérifié sur le scanner de blocs où il est déployé (comme [Etherscan](https://etherscan.io/)), l'ABI sera automatiquement créé dans la CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +Lorsque vous initialisez votre subgraph, la CLI vous demande les informations suivantes : -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. -- **Contract address**: Locate the smart contract address you’d like to query data from. -- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. -- **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. -- **Add another contract** (optional): You can add another contract. +- **Protocole** : Choisissez le protocole à partir duquel votre subgraph indexera les données.
+- **Subgraph slug** : Créez un nom pour votre subgraph. Votre nom de subgraph est un identifiant pour votre subgraph. +- **Répertoire** : Choisissez un répertoire dans lequel créer votre Subgraph. +- **Réseau Ethereum** (optionnel) : Vous pouvez avoir besoin de spécifier le réseau compatible EVM à partir duquel votre subgraph indexera les données. +- **Adresse du contrat** : Localisez l'adresse du contrat intelligent dont vous souhaitez interroger les données. +- **ABI** : Si l'ABI n'est pas renseigné automatiquement, vous devrez le saisir manuellement sous la forme d'un fichier JSON. +- **Bloc de départ** : Vous devez saisir le bloc de départ pour optimiser l'indexation des données de la blockchain par le Subgraph. Localisez le bloc de départ en trouvant le bloc où votre contrat a été déployé. +- **Nom du contrat** : Saisissez le nom de votre contrat. +- **Indexer les événements du contrat comme des entités** : Il est conseillé de mettre cette option à true, car elle ajoutera automatiquement des mappages à votre subgraph pour chaque événement émis. +- **Ajouter un autre contrat** (facultatif) : Vous pouvez ajouter un autre contrat. -La capture d'écran suivante donne un exemple de ce qui vous attend lors de l'initialisation de votre subgraph : +La capture d'écran suivante donne un exemple de ce à quoi on peut s'attendre lors de l'initialisation du subgraph : -![Subgraph command](/img/CLI-Example.png) +![Commande de subgraph](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Modifiez votre subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +La commande `init` de l'étape précédente crée un échafaudage de Subgraph que vous pouvez utiliser comme point de départ pour construire votre Subgraph.
-When making changes to the subgraph, you will mainly work with three files: +Lorsque vous modifiez le Subgraph, vous travaillez principalement avec trois fichiers : -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - définit les sources de données que votre Subgraph indexera. +- Schema (`schema.graphql`) - définit les données que vous souhaitez extraire du Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +Pour une description détaillée de la manière d'écrire votre Subgraph, consultez [Créer un Subgraph](/developing/creating-a-subgraph/). -### 5. Déployer votre subgraph +### 5. Déployez votre Subgraph -> Remember, deploying is not the same as publishing. +> N'oubliez pas que le déploiement n'est pas la même chose que la publication. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +Lorsque vous **déployez** un Subgraph, vous l'envoyez au [Subgraph Studio](https://thegraph.com/studio/), où vous pouvez le tester, le mettre en pré-production et le réviser.
L'indexation d'un Subgraph déployé est effectuée par l'[Indexeur de mise à niveau](https://thegraph.com/blog/upgrade-indexer/), qui est un indexeur unique détenu et exploité par Edge & Node, plutôt que par les nombreux Indexeurs décentralisés de The Graph Network. Un Subgraph **déployé** est gratuit, soumis à des limites de débit, non visible par le public et destiné à être utilisé à des fins de développement, de pré-production et de test. -Une fois votre subgraph écrit, exécutez les commandes suivantes : +Une fois que votre Subgraph est écrit, exécutez les commandes suivantes : ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authentifiez-vous et déployez votre subgraph. La clé de déploiement se trouve sur la page du subgraph dans Subgraph Studio. +Authentifiez-vous et déployez votre Subgraph. La clé de déploiement se trouve sur la page du Subgraph dans Subgraph Studio. ![Clé de déploiement](/img/subgraph-studio-deploy-key.jpg) @@ -107,39 +107,39 @@ graph deploy ``` ```` -The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. +La CLI demandera un label de version. Il est fortement recommandé d'utiliser [le versionnement sémantique](https://semver.org/), par exemple `0.0.1`. -### 6. Examiner votre subgraph +### 6. Examinez votre subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +Si vous souhaitez tester votre subgraph avant de le publier, vous pouvez utiliser [Subgraph Studio](https://thegraph.com/studio/) pour effectuer les opérations suivantes : - Exécuter un exemple de requête. -- Analyser votre subgraph dans le tableau de bord pour vérifier les informations. -- Vérifier les logs sur le tableau de bord pour voir si des erreurs surviennent avec votre subgraph.
Les logs d'un subgraph opérationnel ressembleront à ceci : +- Analysez votre subgraph dans le tableau de bord pour vérifier les informations. +- Vérifiez les logs sur le tableau de bord pour voir s'il y a des erreurs avec votre subgraph. Les logs d'un subgraph opérationnel ressemblent à ceci : ![Logs du subgraph](/img/subgraph-logs-image.png) -### 7. Publier votre subgraph sur The Graph Network⁠ +### 7. Publier votre subgraph sur The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +Lorsque votre subgraph est prêt pour un environnement de production, vous pouvez le publier sur le réseau décentralisé. La publication est une action onchain qui effectue les opérations suivantes : -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- Il rend votre subgraph disponible pour être indexé par les [Indexeurs](/indexing/overview/) décentralisés sur The Graph Network. +- Il supprime les limites de taux et rend votre subgraph publiquement consultable et interrogeable dans [Graph Explorer](https://thegraph.com/explorer/). +- Il met votre subgraph à la disposition des [Curateurs](/resources/roles/curating/) pour qu'ils le curent. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. 
+> Plus la quantité de GRT que vous et d'autres personnes curez dans votre subgraph est importante, plus les Indexeurs seront incités à indexer votre subgraph, ce qui améliorera la qualité du service, réduira la latence et renforcera la redondance du réseau pour votre subgraph. #### Publier avec Subgraph Studio⁠ -Pour publier votre subgraph, cliquez sur le bouton "Publish" dans le tableau de bord. +Pour publier votre subgraph, cliquez sur le bouton Publier dans le tableau de bord. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publier un subgraph sur Subgraph Studio](/img/publish-sub-transfer.png) -Sélectionnez le réseau sur lequel vous souhaitez publier votre subgraph. +Sélectionnez le réseau sur lequel vous souhaitez publier votre subgraph. #### Publication à partir de la CLI -À partir de la version 0.73.0, vous pouvez également publier votre subgraph avec Graph CLI. +Depuis la version 0.73.0, vous pouvez également publier votre subgraph à l'aide de Graph CLI. Ouvrez le `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. Une fenêtre s'ouvrira, vous permettant de connecter votre portefeuille, d'ajouter des métadonnées et de déployer votre subgraph finalisé sur le réseau de votre choix. +3. Une fenêtre s'ouvrira, vous permettant de connecter votre portefeuille, d'ajouter des métadonnées et de déployer votre subgraph finalisé sur le réseau de votre choix. ![cli-ui](/img/cli-ui.png) -To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +Pour personnaliser votre déploiement, voir [Publier un subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Ajout de signal à votre subgraph +#### Ajouter du signal à votre subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. Pour inciter les Indexeurs à interroger votre subgraph, vous devez y ajouter un signal de curation GRT.
- - Cette action améliore la qualité du service, réduit la latence et renforce la redondance et la disponibilité du réseau pour votre subgraph. + - Cette action améliore la qualité de service, réduit la latence et améliore la redondance et la disponibilité du réseau pour votre Subgraph. 2. Si éligibles aux récompenses d'indexation, les Indexeurs reçoivent des récompenses en GRT proportionnelles au montant signalé. - - Il est recommandé de curer au moins 3 000 GRT pour attirer 3 Indexeurs. Vérifiez l'éligibilité aux récompenses en fonction de l'utilisation des fonctionnalités du subgraph et des réseaux supportés. + - Il est recommandé de curer au moins 3 000 GRT pour attirer 3 Indexeurs. Vérifiez l'éligibilité aux récompenses en fonction de l'utilisation des fonctionnalités du subgraph et des réseaux pris en charge. -To learn more about curation, read [Curating](/resources/roles/curating/). +Pour en savoir plus sur la curation, lisez [Curating](/resources/roles/curating/). -Pour économiser sur les frais de gas, vous pouvez curer votre subgraph dans la même transaction que celle où vous le publiez en sélectionnant cette option : +Pour économiser des frais de gas, vous pouvez curer votre subgraph dans la même transaction que celle où vous le publiez en sélectionnant cette option : -![Subgraph publish](/img/studio-publish-modal.png) +![Publication de subgraph](/img/studio-publish-modal.png) ### 8. Interroger votre subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +Vous avez maintenant accès à 100 000 requêtes gratuites par mois avec votre subgraph sur The Graph Network ! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +Vous pouvez interroger votre subgraph en envoyant des requêtes GraphQL à son URL de requête, que vous trouverez en cliquant sur le bouton Requête.
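À titre d'illustration seulement — les noms d'entité et de champ ci-dessous sont hypothétiques et dépendent de votre `schema.graphql` — une requête GraphQL envoyée à l'URL de requête pourrait ressembler à ceci :

```graphql
{
  # « transfers » et ses champs sont des exemples fictifs :
  # remplacez-les par les entités définies dans votre propre schéma.
  transfers(first: 5, orderBy: blockNumber, orderDirection: desc) {
    id
    from
    to
    value
  }
}
```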
-For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +Pour plus d'informations sur l'interrogation des données de votre subgraph, lisez [Interroger The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/fr/substreams/_meta-titles.json b/website/src/pages/fr/substreams/_meta-titles.json index 6262ad528c3a..bd6a51423076 100644 --- a/website/src/pages/fr/substreams/_meta-titles.json +++ b/website/src/pages/fr/substreams/_meta-titles.json @@ -1,3 +1,3 @@ { - "developing": "Developing" + "developing": "Développement" } diff --git a/website/src/pages/fr/substreams/developing/_meta-titles.json b/website/src/pages/fr/substreams/developing/_meta-titles.json index 882ee9fc7c9c..05826edb5e9f 100644 --- a/website/src/pages/fr/substreams/developing/_meta-titles.json +++ b/website/src/pages/fr/substreams/developing/_meta-titles.json @@ -1,4 +1,4 @@ { "solana": "Solana", - "sinks": "Sink your Substreams" + "sinks": "Faites un Sink de vos Substreams" } diff --git a/website/src/pages/fr/substreams/developing/dev-container.mdx b/website/src/pages/fr/substreams/developing/dev-container.mdx index bd4acf16eec7..3e7814c857df 100644 --- a/website/src/pages/fr/substreams/developing/dev-container.mdx +++ b/website/src/pages/fr/substreams/developing/dev-container.mdx @@ -1,48 +1,48 @@ --- -title: Substreams Dev Container -sidebarTitle: Dev Container +title: Dev Container Substreams +sidebarTitle: Le Dev Container --- -Develop your first project with Substreams Dev Container. +Développez votre premier projet avec Substreams Dev Container. -## What is a Dev Container? +## Qu'est-ce qu'un Dev Container ? -It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). +C'est un outil qui vous aide à construire votre premier projet.
Vous pouvez l'utiliser à distance via les codespaces Github ou localement en clonant le [dépôt substreams-starter](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Dans le Dev Container, la commande `substreams init` met en place un projet Substreams dont le code est généré automatiquement, ce qui vous permet de construire facilement un subgraph ou une solution basée sur SQL pour le traitement des données. -## Prerequisites +## Prérequis -- Ensure Docker and VS Code are up-to-date. +- Assurez-vous que Docker et VS Code sont à jour. -## Navigating the Dev Container +## Naviguer dans le Dev Container -In the Dev Container, you can either build or import your own `substreams.yaml` and associate modules within the minimal path or opt for the automatically generated Substreams paths. Then, when you run the `Substreams Build` it will generate the Protobuf files. +Dans le Dev Container, vous pouvez soit construire ou importer votre propre `substreams.yaml` et associer des modules dans le chemin minimal, soit opter pour les chemins Substreams générés automatiquement. Ensuite, lorsque vous exécutez le `Substreams Build`, il génère les fichiers Protobuf. ### Options -- **Minimal**: Starts you with the raw block `.proto` and requires development. This path is intended for experienced users. -- **Non-Minimal**: Extracts filtered data using network-specific caches and Protobufs taken from corresponding foundational modules (maintained by the StreamingFast team). This path generates a working Substreams out of the box. +- **Minimal** : Vous fait démarrer avec le bloc brut `.proto` et nécessite du développement. Ce chemin est destiné aux utilisateurs expérimentés.
+- **Non-Minimal** : Extrait les données filtrées en utilisant les caches spécifiques au réseau et les Protobufs provenant des modules de base correspondants (maintenus par l'équipe StreamingFast). Ce chemin génère un Substreams fonctionnel prêt à l'emploi. -To share your work with the broader community, publish your `.spkg` to [Substreams registry](https://substreams.dev/) using: +Pour partager votre travail avec la communauté, publiez votre `.spkg` sur [Substreams registry](https://substreams.dev/) en utilisant : - `substreams registry login` - `substreams registry publish` -> Note: If you run into any problems within the Dev Container, use the `help` command to access trouble shooting tools. +> Note : Si vous rencontrez des problèmes dans le Dev Container, utilisez la commande `help` pour accéder aux outils de dépannage. -## Building a Sink for Your Project +## Construire un sink pour votre projet -You can configure your project to query data either through a Subgraph or directly from an SQL database: +Vous pouvez configurer votre projet pour qu'il interroge des données soit par l'intermédiaire d'un subgraph, soit directement à partir d'une base de données SQL : -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. +- **Subgraph** : Exécutez `substreams codegen subgraph`. Cela génère un projet avec des fichiers `schema.graphql` et `mappings.ts` de base. Vous pouvez les personnaliser pour définir des entités basées sur les données extraites par Substreams.
Pour plus de configurations, voir la [documentation du sink Subgraph](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **SQL** : Exécutez `substreams codegen sql` pour les requêtes basées sur SQL. Pour plus d'informations sur la configuration d'un sink SQL, consultez la [documentation SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). -## Deployment Options +## Options de déploiement -To deploy a Subgraph, you can either run the `graph-node` locally using the `deploy-local` command or deploy to Subgraph Studio by using the `deploy` command found in the `package.json` file. +Pour déployer un subgraph, vous pouvez soit exécuter le `graph-node` localement en utilisant la commande `deploy-local`, soit le déployer sur Subgraph Studio en utilisant la commande `deploy` qui se trouve dans le fichier `package.json`. -## Common Errors +## Erreurs courantes -- When running locally, make sure to verify that all Docker containers are healthy by running the `dev-status` command. -- If you put the wrong start-block while generating your project, navigate to the `substreams.yaml` to change the block number, then re-run `substreams build`. +- Lors d'une exécution locale, vérifiez que tous les conteneurs Docker sont sains en lançant la commande `dev-status`. +- Si vous avez indiqué un mauvais bloc de départ lors de la génération de votre projet, naviguez jusqu'à `substreams.yaml` pour changer le numéro de bloc, puis relancez `substreams build`. diff --git a/website/src/pages/fr/substreams/developing/sinks.mdx b/website/src/pages/fr/substreams/developing/sinks.mdx index 265c2e31b425..c56d379e996d 100644 --- a/website/src/pages/fr/substreams/developing/sinks.mdx +++ b/website/src/pages/fr/substreams/developing/sinks.mdx @@ -1,51 +1,51 @@ --- -title: Official Sinks +title: Faites un Sink de vos Substreams --- -Choose a sink that meets your project's needs. +Choisissez un sink qui répond aux besoins de votre projet.
## Aperçu -Once you find a package that fits your needs, you can choose how you want to consume the data. +Une fois que vous avez trouvé un package qui répond à vos besoins, vous pouvez choisir la façon dont vous voulez utiliser les données. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Les sinks sont des intégrations qui vous permettent d'envoyer les données extraites vers différentes destinations, telles qu'une base de données SQL, un fichier ou un Subgraph. ## Sinks -> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed. +> Remarque : certains sinks sont officiellement pris en charge par l'équipe de développement de StreamingFast (c'est-à-dire qu'ils bénéficient d'un soutien actif), mais d'autres sinks sont gérés par la communauté et leur prise en charge n'est pas garantie. -- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. -- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. -- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. -- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks. +- [Base de données SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) : Envoyez les données vers une base de données. +- [Subgraph](/sps/introduction/) : Configurez une API pour répondre à vos besoins en matière de données et hébergez-la sur The Graph Network.
+- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream) : Streamez des données en continu directement depuis votre application. +- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub) : Envoyez des données vers un topic PubSub. +- [Sinks communautaires](https://docs.substreams.dev/how-to-guides/sinks/community-sinks) : Découvrez des sinks de qualité maintenus par la communauté. -> Important: If you’d like your sink (e.g., SQL or PubSub) hosted for you, reach out to the StreamingFast team [here](mailto:sales@streamingfast.io). +> Important : Si vous souhaitez que votre sink (par exemple, SQL ou PubSub) soit hébergé pour vous, contactez l'équipe StreamingFast [ici](mailto:sales@streamingfast.io). -## Navigating Sink Repos +## Naviguer dans les Repos de Sink -### Official +### Officiel -| Name | Support | Maintainer | Source Code | +| Nom | Support | Responsable de la maintenance | Code Source | | --- | --- | --- | --- | | SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| SDK Go | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| SDK Rust | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| SDK JS | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | +| Store KV | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | | Prometheus | O | Pinax |
[substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | | Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | | CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | | PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | -### Community +### Communauté -| Name | Support | Maintainer | Source Code | +| Nom | Support | Responsable de la maintenance | Code Source | | --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| MongoDB | C | Communauté | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Fichiers | C | Communauté | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| Store KV | C | Communauté | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Communauté | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -- O = Official Support (by one of the main Substreams providers) -- C = Community Support +- O = Soutien officiel (par l'un des principaux fournisseurs de Substreams) +- C = Soutien de la Communauté diff --git a/website/src/pages/fr/substreams/developing/solana/account-changes.mdx b/website/src/pages/fr/substreams/developing/solana/account-changes.mdx index b295ffdce030..7211f25c5f6e 100644 --- a/website/src/pages/fr/substreams/developing/solana/account-changes.mdx +++ 
b/website/src/pages/fr/substreams/developing/solana/account-changes.mdx @@ -1,57 +1,57 @@ --- -title: Solana Account Changes -sidebarTitle: Account Changes +title: Modifications du compte Solana +sidebarTitle: Modifications du compte --- -Learn how to consume Solana account change data using Substreams. +Apprenez à consommer les données de modification de compte Solana en utilisant Substreams. ## Présentation -This guide walks you through the process of setting up your environment, configuring your first Substreams stream, and consuming account changes efficiently. By the end of this guide, you will have a working Substreams feed that allows you to track real-time account changes on the Solana blockchain, as well as historical account change data. +Ce guide vous accompagne dans le processus de mise en place de votre environnement, de configuration de votre premier flux Substreams et de consommation efficace des modifications de compte. À la fin de ce guide, vous aurez un flux Substreams opérationnel qui vous permettra de suivre les changements de compte en temps réel sur la blockchain Solana, ainsi que les données historiques des changements de compte. -> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. +> NOTE : L'historique des modifications de compte Solana remonte à 2025, bloc 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+Pour chaque bloc de comptes Substreams Solana, seule la dernière mise à jour par compte est enregistrée, voir la [Référence Protobuf](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). Si un compte est supprimé, une charge utile avec `deleted == True` est fournie. En outre, les événements de faible importance sont omis, tels que ceux dont le propriétaire spécial est le compte “Vote11111111…” ou les changements qui n'affectent pas les données du compte (par exemple, les changements de lamport). -> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. +> NOTE : Pour tester la latence de Substreams pour les comptes Solana, mesurée par la dérive des têtes de blocs, installez la [CLI Substreams](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) et exécutez `substreams run solana-common blocks_without_votes -s -1 -o clock`. ## Introduction -### Prerequisites +### Prérequis -Before you begin, ensure that you have the following: +Avant de commencer, assurez-vous que vous disposez des éléments suivants : -1. [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installed. -2. A [Substreams key](https://docs.substreams.dev/reference-material/substreams-cli/authentication) for access to the Solana Account Change data. -3. Basic knowledge of [how to use](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) the command line interface (CLI). +1. [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installé. +2. 
Une [clé Substreams](https://docs.substreams.dev/reference-material/substreams-cli/authentication) pour accéder aux données de modification du compte Solana (Solana Account Change). +3. Connaissance de base de [l'utilisation](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) de l'interface de ligne de commande (CLI). -### Step 1: Set Up a Connection to Solana Account Change Substreams +### Étape 1 : Établir une connexion au flux Substreams des modifications de compte Solana -Now that you have Substreams CLI installed, you can set up a connection to the Solana Account Change Substreams feed. +Maintenant que vous avez installé Substreams CLI, vous pouvez établir une connexion au flux Substreams des modifications de compte Solana. -- Using the [Solana Accounts Foundational Module](https://substreams.dev/packages/solana-accounts-foundational/latest), you can choose to stream data directly or use the GUI for a more visual experience. The following `gui` example filters for Honey Token account data. +- En utilisant le [Module de base du compte Solana](https://substreams.dev/packages/solana-accounts-foundational/latest), vous pouvez choisir de diffuser les données directement ou d'utiliser l'interface graphique pour une expérience plus visuelle. L'exemple `gui` suivant filtre les données du compte Honey Token. ```bash substreams gui solana-accounts-foundational filtered_accounts -t +10 -p filtered_accounts="owner:TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA || account:4vMsoUT2BWatFweudnQM1xedRLfJgJ7hswhcpz4xgBTy" ``` -- This command will stream account changes directly to your terminal. +- Cette commande permet de streamer les modifications apportées aux comptes directement dans votre terminal. ```bash substreams run solana-accounts-foundational filtered_accounts -s -1 -o clock ``` -The Foundational Module has support for filtering on specific accounts and/or owners. You can adjust the query based on your needs.
+Le module de base permet de filtrer des comptes et/ou des propriétaires spécifiques. Vous pouvez adapter la requête en fonction de vos besoins. -### Step 2: Sink the Substreams +### Étape 2 : Intégrer les Substreams -Consume the account stream [directly in your application](https://docs.substreams.dev/how-to-guides/sinks/stream) using a callback or make it queryable by using the [SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +Consommez le flux de modifications de compte [directement dans votre application](https://docs.substreams.dev/how-to-guides/sinks/stream) à l'aide d'un callback, ou rendez-le interrogeable en utilisant le [sink SQL-DB](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). -### Step 3: Setting up a Reconnection Policy +### Étape 3 : Mise en place d'une politique de reconnexion -[Cursor Management](https://docs.substreams.dev/reference-material/reliability-guarantees) ensures seamless continuity and retraceability by allowing you to resume from the last consumed block if the connection is interrupted. This functionality prevents data loss and maintains a persistent stream. +La [gestion du curseur](https://docs.substreams.dev/reference-material/reliability-guarantees) garantit une continuité et une traçabilité sans faille en vous permettant de reprendre à partir du dernier bloc consommé si la connexion est interrompue. Cette fonctionnalité permet d'éviter les pertes de données et de maintenir un flux persistant. 
-When creating or using a sink, the user's primary responsibility is to provide implementations of BlockScopedDataHandler and a BlockUndoSignalHandler implementation(s) which has the following interface: +Lors de la création ou de l'utilisation d'un sink, la responsabilité première de l'utilisateur est de fournir des implémentations de BlockScopedDataHandler et une ou plusieurs implémentations de BlockUndoSignalHandler qui ont l'interface suivante : ```go import ( diff --git a/website/src/pages/fr/substreams/developing/solana/transactions.mdx b/website/src/pages/fr/substreams/developing/solana/transactions.mdx index 762fc65ad792..4660c252afcf 100644 --- a/website/src/pages/fr/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/fr/substreams/developing/solana/transactions.mdx @@ -1,61 +1,61 @@ --- -title: Solana Transactions +title: Transactions Solana sidebarTitle: Transactions --- -Learn how to initialize a Solana-based Substreams project within the Dev Container. +Apprenez à initialiser un projet Substreams basé sur Solana dans le Dev Container. -> Note: This guide excludes [Account Changes](/substreams/developing/solana/account-changes/). +> Note : ce guide ne concerne pas les [Modifications de compte](/substreams/developing/solana/account-changes/). ## Options -If you prefer to begin locally within your terminal rather than through the Dev Container (VS Code required), refer to the [Substreams CLI installation guide](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). +Si vous préférez commencer localement dans votre terminal plutôt que par l'intermédiaire du Dev Container (VS Code requis), referez-vous au [Guide d'installation de l'interface CLI de Substreams](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). -## Step 1: Initialize Your Solana Substreams Project +## Étape 1 : Initialisation du projet Solana Substreams -1. 
Open the [Dev Container](https://github.com/streamingfast/substreams-starter) and follow the on-screen steps to initialize your project. +1. Ouvrez le [Dev Container](https://github.com/streamingfast/substreams-starter) et suivez les étapes à l'écran pour initialiser votre projet. -2. Running `substreams init` will give you the option to choose between two Solana project options. Select the best option for your project: - - **sol-minimal**: This creates a simple Substreams that extracts raw Solana block data and generates corresponding Rust code. This path will start you with the full raw block, and you can navigate to the `substreams.yaml` (the manifest) to modify the input. - - **sol-transactions**: This creates a Substreams that filters Solana transactions based on one or more Program IDs and/or Account IDs, using the cached [Solana Foundational Module](https://substreams.dev/streamingfast/solana-common/v0.3.0). - - **sol-anchor-beta**: This creates a Substreams that decodes instructions and events with an Anchor IDL. If an IDL isn’t available (reference [Anchor CLI](https://www.anchor-lang.com/docs/cli)), then you’ll need to provide it yourself. +2. L'exécution de `substreams init` vous donnera la possibilité de choisir parmi plusieurs options de projet Solana. Sélectionnez la meilleure option pour votre projet : + - **sol-minimal** : Ceci crée un simple Substreams qui extrait les données brutes du bloc Solana et génère le code Rust correspondant. Ce chemin démarre avec le bloc brut complet, et vous pouvez naviguer vers le `substreams.yaml` (le manifeste) pour modifier l'entrée. + - **sol-transactions** : Ceci crée un Substreams qui filtre les transactions Solana sur la base d'un ou plusieurs Program IDs et/ou Account IDs, en utilisant le [Module fondamental de Solana](https://substreams.dev/streamingfast/solana-common/v0.3.0) mis en cache. + - **sol-anchor-beta** : Ceci crée un Substreams qui décode les instructions et les événements avec un IDL Anchor.
Si un IDL n'est pas disponible (référence [Anchor CLI](https://www.anchor-lang.com/docs/cli)), vous devrez le fournir vous-même. -The modules within Solana Common do not include voting transactions. To gain a 75% reduction in data processing size and costs, delay your stream by over 1000 blocks from the head. This can be done using the [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) function in Rust. +Les modules de Solana Common ne comprennent pas de transactions de vote. Pour obtenir une réduction de 75 % de la taille et des coûts de traitement des données, retardez votre flux de plus de 1000 blocs à partir de la tête. Cela peut être fait en utilisant la fonction [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) de Rust. -To access voting transactions, use the full Solana block, `sf.solana.type.v1.Block`, as input. +Pour accéder aux transactions de vote, utilisez le bloc Solana complet, `sf.solana.type.v1.Block`, comme entrée. -## Step 2: Visualize the Data +## Étape 2 : Visualiser les données -1. Run `substreams auth` to create your [account](https://thegraph.market/) and generate an authentication token (JWT), then pass this token back as input. +1. Exécutez `substreams auth` pour créer votre [compte](https://thegraph.market/) et générer un jeton d'authentification (JWT), puis renvoyez ce jeton en entrée. -2. Now you can freely use the `substreams gui` to visualize and iterate on your extracted data. +2. Vous pouvez maintenant utiliser librement la commande `substreams gui` pour visualiser et itérer sur vos données extraites. -## Step 2.5: (Optionally) Transform the Data +## Étape 2.5 : Transformer (éventuellement) les données -Within the generated directories, modify your Substreams modules to include additional filters, aggregations, and transformations, then update the manifest accordingly.
+Dans les répertoires générés, modifiez vos modules Substreams pour inclure des filtres, des agrégations et des transformations supplémentaires, puis mettez à jour le manifeste en conséquence. -## Step 3: Load the Data +## Étape 3 : Charger les données -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +Pour rendre vos Substreams interrogeables (par opposition au [streaming direct](https://docs.substreams.dev/how-to-guides/sinks/stream)), vous pouvez générer automatiquement un [Subgraph alimenté par Substreams](/sps/introduction/) ou un sink SQL-DB. ### Subgraphe -1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. -3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. +1. Exécutez `substreams codegen subgraph` pour initialiser le sink, en produisant les fichiers et les définitions de fonctions nécessaires. +2. Créez vos [Mappages de Subgraphs](/sps/triggers/) dans le fichier `mappings.ts` et les entités associées dans le fichier `schema.graphql`. +3. Construisez et déployez localement ou vers [Subgraph Studio](https://thegraph.com/studio-pricing/) en exécutant `deploy-studio`. ### SQL -1. Run `substreams codegen sql` and choose from either ClickHouse or Postgres to initialize the sink, producing the necessary files. -2. Run `substreams build` build the [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) sink. -3. Run `substreams-sink-sql` to sink the data into your selected SQL DB. +1.
Exécutez `substreams codegen sql` et choisissez entre ClickHouse et Postgres pour initialiser le sink, en produisant les fichiers nécessaires. +2. Exécutez `substreams build` pour construire le sink [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +3. Exécutez `substreams-sink-sql` pour transférer les données dans la base de données SQL choisie. -> Note: Run `help` to better navigate the development environment and check the health of containers. +> Note : Lancez `help` pour mieux naviguer dans l'environnement de développement et vérifier l'état des conteneurs. ## Ressources supplémentaires -You may find these additional resources helpful for developing your first Solana application. +Ces ressources supplémentaires peuvent vous être utiles pour développer votre première application Solana. -- The [Dev Container Reference](/substreams/developing/dev-container/) helps you navigate the container and its common errors. -- The [CLI reference](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) lets you explore all the tools available in the Substreams CLI. -- The [Components Reference](https://docs.substreams.dev/reference-material/substreams-components/packages) dives deeper into navigating the `substreams.yaml`. +- La [Référence du Dev Container](/substreams/developing/dev-container/) vous aide à naviguer dans le conteneur et ses erreurs courantes. +- La [référence CLI](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) vous permet d'explorer tous les outils disponibles dans la CLI de Substreams. +- La [Référence des composants](https://docs.substreams.dev/reference-material/substreams-components/packages) permet d'approfondir la navigation dans le fichier `substreams.yaml`. 
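Les étapes ci-dessus font plusieurs fois référence au manifeste `substreams.yaml`. À titre indicatif — les noms de package, de module et le type de sortie ci-dessous sont hypothétiques et à adapter à votre projet — un manifeste minimal ressemble à ceci :

```yaml
specVersion: v0.1.0
package:
  name: mon_projet_solana        # nom hypothétique
  version: v0.1.0
modules:
  - name: map_filtered_transactions   # nom de module hypothétique
    kind: map
    initialBlock: 310629601      # bloc de départ : à corriger ici, puis relancer `substreams build`
    inputs:
      - source: sf.solana.type.v1.Block
    output:
      type: proto:exemple.v1.MaSortie   # type de sortie donné à titre d'exemple
```

C'est ce champ `initialBlock` qu'il faut modifier si vous avez indiqué un mauvais bloc de départ lors de la génération du projet.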
diff --git a/website/src/pages/fr/substreams/introduction.mdx b/website/src/pages/fr/substreams/introduction.mdx index 8e17afebc2a0..1f37496ab7c0 100644 --- a/website/src/pages/fr/substreams/introduction.mdx +++ b/website/src/pages/fr/substreams/introduction.mdx @@ -1,26 +1,26 @@ --- -title: Introduction to Substreams +title: Introduction à Substreams sidebarTitle: Présentation --- -![Substreams Logo](/img/substreams-logo.png) +![Logo de Substreams](/img/substreams-logo.png) -To start coding right away, check out the [Substreams Quick Start](/substreams/quick-start/). +Pour commencer à coder tout de suite, consultez le [Démarrage rapide de Substreams](/substreams/quick-start/). ## Aperçu -Substreams is a powerful parallel blockchain indexing technology designed to enhance performance and scalability within The Graph Network. +Substreams est une puissante technologie d'indexation parallèle de la blockchain conçue pour améliorer les performances et l'évolutivité au sein de The Graph Network. -## Substreams Benefits +## Avantages de Substreams -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. -- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. -- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. -- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. +- **Indexation accélérée** : Accélérez l'indexation des subgraphs grâce à un moteur parallélisé pour une récupération et un traitement plus rapides des données. +- **Prise en charge de plusieurs blockchains** : Étendez les capacités d'indexation au-delà des blockchains basées sur EVM, en prenant en charge des écosystèmes tels que Solana, Injective, Starknet et Vara. 
+- **Modèle de données amélioré** : Accédez à des données complètes, y compris les données de niveau `trace` sur EVM ou les changements de compte sur Solana, tout en gérant efficacement les forks/déconnexions. +- **Support multi-sink :** Pour Subgraph, base de données Postgres, Clickhouse et base de données Mongo. ## Le fonctionnement de Substreams en 4 étapes -1. You write a Rust program, which defines the transformations that you want to apply to the blockchain data. For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash). +1. Vous écrivez un programme Rust, qui définit les transformations que vous souhaitez appliquer aux données de la blockchain. Par exemple, la fonction Rust suivante extrait les informations pertinentes d'un bloc Ethereum (numéro, hash et hash parent). ```rust fn get_my_block(blk: Block) -> Result { @@ -34,12 +34,12 @@ fn get_my_block(blk: Block) -> Result { } ``` -2. You wrap up your Rust program into a WASM module just by running a single CLI command. +2. Il suffit d'exécuter une seule commande CLI pour transformer votre programme Rust en un module WASM. -3. The WASM container is sent to a Substreams endpoint for execution. The Substreams provider feeds the WASM container with the blockchain data and the transformations are applied. +3. Le conteneur WASM est envoyé à un endpoint Substreams pour exécution. Le fournisseur Substreams alimente le conteneur WASM avec les données de la blockchain et les transformations sont appliquées. -4. You select a [sink](https://docs.substreams.dev/how-to-guides/sinks), a place where you want to send the transformed data (such as a SQL database or a Subgraph). +4. Vous sélectionnez un [sink](https://docs.substreams.dev/how-to-guides/sinks), un endroit où vous souhaitez envoyer les données transformées (comme une base de données SQL ou un subgraph). 
## Ressources supplémentaires -All Substreams developer documentation is maintained by the StreamingFast core development team on the [Substreams registry](https://docs.substreams.dev). +Toute la documentation destinée aux développeurs de Substreams est conservée par l'équipe de développement de StreamingFast sur le [Registre Substreams](https://docs.substreams.dev). diff --git a/website/src/pages/fr/substreams/publishing.mdx b/website/src/pages/fr/substreams/publishing.mdx index eecb92d0d48b..6059a7e26c8a 100644 --- a/website/src/pages/fr/substreams/publishing.mdx +++ b/website/src/pages/fr/substreams/publishing.mdx @@ -1,53 +1,53 @@ --- -title: Publishing a Substreams Package -sidebarTitle: Publishing +title: Publication d'un package Substreams +sidebarTitle: Publication --- -Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). +Apprenez à publier un package Substreams sur le [Registre Substreams](https://substreams.dev). ## Aperçu -### What is a package? +### Qu'est-ce qu'un package ? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +Un package Substreams est un fichier binaire précompilé qui définit les données spécifiques que vous souhaitez extraire de la blockchain, similaire au fichier `mapping.ts` dans les Subgraphs traditionnels. -## Publish a Package +## Publier un package -### Prerequisites +### Prérequis -- You must have the Substreams CLI installed. -- You must have a Substreams package (`.spkg`) that you want to publish. +- La CLI de Substreams doit être installée. +- Vous devez avoir un package Substreams (`.spkg`) que vous voulez publier. -### Step 1: Run the `substreams publish` Command +### Étape 1 : Exécuter la commande `substreams publish` -1. In a command-line terminal, run `substreams publish .spkg`. +1. 
Dans un terminal de ligne de commande, lancez `substreams publish .spkg`. -2. If you do not have a token set in your computer, navigate to `https://substreams.dev/me`. +2. Si vous n'avez pas de jeton configuré sur votre ordinateur, naviguez vers `https://substreams.dev/me`. -![get token](/img/1_get-token.png) +![obtenir un jeton](/img/1_get-token.png) -### Step 2: Get a Token in the Substreams Registry +### Étape 2 : Obtenir un jeton dans le registre de Substreams -1. In the Substreams Registry, log in with your GitHub account. +1. Dans le registre Substreams, connectez-vous avec votre compte GitHub. -2. Create a new token and copy it in a safe location. +2. Créez un nouveau jeton et copiez-le dans un endroit sûr. -![new token](/img/2_new_token.png) +![nouveau jeton](/img/2_new_token.png) -### Step 3: Authenticate in the Substreams CLI +### Étape 3 : S'authentifier dans la CLI de Substreams -1. Back in the Substreams CLI, paste the previously generated token. +1. De retour dans la CLI de Substreams, collez le jeton généré précédemment. -![paste token](/img/3_paste_token.png) +![collez le jeton](/img/3_paste_token.png) -2. Lastly, confirm that you want to publish the package. +2. Enfin, confirmez que vous souhaitez publier le package. -![confirm](/img/4_confirm.png) +![confirmer](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +C'est terminé ! Vous avez publié avec succès un package dans le registre Substreams. -![success](/img/5_success.png) +![succès](/img/5_success.png) ## Ressources supplémentaires -Visit [Substreams](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. +Visitez [Substreams](https://substreams.dev/) pour découvrir une collection croissante de packages Substreams prêts à l'emploi sur différents réseaux de blockchain. 
diff --git a/website/src/pages/fr/substreams/quick-start.mdx b/website/src/pages/fr/substreams/quick-start.mdx index ad7774b5102e..75da28206cb5 100644 --- a/website/src/pages/fr/substreams/quick-start.mdx +++ b/website/src/pages/fr/substreams/quick-start.mdx @@ -3,28 +3,28 @@ title: Démarrage rapide des Substreams sidebarTitle: Démarrage rapide --- -Discover how to utilize ready-to-use substream packages or develop your own. +Découvrez comment utiliser des packages substream prêts à l'emploi ou développer vos propres packages. ## Aperçu -Integrating Substreams can be quick and easy. They are permissionless, and you can [obtain a key here](https://thegraph.market/) without providing personal information to start streaming on-chain data. +L'intégration de Substreams peut être rapide et facile. Ils ne nécessitent aucune autorisation, et vous pouvez [obtenir une clé ici](https://thegraph.market/) sans fournir d'informations personnelles pour commencer à streamer des données onchain. ## Commencez à développer -### Use Substreams Packages +### Utiliser les packages Substreams -There are many ready-to-use Substreams packages available. You can explore these packages by visiting the [Substreams Registry](https://substreams.dev) and [sinking them](/substreams/developing/sinks/). The registry lets you search for and find any package that meets your needs. +Il existe de nombreux packages Substreams prêts à l'emploi. Vous pouvez les découvrir en visitant le [Registre Substreams](https://substreams.dev) et les [consommer via un sink](/substreams/developing/sinks/). Le registre vous permet de rechercher et de trouver n'importe quel package répondant à vos besoins. -Once you find a package that fits your needs, you can choose how you want to consume the data: +Une fois que vous avez trouvé un package qui répond à vos besoins, vous pouvez choisir la façon dont vous voulez utiliser les données : -- **[Subgraph](/sps/introduction/)**: Configure an API to meet your data needs and host it on The Graph Network. 
-- **[SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Send the data to a database. -- **[Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Stream data directly to your application. -- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Send data to a PubSub topic. +- **[Subgraph](/sps/introduction/)** : Configurez une API pour répondre à vos besoins en matière de données et hébergez-la sur The Graph Network. +- **[Base de données SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)** : Envoyez les données vers une base de données. +- **[Streaming direct](https://docs.substreams.dev/how-to-guides/sinks/stream)** : Streamez des données en continu directement dans votre application. +- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)** : Envoyez des données vers un topic PubSub. -### Develop Your Own +### Développez le vôtre -If you can't find a Substreams package that meets your specific needs, you can develop your own. Substreams are built with Rust, so you'll write functions that extract and filter the data you need from the blockchain. To get started, check out the following tutorials: +Si vous ne trouvez pas de package Substreams qui réponde à vos besoins spécifiques, vous pouvez développer le vôtre. Substreams est construit avec Rust, vous écrirez donc des fonctions qui extrairont et filtreront les données dont vous avez besoin à partir de la blockchain. 
Pour commencer, consultez les tutoriels suivants : - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Solana](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-solana) @@ -32,11 +32,11 @@ If you can't find a Substreams package that meets your specific needs, you can d - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) -To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/). +Pour construire et optimiser vos Substreams à partir de zéro, utilisez le chemin minimal dans le [conteneur de développement](/substreams/developing/dev-container/). -> Note: Substreams guarantees that you'll [never miss data](https://docs.substreams.dev/reference-material/reliability-guarantees) with a simple reconnection policy. +> Remarque : Substreams garantit que vous ne [manquerez jamais de données](https://docs.substreams.dev/reference-material/reliability-guarantees) grâce à une politique de reconnexion simple. ## Ressources supplémentaires -- For additional guidance, reference the [Tutorials](https://docs.substreams.dev/tutorials/intro-to-tutorials) and follow the [How-To Guides](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) on Streaming Fast docs. -- For a deeper understanding of how Substreams works, explore the [architectural overview](https://docs.substreams.dev/reference-material/architecture) of the data service. +- Pour obtenir des conseils supplémentaires, consultez les [Tutoriels](https://docs.substreams.dev/tutorials/intro-to-tutorials) et suivez les [Guides pratiques](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) dans la documentation de StreamingFast. 
+- Pour mieux comprendre le fonctionnement de Substreams, consultez la [vue d'ensemble de l'architecture](https://docs.substreams.dev/reference-material/architecture) du service de données. diff --git a/website/src/pages/fr/supported-networks.mdx b/website/src/pages/fr/supported-networks.mdx index 604164f84cc6..c1b6ee3fd39c 100644 --- a/website/src/pages/fr/supported-networks.mdx +++ b/website/src/pages/fr/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio repose sur la stabilité et la fiabilité des technologies sous-jacentes, comme les endpoints JSON-RPC, Firehose et Substreams. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- Si un subgraph a été publié via la CLI et repris par un Indexer, il pourrait techniquement être interrogé même sans support, et des efforts sont en cours pour simplifier davantage l'intégration de nouveaux réseaux. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Exécution de Graph Node en local If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node peut également indexer d'autres protocoles via une intégration Firehose. Des intégrations Firehose ont été créées pour NEAR, Arweave et les réseaux basés sur Cosmos. 
De plus, Graph Node peut prendre en charge les subgraphs alimentés par Substreams pour tout réseau prenant en charge Substreams. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/fr/token-api/_meta-titles.json b/website/src/pages/fr/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/fr/token-api/_meta-titles.json +++ b/website/src/pages/fr/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/fr/token-api/_meta.js b/website/src/pages/fr/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/fr/token-api/_meta.js +++ b/website/src/pages/fr/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/fr/token-api/faq.mdx b/website/src/pages/fr/token-api/faq.mdx new file mode 100644 index 000000000000..55125891c079 --- /dev/null +++ b/website/src/pages/fr/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Général + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? 
+ +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? 
+ +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? 
+ +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? 
+ +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/fr/token-api/mcp/claude.mdx b/website/src/pages/fr/token-api/mcp/claude.mdx index 0da8f2be031d..3c7c756d5b31 100644 --- a/website/src/pages/fr/token-api/mcp/claude.mdx +++ b/website/src/pages/fr/token-api/mcp/claude.mdx @@ -3,7 +3,7 @@ title: Using Claude Desktop to Access the Token API via MCP sidebarTitle: Claude Desktop --- -## Prerequisites +## Prérequis - [Claude Desktop](https://claude.ai/download) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## Configuration Create or edit your `claude_desktop_config.json` file. 
@@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/fr/token-api/mcp/cline.mdx b/website/src/pages/fr/token-api/mcp/cline.mdx index ab54c0c8f6f0..e4952d58a1d9 100644 --- a/website/src/pages/fr/token-api/mcp/cline.mdx +++ b/website/src/pages/fr/token-api/mcp/cline.mdx @@ -3,16 +3,16 @@ title: Using Cline to Access the Token API via MCP sidebarTitle: Cline --- -## Prerequisites +## Prérequis - [Cline](https://cline.bot/) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. - The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## Configuration Create or edit your `cline_mcp_settings.json` file. 
diff --git a/website/src/pages/fr/token-api/mcp/cursor.mdx b/website/src/pages/fr/token-api/mcp/cursor.mdx index 658108d1337b..ae68e7ff6cf9 100644 --- a/website/src/pages/fr/token-api/mcp/cursor.mdx +++ b/website/src/pages/fr/token-api/mcp/cursor.mdx @@ -3,7 +3,7 @@ title: Using Cursor to Access the Token API via MCP sidebarTitle: Cursor --- -## Prerequisites +## Prérequis - [Cursor](https://www.cursor.com/) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## Configuration Create or edit your `~/.cursor/mcp.json` file. diff --git a/website/src/pages/fr/token-api/quick-start.mdx b/website/src/pages/fr/token-api/quick-start.mdx index 4653c3d41ac6..4a38a878fd7c 100644 --- a/website/src/pages/fr/token-api/quick-start.mdx +++ b/website/src/pages/fr/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Démarrage rapide --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) @@ -11,7 +11,7 @@ The Graph's Token API lets you access blockchain token information via a GET req The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. -## Prerequisites +## Prérequis Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. 
diff --git a/website/src/pages/hi/about.mdx b/website/src/pages/hi/about.mdx index 7f9feff0a53e..98bf7a76374e 100644 --- a/website/src/pages/hi/about.mdx +++ b/website/src/pages/hi/about.mdx @@ -30,25 +30,25 @@ Alternatively, you have the option to set up your own server, process the transa ## The Graph एक समाधान प्रदान करता है -The Graph इस चुनौती को एक विकेन्द्रीकृत प्रोटोकॉल के माध्यम से हल करता है जो ब्लॉकचेन डेटा को इंडेक्स करता है और उसकी कुशल और उच्च-प्रदर्शन वाली क्वेरी करने की सुविधा प्रदान करता है। ये एपीआई (इंडेक्स किए गए "सबग्राफ") फिर एक मानक GraphQL एपीआई के साथ क्वेरी की जा सकती हैं। +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. आज एक विकेंद्रीकृत प्रोटोकॉल है, जो [Graph Node](https://github.com/graphprotocol/graph-node) के ओपन सोर्स इम्प्लीमेंटेशन द्वारा समर्थित है, जो इस प्रक्रिया को सक्षम बनाता है। ### The Graph कैसे काम करता है -ब्लॉकचेन डेटा को इंडेक्स करना बहुत मुश्किल होता है, लेकिन The Graph इसे आसान बना देता है। The Graph सबग्राफ्स का उपयोग करके एथेरियम डेटा को इंडेक्स करना सीखता है। सबग्राफ्स ब्लॉकचेन डेटा पर बनाए गए कस्टम एपीआई होते हैं, जो ब्लॉकचेन से डेटा निकालते हैं, उसे प्रोसेस करते हैं, और उसे इस तरह स्टोर करते हैं ताकि उसे GraphQL के माध्यम से आसानी से क्वेरी किया जा सके। +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### विशिष्टताएँ -- The Graph का उपयोग subgraph विवरणों के लिए करता है, जिन्हें subgraph के अंदर subgraph manifest के रूप में जाना जाता है। +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. 
-- सबग्राफ विवरण उन स्मार्ट कॉन्ट्रैक्ट्स की रूपरेखा प्रदान करता है जो एक सबग्राफ के लिए महत्वपूर्ण हैं, उन कॉन्ट्रैक्ट्स के भीतर कौन-कौन सी घटनाओं पर ध्यान केंद्रित करना है, और घटना डेटा को उस डेटा से कैसे मैप करना है जिसे The Graph अपने डेटाबेस में संग्रहीत करेगा। +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- जब आप एक subgraph बना रहे होते हैं, तो आपको एक subgraph मैनिफेस्ट लिखने की आवश्यकता होती है। +- When creating a Subgraph, you need to write a Subgraph manifest. -- `Subgraph manifest` लिखने के बाद, आप Graph CLI का उपयोग करके परिभाषा को IPFS में संग्रहीत कर सकते हैं और एक Indexer को उस subgraph के लिए डेटा को इंडेक्स करने का निर्देश दे सकते हैं। +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -नीचे दिया गया आरेख Ethereum लेनदेन के साथ subgraph मैनिफेस्ट को डिप्लॉय करने के बाद डेटा के प्रवाह के बारे में अधिक विस्तृत जानकारी प्रदान करता है। +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![एक ग्राफ़िक समझाता है कि कैसे ग्राफ़ डेटा उपभोक्ताओं को क्वेरीज़ प्रदान करने के लिए ग्राफ़ नोड का उपयोग करता है](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The Graph इस चुनौती को एक विकेन्द्री 1. एक विकेंद्रीकृत एप्लिकेशन स्मार्ट अनुबंध पर लेनदेन के माध्यम से एथेरियम में डेटा जोड़ता है। 2. लेन-देन संसाधित करते समय स्मार्ट अनुबंध एक या अधिक घटनाओं का उत्सर्जन करता है। -3. ग्राफ़ नोड लगातार नए ब्लॉकों के लिए एथेरियम को स्कैन करता है और आपके सबग्राफ के डेटा में शामिल हो सकता है। -4. 
ग्राफ नोड इन ब्लॉकों में आपके सबग्राफ के लिए एथेरियम ईवेंट ढूंढता है और आपके द्वारा प्रदान किए गए मैपिंग हैंडलर को चलाता है। मैपिंग एक WASM मॉड्यूल है जो एथेरियम घटनाओं के जवाब में ग्राफ़ नोड द्वारा संग्रहीत डेटा संस्थाओं को बनाता या अपडेट करता है। +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. नोड के [GraphQL समापन बिंदु](https://graphql.org/learn/) का उपयोग करते हुए, विकेन्द्रीकृत एप्लिकेशन ब्लॉकचैन से अनुक्रमित डेटा के लिए ग्राफ़ नोड से पूछताछ करता है। ग्राफ़ नोड बदले में इस डेटा को प्राप्त करने के लिए, स्टोर की इंडेक्सिंग क्षमताओं का उपयोग करते हुए, अपने अंतर्निहित डेटा स्टोर के लिए ग्राफ़कॉल प्रश्नों का अनुवाद करता है। विकेंद्रीकृत एप्लिकेशन इस डेटा को एंड-यूजर्स के लिए एक समृद्ध यूआई में प्रदर्शित करता है, जिसका उपयोग वे एथेरियम पर नए लेनदेन जारी करने के लिए करते हैं। चक्र दोहराता है। ## अगले कदम -निम्नलिखित अनुभागों में subgraphs, उनके डिप्लॉयमेंट और डेटा क्वेरी करने के तरीके पर अधिक गहराई से जानकारी दी गई है। +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -अपना खुद का subgraph लिखने से पहले, यह अनुशंसा की जाती है कि आप [Graph Explorer](https://thegraph.com/explorer) को एक्सप्लोर करें और पहले से डिप्लॉय किए गए कुछ subgraphs की समीक्षा करें। प्रत्येक subgraph के पेज में एक GraphQL प्लेग्राउंड शामिल होता है, जिससे आप उसके डेटा को क्वेरी कर सकते हैं। +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. 
diff --git a/website/src/pages/hi/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/hi/archived/arbitrum/arbitrum-faq.mdx index 35afafb65cd3..ee970e360d2e 100644 --- a/website/src/pages/hi/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/hi/archived/arbitrum/arbitrum-faq.mdx @@ -2,21 +2,21 @@ title: Arbitrum FAQ --- -Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. +यदि आप आर्बिट्रम बिलिंग एफएक्यू पर जाना चाहते हैं तो [यहाँ](#billing-on-arbitrum-faqs) पर क्लिक करें। ## The Graph ने L2 समाधान को लागू करने का कारण क्या था? L2 पर The Graph को स्केल करके, नेटवर्क के प्रतिभागी अब निम्नलिखित लाभ उठा सकते हैं: -- Upwards of 26x savings on gas fees +- गैस शुल्क पर 26 गुना से अधिक की बचत -- Faster transaction speed +- तेज़ लेनदेन गति -- Security inherited from Ethereum +- सुरक्षा एथेरियम से विरासत में मिली है -L2 पर प्रोटोकॉल स्मार्ट कॉन्ट्रैक्ट्स को स्केल करने से नेटवर्क के प्रतिभागियों को गैस शुल्क में कमी के साथ अधिक बार इंटरैक्ट करने की अनुमति मिलती है। उदाहरण के लिए, Indexer अधिक बार आवंटन खोल और बंद कर सकते हैं ताकि अधिक सबग्राफ़ को इंडेक्स किया जा सके। डेवलपर्स सबग्राफ़ को अधिक आसानी से तैनात और अपडेट कर सकते हैं, और डेलीगेटर्स अधिक बार GRT को डेलीगेट कर सकते हैं। क्यूरेटर अधिक सबग्राफ़ में सिग्नल जोड़ या हटा सकते हैं—ऐसे कार्य जो पहले गैस की उच्च लागत के कारण अक्सर करना बहुत महंगा माना जाता था। +स्केलिंग प्रोटोकॉल स्मार्ट contract को L2 पर ले जाने से नेटवर्क प्रतिभागियों को कम गैस शुल्क में अधिक बार इंटरैक्ट करने की सुविधा मिलती है। उदाहरण के लिए, Indexers अधिक सबग्राफ को इंडेक्स करने के लिए अधिक बार आवंटन खोल और बंद कर सकते हैं। डेवलपर्स अधिक आसानी से सबग्राफ को डिप्लॉय और अपडेट कर सकते हैं, और Delegators अधिक बार GRT डेलीगेट कर सकते हैं। Curators अधिक संख्या में सबग्राफ में सिग्नल जोड़ या हटा सकते हैं—जो पहले गैस लागत के कारण बार-बार करना महंगा माना जाता था। -The Graph community decided to move forward with Arbitrum last year after the outcome of the
[GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. +ग्राफ समुदाय ने पिछले साल [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) चर्चा के नतीजे के बाद आर्बिट्रम के साथ आगे बढ़ने का फैसला किया। ## What do I need to do to use The Graph on L2? @@ -35,11 +35,11 @@ The Graph का बिलिंग सिस्टम Arbitrum पर GRT क एक बार जब आपके पास Arbitrum पर GRT हो, तो आप इसे अपनी बिलिंग बैलेंस में जोड़ सकते हैं। -To take advantage of using The Graph on L2, use this dropdown switcher to toggle between chains. +L2 पर The Graph का उपयोग करने का लाभ उठाने के लिए, इस dropdown switcher का उपयोग chains के बीच toggle करने के लिए करें। ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Subgraph developer, data consumer, Indexer, Curator, or Delegator, के रूप में, मुझे अब क्या करने की आवश्यकता है? +## एक सबग्राफ developer, data consumer, Indexer, Curator, या Delegator के रूप में, अब मुझे क्या करना चाहिए? The Graph Network में भाग लेने के लिए नेटवर्क प्रतिभागियों को Arbitrum पर स्थानांतरित होना आवश्यक है। अतिरिक्त सहायता के लिए कृपया [L2 Transfer Tool मार्गदर्शक](/archived/arbitrum/l2-transfer-tools-guide/) देखें। @@ -51,29 +51,29 @@ The Graph Network में भाग लेने के लिए नेटव हर चीज़ का पूरी तरह से परीक्षण किया गया है, और एक सुरक्षित और निर्बाध संक्रमण सुनिश्चित करने के लिए एक आकस्मिक योजना बनाई गई है। विवरण [यहां](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20) पाया जा सकता है। -## क्या Ethereum पर मौजूद सबग्राफ़ काम कर रहे हैं? +## क्या मौजूदा सबग्राफ Ethereum पर काम कर रहे हैं?
-सभी सबग्राफ अब Arbitrum पर हैं। कृपया [ L2 Transfer Tool मार्गदर्शक](/archived/arbitrum/l2-transfer-tools-guide/) का संदर्भ लें ताकि आपके सबग्राफ बिना किसी समस्या के कार्य करें। +सभी सबग्राफ अब Arbitrum पर हैं। कृपया [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) देखें ताकि आपके सबग्राफ बिना किसी समस्या के कार्य कर सकें। ## क्या GRT का एक नया स्मार्ट कॉन्ट्रैक्ट Arbitrum पर तैनात किया गया है? -Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. +हां, जीआरटी के पास एक अतिरिक्त [आर्बिट्रम पर स्मार्ट अनुबंध](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) है। हालाँकि, एथेरियम मेननेट [जीआरटी अनुबंध](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) चालू रहेगा। ## Arbitrum पर बिलिंग FAQs -## What do I need to do about the GRT in my billing balance? +## मुझे अपने billing balance में GRT के बारे में क्या करना होगा? -Nothing! Your GRT has been securely migrated to Arbitrum and is being used to pay for queries as you read this. +कुछ नहीं! आपके GRT को Arbitrum में सुरक्षित रूप से migrate कर दिया गया है और जब आप इसे पढ़ रहे हैं तो इसका उपयोग queries के भुगतान के लिए किया जा रहा है। -## How do I know my funds have migrated securely to Arbitrum? +## मुझे कैसे पता चलेगा कि मेरे funds Arbitrum में सुरक्षित रूप से migrate हो गए हैं? सभी जीआरटी बिलिंग शेष पहले ही सफलतापूर्वक आर्बिट्रम में स्थानांतरित कर दिए गए हैं। आप आर्बिट्रम पर बिलिंग अनुबंध [यहां](https://arbiscan.io/address/0x1B07D3344188908Fb6DEcEac381f3eE63C48477a) देख सकते हैं। -## How do I know the Arbitrum bridge is secure? +## मुझे कैसे पता चलेगा कि Arbitrum bridge सुरक्षित है? -The bridge has been [heavily audited](https://code4rena.com/contests/2022-10-the-graph-l2-bridge-contest) to ensure safety and security for all users.
+सभी उपयोगकर्ताओं के लिए सुरक्षा सुनिश्चित करने के लिए पुल का [भारी ऑडिट](https://code4rena.com/contests/2022-10-the-graph-l2-bridge-contest) किया गया है। -## What do I need to do if I'm adding fresh GRT from my Ethereum mainnet wallet? +## यदि मैं अपने Ethereum mainnet wallet से fresh GRT add कर रहा हूँ तो मुझे क्या करने की आवश्यकता है? आपके आर्बिट्रम बिलिंग बैलेंस में जीआरटी जोड़ना [सबग्राफ स्टूडियो](https://thegraph.com/studio/) में एक-क्लिक अनुभव के साथ किया जा सकता है। आप आसानी से अपने जीआरटी को आर्बिट्रम से जोड़ सकेंगे और एक लेनदेन में अपनी एपीआई कुंजी भर सकेंगे। diff --git a/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-faq.mdx index 66574cb53dd4..49d4b164805c 100644 --- a/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -1,100 +1,100 @@ --- -title: L2 Transfer Tools FAQ +title: L2 स्थानांतरण उपकरण अक्सर पूछे जाने वाले प्रश्न --- ## आम -### What are L2 Transfer Tools? +### L2 स्थानांतरण उपकरण क्या हैं? -The Graph has made it 26x cheaper for contributors to participate in the network by deploying the protocol to Arbitrum One. The L2 Transfer Tools were created by core devs to make it easy to move to L2. +ग्राफ़ ने आर्बिट्रम वन में प्रोटोकॉल लागू करके योगदानकर्ताओं के लिए नेटवर्क में भाग लेना 26 गुना सस्ता कर दिया है। L2 ट्रांसफर टूल्स को कोर डेवलपर्स द्वारा L2 पर ले जाना आसान बनाने के लिए बनाया गया था। -For each network participant, a set of L2 Transfer Tools are available to make the experience seamless when moving to L2, avoiding thawing periods or having to manually withdraw and bridge GRT.
+प्रत्येक नेटवर्क प्रतिभागी के लिए, L2 पर जाने पर अनुभव को सहज बनाने, पिघलने की अवधि से बचने या मैन्युअल रूप से निकालने और GRT को पाटने के लिए L2 ट्रांसफर टूल का एक सेट उपलब्ध है। -These tools will require you to follow a specific set of steps depending on what your role is within The Graph and what you are transferring to L2. +इन उपकरणों के लिए आपको चरणों के एक विशिष्ट सेट का पालन करने की आवश्यकता होगी जो इस बात पर निर्भर करेगा कि ग्राफ़ के भीतर आपकी भूमिका क्या है और आप एल2 में क्या स्थानांतरित कर रहे हैं। -### Can I use the same wallet I use on Ethereum mainnet? +### क्या मैं उसी वॉलेट का उपयोग कर सकता हूँ जिसका उपयोग मैं एथेरियम मेननेट पर करता हूँ? यदि आप [EOA](https://ethereum.org/en/developers/docs/accounts/#types-of-account) वॉलेट का उपयोग कर रहे हैं, तो आप उसी पते का उपयोग कर सकते हैं। यदि आपका Ethereum mainnet वॉलेट एक contract है (जैसे कि एक multisig), तो आपको एक [Arbitrum बटुआ पता](/archived/arbitrum/arbitrum-faq/#what-do-i-need-to-do-to-use-the-graph-on-l2) निर्दिष्ट करना होगा जहाँ आपका ट्रांसफर भेजा जाएगा। कृपया पते को ध्यानपूर्वक जांचें, क्योंकि गलत पते पर ट्रांसफर करने से स्थायी हानि हो सकती है। यदि आप L2 पर multisig का उपयोग करना चाहते हैं, तो सुनिश्चित करें कि आपने Arbitrum One पर एक multisig contract तैनात किया हो। -Wallets on EVM blockchains like Ethereum and Arbitrum are a pair of keys (public and private), that you create without any need to interact with the blockchain. So any wallet that was created for Ethereum will also work on Arbitrum without having to do anything else. +एथेरियम और आर्बिट्रम जैसे ईवीएम ब्लॉकचेन पर वॉलेट कुंजी (सार्वजनिक और निजी) की एक जोड़ी है, जिसे आप ब्लॉकचेन के साथ बातचीत करने की आवश्यकता के बिना बनाते हैं। इसलिए एथेरियम के लिए बनाया गया कोई भी वॉलेट बिना कुछ और किए आर्बिट्रम पर भी काम करेगा। -The exception is with smart contract wallets like multisigs: these are smart contracts that are deployed separately on each chain, and get their address when they are deployed.
If a multisig was deployed to Ethereum, it won't exist with the same address on Arbitrum. A new multisig must be created first on Arbitrum, and may get a different address. +अपवाद मल्टीसिग जैसे स्मार्ट कॉन्ट्रैक्ट वॉलेट के साथ है: ये स्मार्ट कॉन्ट्रैक्ट हैं जो प्रत्येक श्रृंखला पर अलग से तैनात किए जाते हैं, और तैनात होने पर उनका पता प्राप्त होता है। यदि एक मल्टीसिग को एथेरियम पर तैनात किया गया था, तो यह आर्बिट्रम पर समान पते के साथ मौजूद नहीं होगा। आर्बिट्रम पर पहले एक नया मल्टीसिग बनाया जाना चाहिए, और उसे एक अलग पता मिल सकता है। ### यदि मैं अपना स्थानांतरण 7 दिनों में पूरा नहीं कर पाता तो क्या होगा? L2 ट्रांसफर टूल L1 से L2 तक संदेश भेजने के लिए आर्बिट्रम के मूल तंत्र का उपयोग करते हैं। इस तंत्र को "पुनर्प्रयास योग्य टिकट" कहा जाता है और इसका उपयोग आर्बिट्रम जीआरटी ब्रिज सहित सभी देशी टोकन ब्रिजों द्वारा किया जाता है। आप पुनः प्रयास योग्य टिकटों के बारे में अधिक जानकारी [आर्बिट्रम डॉक्स](https://docs.arbitrum.io/arbos/l1-to-l2-messaging) में पढ़ सकते हैं। -जब आप अपनी संपत्ति (सबग्राफ, हिस्सेदारी, प्रतिनिधिमंडल या क्यूरेशन) को एल2 में स्थानांतरित करते हैं, तो आर्बिट्रम जीआरटी ब्रिज के माध्यम से एक संदेश भेजा जाता है जो एल2 में एक पुनः प्रयास योग्य टिकट बनाता है। ट्रांसफ़र टूल में लेन-देन में कुछ ETH मान शामिल होते हैं, जिनका उपयोग 1) टिकट बनाने के लिए भुगतान करने और 2) L2 में टिकट निष्पादित करने के लिए गैस का भुगतान करने के लिए किया जाता है। हालाँकि, क्योंकि गैस की कीमतें L2 में निष्पादित होने के लिए टिकट तैयार होने तक के समय में भिन्न हो सकती हैं, यह संभव है कि यह ऑटो-निष्पादन प्रयास विफल हो जाए। जब ऐसा होता है, तो आर्बिट्रम ब्रिज पुनः प्रयास योग्य टिकट को 7 दिनों तक जीवित रखेगा, और कोई भी टिकट को "रिडीम" करने का पुनः प्रयास कर सकता है (जिसके लिए आर्बिट्रम में ब्रिज किए गए कुछ ईटीएच के साथ वॉलेट की आवश्यकता होती है)। +जब आप अपने assets (सबग्राफ, stake, delegation या curation) को L2 में ट्रांसफर करते हैं, तो एक संदेश Arbitrum GRT bridge के माध्यम से भेजा जाता है, जो L2 में एक retryable ticket बनाता है। ट्रांसफर टूल लेनदेन में कुछ ETH मूल्य शामिल करता है, जिसका
उपयोग 1) टिकट बनाने के लिए भुगतान करने और 2) L2 में टिकट को निष्पादित करने के लिए गैस के भुगतान के लिए किया जाता है। हालाँकि, क्योंकि गैस की कीमतें उस समय तक बदल सकती हैं जब तक टिकट L2 में निष्पादित होने के लिए तैयार होता है, यह संभव है कि यह ऑटो-निष्पादन प्रयास विफल हो जाए। जब ऐसा होता है, तो Arbitrum bridge 7 दिनों तक retryable ticket को सक्रिय रखेगा, और कोई भी "redeeming" टिकट को पुन: प्रयास कर सकता है (जिसके लिए Arbitrum पर कुछ ETH ब्रिज्ड किए गए वॉलेट की आवश्यकता होगी)। -इसे हम सभी स्थानांतरण टूल में "पुष्टि करें" चरण कहते हैं - यह ज्यादातर मामलों में स्वचालित रूप से चलेगा, क्योंकि ऑटो-निष्पादन अक्सर सफल होता है, लेकिन यह महत्वपूर्ण है कि आप यह सुनिश्चित करने के लिए वापस जांचें कि यह पूरा हो गया है। यदि यह सफल नहीं होता है और 7 दिनों में कोई सफल पुनर्प्रयास नहीं होता है, तो आर्बिट्रम ब्रिज टिकट को खारिज कर देगा, और आपकी संपत्ति (सबग्राफ, हिस्सेदारी, प्रतिनिधिमंडल या क्यूरेशन) खो जाएगी और पुनर्प्राप्त नहीं की जा सकेगी। ग्राफ़ कोर डेवलपर्स के पास इन स्थितियों का पता लगाने और बहुत देर होने से पहले टिकटों को भुनाने की कोशिश करने के लिए एक निगरानी प्रणाली है, लेकिन यह सुनिश्चित करना अंततः आपकी ज़िम्मेदारी है कि आपका स्थानांतरण समय पर पूरा हो जाए। यदि आपको अपने लेनदेन की पुष्टि करने में परेशानी हो रही है, तो कृपया [इस फॉर्म](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) और कोर डेव का उपयोग करके संपर्क करें आपकी मदद के लिए वहाँ मौजूद रहूँगा. 
+यह वह चरण है जिसे हम सभी ट्रांसफर टूल्स में "Confirm" स्टेप कहते हैं - यह ज्यादातर मामलों में स्वचालित रूप से चलेगा, क्योंकि ऑटो-एक्सीक्यूशन आमतौर पर सफल होता है, लेकिन यह महत्वपूर्ण है कि आप यह सुनिश्चित करने के लिए वापस जांचें कि यह सफलतापूर्वक पूरा हुआ है। यदि यह सफल नहीं होता है और 7 दिनों के भीतर कोई सफल पुनःप्रयास नहीं होता है, तो Arbitrum ब्रिज टिकट को हटा देगा, और आपके assets (सबग्राफ, stake, delegation या curation) खो जाएंगे और उन्हें पुनः प्राप्त नहीं किया जा सकता। The Graph के कोर डेव्स के पास ऐसी स्थितियों का पता लगाने और टिकट को समय रहते रिडीम करने के लिए एक मॉनिटरिंग सिस्टम है, लेकिन अंततः यह आपकी जिम्मेदारी है कि आप सुनिश्चित करें कि आपका ट्रांसफर समय पर पूरा हो जाए। यदि आपको अपने ट्रांजेक्शन की पुष्टि करने में समस्या आ रही है, तो कृपया [इस फॉर्म](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) का उपयोग करके संपर्क करें, और कोर डेव्स आपकी सहायता के लिए उपलब्ध होंगे। ### मैंने अपना डेलिगेशन/स्टेक/क्यूरेशन ट्रांसफर शुरू कर दिया है और मुझे यकीन नहीं है कि यह एल2 तक पहुंच गया है या नहीं, मैं कैसे पुष्टि कर सकता हूं कि इसे सही तरीके से ट्रांसफर किया गया था? यदि आपको अपनी प्रोफ़ाइल पर स्थानांतरण पूरा करने के लिए कहने वाला कोई बैनर नहीं दिखता है, तो संभव है कि लेन-देन सुरक्षित रूप से L2 पर पहुंच गया है और किसी और कार्रवाई की आवश्यकता नहीं है। यदि संदेह है, तो आप जांच सकते हैं कि एक्सप्लोरर आर्बिट्रम वन पर आपका प्रतिनिधिमंडल, हिस्सेदारी या क्यूरेशन दिखाता है या नहीं। -If you have the L1 transaction hash (which you can find by looking at the recent transactions in your wallet), you can also confirm if the "retryable ticket" that carried the message to L2 was redeemed here: https://retryable-dashboard.arbitrum.io/ - if the auto-redeem failed, you can also connect your wallet there and redeem it. Rest assured that core devs are also monitoring for messages that get stuck, and will attempt to redeem them before they expire. 
+यदि आपके पास L1 transaction hash है (जिसे आप अपने wallet के हाल के transactions देखकर पा सकते हैं), तो आप यह भी पुष्टि कर सकते हैं कि L2 पर संदेश ले जाने वाला "retryable ticket" यहाँ redeem किया गया था या नहीं: https://retryable-dashboard.arbitrum.io/ - यदि auto-redeem विफल रहा, तो आप वहाँ अपना wallet कनेक्ट कर सकते हैं और इसे redeem कर सकते हैं। आश्वस्त रहें कि core devs भी फंसे हुए संदेशों की निगरानी कर रहे हैं, और वे समाप्त होने से पहले उन्हें redeem करने का प्रयास करेंगे। ## सबग्राफ स्थानांतरण -### मैं अपना सबग्राफ कैसे स्थानांतरित करूं? +### मेरा सबग्राफ कैसे ट्रांसफर करें? -अपने सबग्राफ को स्थानांतरित करने के लिए, आपको निम्नलिखित चरणों को पूरा करने होंगे: +अपने सबग्राफ को स्थानांतरित करने के लिए, आपको निम्नलिखित चरणों को पूरा करना होगा: 1. Ethereum mainnet वर हस्तांतरण सुरू करा 2. पुष्टि के लिए 20 मिनट का इंतजार करें: -3. आर्बिट्रमवर सबग्राफ हस्तांतरणाची पुष्टी करा\* +3. Arbitrum पर सबग्राफ स्थानांतरण की पुष्टि करें\* -4. आर्बिट्रम पर सबग्राफ का प्रकाशन समाप्त करें +4. सबग्राफ को Arbitrum पर प्रकाशित करना समाप्त करें 5. क्वेरी यूआरएल अपडेट करें (अनुशंसित) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*ध्यान दें कि आपको स्थानांतरण की पुष्टि 7 दिनों के भीतर करनी होगी, अन्यथा आपका सबग्राफ खो सकता है। अधिकांश मामलों में, यह चरण स्वचालित रूप से पूरा हो जाएगा, लेकिन यदि Arbitrum पर गैस मूल्य में अचानक वृद्धि होती है, तो मैन्युअल पुष्टि की आवश्यकता हो सकती है। यदि इस प्रक्रिया के दौरान कोई समस्या आती है, तो सहायता के लिए संसाधन उपलब्ध होंगे: समर्थन से संपर्क करें support@thegraph.com या [Discord](https://discord.gg/graphprotocol) पर। ### मुझे अपना स्थानांतरण कहाँ से आरंभ करना चाहिए?
-आप[Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) या किसी भी Subgraph विवरण पृष्ठ से अपने transfer को प्रारंभ कर सकते हैं। Subgraph विवरण पृष्ठ में "Transfer " button पर click करके transfer आरंभ करें। +आप अपना ट्रांसफर शुरू कर सकते हैं [सबग्राफ Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), या किसी भी सबग्राफ विवरण पृष्ठ से। ट्रांसफर शुरू करने के लिए सबग्राफ विवरण पृष्ठ पर "Transfer सबग्राफ" बटन पर क्लिक करें। -### मेरा सबग्राफ़ स्थानांतरित होने तक मुझे कितने समय तक प्रतीक्षा करनी होगी? +### मुझे अपना सबग्राफ ट्रांसफर होने तक कितने समय तक इंतजार करना होगा? अंतरण करने में लगभग 20 मिनट का समय लगता है। Arbitrum bridge स्वचालित रूप से bridge अंतरण पूरा करने के लिए पृष्ठभूमि में काम कर रहा है। कुछ मामलों में, गैस लागत में spike हो सकती है और आपको transaction की पुष्टि फिर से करनी होगी। -### क्या मेरा सबग्राफ L2 में स्थानांतरित करने के बाद भी खोजा जा सकेगा? +### क्या मेरा सबग्राफ L2 पर ट्रांसफर करने के बाद भी खोजने योग्य रहेगा? -आपका सबग्राफ केवल उस नेटवर्क पर खोजने योग्य होगा जिस पर यह प्रकाशित किया गया है। उदाहरण स्वरूप, यदि आपका सबग्राफ आर्बिट्रम वन पर है, तो आपकेंद्रीय तंत्र पर केवल आर्बिट्रम वन के खोजक में ही ढूंढा जा सकता है और आप इथेरियम पर इसे नहीं खोज पाएंगे। कृपया सुनिश्चित करें कि आपने पृष्ठ के शीर्ष में नेटवर्क स्विचर में आर्बिट्रम वन को चुना है ताकि आप सही नेटवर्क पर हों। अंतरण के बाद, L1 सबग्राफ को पुराना किया गया माना जाएगा। +आपका सबग्राफ केवल उसी नेटवर्क पर खोजा जा सकेगा, जिस पर इसे प्रकाशित किया गया है। उदाहरण के लिए, यदि आपका सबग्राफ Arbitrum One पर है, तो आप इसे केवल Arbitrum One के Explorer में खोज सकते हैं और इसे Ethereum पर नहीं ढूंढ पाएंगे। कृपया सुनिश्चित करें कि आप पृष्ठ के शीर्ष पर नेटवर्क स्विचर में Arbitrum One का चयन करें, ताकि यह सुनिश्चित हो सके कि आप सही नेटवर्क पर हैं। स्थानांतरण के बाद, L1 सबग्राफ अप्रचलित के रूप में दिखाई देगा। -### क्या मेरे सबग्राफ को स्थानांतरित करने के लिए इसे प्रकाशित किया जाना आवश्यक है?
+### क्या मेरा सबग्राफ स्थानांतरित करने के लिए प्रकाशित होना आवश्यक है? -सबग्राफ अंतरण उपकरण का लाभ उठाने के लिए, आपके सबग्राफ को पहले ही ईथेरियम मेननेट पर प्रकाशित किया जाना चाहिए और सबग्राफ के मालिक wallet द्वारा स्वामित्व signal subgraph का कुछ होना चाहिए। यदि आपका subgraph प्रकाशित नहीं है, तो सिफ़ारिश की जाती है कि आप सीधे Arbitrum One पर प्रकाशित करें - जुड़े गए gas fees काफी कम होंगे। यदि आप किसी प्रकाशित subgraph को अंतरण करना चाहते हैं लेकिन owner account ने उस पर कोई signal curate नहीं किया है, तो आप उस account से थोड़ी सी राशि (जैसे 1 GRT) के signal कर सकते हैं; सुनिश्चित करें कि आपने "auto-migrating" signal को चुना है। +सबग्राफ transfer tool का लाभ उठाने के लिए, आपका सबग्राफ पहले से ही Ethereum mainnet पर प्रकाशित होना चाहिए और उसमें उस वॉलेट के स्वामित्व में कुछ क्यूरेशन सिग्नल होना चाहिए जो सबग्राफ का मालिक है। यदि आपका सबग्राफ प्रकाशित नहीं है, तो यह अनुशंसित है कि आप इसे सीधे Arbitrum One पर प्रकाशित करें - इससे संबंधित गैस शुल्क काफी कम होंगे। यदि आप पहले से प्रकाशित सबग्राफ को स्थानांतरित करना चाहते हैं, लेकिन स्वामी खाते ने उस पर कोई क्यूरेशन सिग्नल नहीं दिया है, तो आप उस खाते से एक छोटी राशि (जैसे 1 GRT) का सिग्नल दे सकते हैं; सुनिश्चित करें कि आप "auto-migrating" सिग्नल चुनें। -### मी आर्बिट्रममध्ये हस्तांतरित केल्यानंतर माझ्या सबग्राफच्या इथरियम मेननेट आवृत्तीचे काय होते? +### Arbitrum में स्थानांतरित करने के बाद मेरे सबग्राफ के Ethereum mainnet संस्करण का क्या होता है?
-अपने सबग्राफ को आर्बिट्रम पर अंतरण करने के बाद, ईथेरियम मेननेट संस्करण को पुराना किया जाएगा। हम आपको 48 घंटों के भीतर अपनी क्वेरी URL को अद्यतन करने की सिफारिश करते हैं। हालांकि, एक ग्रेस पीरियड लागू होता है जिसके तहत आपकी मुख्यनेट URL को कार्यरत रखा जाता है ताकि किसी तिसरी पक्ष डैप समर्थन को अपडेट किया जा सके। +आपके सबग्राफ को Arbitrum में ट्रांसफर करने के बाद, Ethereum mainnet संस्करण अप्रचलित (deprecated) कर दिया जाएगा। हम अनुशंसा करते हैं कि आप अपनी क्वेरी URL को 48 घंटों के भीतर अपडेट करें। हालाँकि, एक ग्रेस अवधि उपलब्ध है, जिससे आपका mainnet URL कार्यशील बना रहेगा ताकि कोई भी तृतीय-पक्ष dapp समर्थन अपडेट किया जा सके। ### स्थानांतरण करने के बाद, क्या मुझे आर्बिट्रम पर पुनः प्रकाशन की आवश्यकता होती है? 20 मिनट के अंतराल के बाद, आपको अंतरण को पूरा करने के लिए UI में एक लेन-देन की पुष्टि करनी होगी, लेकिन अंतरण उपकरण आपको इसके माध्यम से मार्गदर्शन करेगा। आपकी L1 इंड पॉइंट ट्रांसफर विंडो के दौरान और एक ग्रेस पीरियड के बाद भी समर्थित रहेगा। आपको यह सुझाव दिया जाता है कि आप अपनी इंड पॉइंट को अपनी सुविधा के अनुसार अपडेट करें। -### Will my endpoint experience downtime while re-publishing? +### क्या पुनः प्रकाशित करते समय मेरे समापन बिंदु को डाउनटाइम का अनुभव होगा? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +इसकी संभावना कम है, लेकिन थोड़े समय के लिए डाउनटाइम का अनुभव हो सकता है, यह इस बात पर निर्भर करता है कि कौन से Indexers L1 पर सबग्राफ को सपोर्ट कर रहे हैं और क्या वे इसे तब तक इंडेक्सिंग करते रहते हैं जब तक कि सबग्राफ पूरी तरह से L2 पर सपोर्ट न हो जाए। ### क्या L2 पर प्रकाशन और संस्करणीकरण Ethereum मेननेट के समान होते हैं? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph.
+हाँ। सबग्राफ Studio में प्रकाशित करते समय अपने प्रकाशित नेटवर्क के रूप में Arbitrum One चुनें। Studio में, नवीनतम endpoint उपलब्ध होगा जो सबग्राफ के नवीनतम अपडेट किए गए संस्करण की ओर इंगित करता है। -### क्या मेरे subgraph की curation उसके साथ चलेगी जब मैंsubgraph को स्थानांतरित करूँगा? +### क्या मेरे सबग्राफ का curation मेरे सबग्राफ के साथ मूव होगा? -यदि आपने " auto-migrating" signal का चयन किया है, तो आपके खुद के curation का 100% आपकेsubgraph के साथ Arbitrum One पर जाएगा। subgraph के सभी curation signalको अंतरण के समय GRT में परिवर्तित किया जाएगा, और आपके curation signal के समर्थन में उत्पन्न होने वाले GRT का उपयोग L2 subgraph पर signal mint करने के लिए किया जाएगा। +यदि आपने auto-migrating signal चुना है, तो आपकी पूरी curation आपके सबग्राफ के साथ Arbitrum One पर स्थानांतरित हो जाएगी। स्थानांतरण के समय सबग्राफ की पूरी curation signal को GRT में परिवर्तित कर दिया जाएगा, और आपकी curation signal के अनुरूप GRT का उपयोग L2 सबग्राफ पर signal को मिंट करने के लिए किया जाएगा। -अन्य क्यूरेटर यह चुन सकते हैं कि जीआरटी का अपना अंश वापस लेना है या नहीं, या इसे उसी सबग्राफ पर मिंट सिग्नल के लिए एल2 में स्थानांतरित करना है या नहीं। +अन्य Curators यह चुन सकते हैं कि वे अपने GRT के भाग को निकालना चाहते हैं, या फिर इसे L2 में स्थानांतरित करके उसी सबग्राफ पर सिग्नल मिंट करना चाहते हैं। -### क्या मैं स्थानांतरण के बाद अपने सबग्राफ को एथेरियम मेननेट पर वापस ले जा सकता हूं? +### क्या मैं अपना सबग्राफ ट्रांसफर करने के बाद वापस Ethereum mainnet पर ला सकता हूँ?
-एक बार अंतरित होने के बाद, आपके ईथेरियम मेननेट संस्करण को पुराना मान दिया जाएगा। अगर आप मुख्यनेट पर वापस जाना चाहते हैं, तो आपको पुनः डिप्लॉय और प्रकाशित करने की आवश्यकता होगी। हालांकि, वापस ईथेरियम मेननेट पर लौटने को मजबूरी से अनुशंसित किया जाता है क्योंकि सूचीकरण रिवॉर्ड आखिरकार पूरी तरह से आर्बिट्रम वन पर ही वितरित किए जाएंगे। +एक बार ट्रांसफर हो जाने के बाद, आपके सबग्राफ का Ethereum mainnet संस्करण डिप्रिकेट कर दिया जाएगा। यदि आप वापस mainnet पर जाना चाहते हैं, तो आपको इसे दोबारा डिप्लॉय और पब्लिश करना होगा। हालांकि, वापस Ethereum mainnet पर ट्रांसफर करना दृढ़ता से हतोत्साहित किया जाता है क्योंकि Indexing रिवॉर्ड्स अंततः पूरी तरह से Arbitrum One पर वितरित किए जाएंगे। ### मेरे स्थानांतरण को पूरा करने के लिए मुझे ब्रिज़्ड ईथ की आवश्यकता क्यों है? @@ -112,11 +112,11 @@ Yes. Select Arbitrum One as your published network when publishing in Subgraph S 2. पुष्टि के लिए 20 मिनट का इंतजार करें: 3. आर्बिट्रम पर समर्पण स्थानांतरण की पुष्टि करें: -\*\*\*\*You must confirm the transaction to complete the delegation transfer on Arbitrum. This step must be completed within 7 days or the delegation could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*\*\*\* आपको Arbitrum पर delegation हस्तांतरण पूरा करने के लिए अपने transaction की पुष्टि करनी होगी। इस कदम को 7 दिनों के भीतर पूरा करना होगा, अन्यथा delegation खो सकता है। अधिकांश मामलों में, इस कदम को स्वचालित रूप से चलाया जाएगा, लेकिन अगर Arbitrum पर gas मूल्य में spike होती है तो मैन्युअल पुष्टि की आवश्यकता हो सकती है। यदि इस प्रक्रिया के दौरान कोई समस्या होती है, तो सहायता के लिए संसाधन होंगे: support@thegraph.com पर या [Discord](https://discord.gg/graphprotocol) पर संपर्क करें।
### अगर मैं ईथेरियम मेननेट पर खुली आवंटन के साथ स्थानांतरण प्रारंभ करता हूँ, तो मेरे पुरस्कारों के साथ क्या होता है? -If the Indexer to whom you're delegating is still operating on L1, when you transfer to Arbitrum you will forfeit any delegation rewards from open allocations on Ethereum mainnet. This means that you will lose the rewards from, at most, the last 28-day period. If you time the transfer right after the Indexer has closed allocations you can make sure this is the least amount possible. If you have a communication channel with your Indexer(s), consider discussing with them to find the best time to do your transfer. +यदि जिस इंडेक्सर को आप सौंप रहे हैं वह अभी भी एल1 पर काम कर रहा है, तो जब आप आर्बिट्रम में स्थानांतरित होते हैं तो आप एथेरियम मेननेट पर खुले आवंटन से किसी भी प्रतिनिधिमंडल पुरस्कार को जब्त कर लेंगे। इसका मतलब यह है कि आप अधिकतम 28 दिनों की अवधि से पुरस्कार खो देंगे। यदि आप इंडेक्सर द्वारा आवंटन बंद करने के ठीक बाद स्थानांतरण का समय तय करते हैं तो आप यह सुनिश्चित कर सकते हैं कि यह न्यूनतम संभव राशि है। यदि आपके पास अपने इंडेक्सर्स के साथ संचार चैनल है, तो अपना स्थानांतरण करने के लिए सबसे अच्छा समय खोजने के लिए उनके साथ चर्चा करने पर विचार करें। ### यदि मैं जिस इंडेक्सर को वर्तमान में सौंप रहा हूं वह आर्बिट्रम वन पर नहीं है तो क्या होगा? @@ -144,53 +144,53 @@ L2 हस्तांतरण उपकरण हमेशा आपकी ड ### मेरे प्रतिनिधित्व को L2 में ट्रांसफर करने का पूरा काम कितने समय तक लगता है? -A 20-minute confirmation is required for delegation transfer. Please note that after the 20-minute period, you must come back and complete step 3 of the transfer process within 7 days. If you fail to do this, then your delegation may be lost. Note that in most cases the transfer tool will complete this step for you automatically. In case of a failed auto-attempt, you will need to complete it manually. If any issues arise during this process, don't worry, we'll be here to help: contact us at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). 
+Delegation हस्तांतरण के लिए 20 मिनट की पुष्टि आवश्यक है। कृपया ध्यान दें कि 20 मिनट की अवधि के बाद, आपको वापस आकर स्थानांतरण प्रक्रिया के कदम 3 को 7 दिन के भीतर पूरा करना होगा। यदि आप ऐसा नहीं करते हैं, तो आपका delegation खो सकता है। ध्यान दें कि अधिकांश मामलों में transfer tool यह कदम स्वचालित रूप से पूरा कर देगा। स्वचालित प्रयास में असफल होने पर, आपको इसे मैन्युअल रूप से पूरा करना होगा। इस प्रक्रिया के दौरान यदि कोई समस्याएं उत्पन्न होती हैं, तो चिंता न करें, हम आपकी सहायता के लिए यहां हैं: हमसे support@thegraph.com पर या [Discord](https://discord.gg/graphprotocol) पर संपर्क करें। ### क्या मैं अपनी सौंपन को स्थानांतरित कर सकता हूँ अगर मैं एक जीआरटी वेस्टिंग अनुबंध/टोकन लॉक वॉलेट का उपयोग कर रहा हूँ? हाँ! प्रक्रिया थोड़ी अलग है क्योंकि वेस्टिंग कॉन्ट्रैक्ट्स आवश्यक L2 गैस के लिए आवश्यक ETH को फॉरवर्ड नहीं कर सकते, इसलिए आपको पहले ही इसे जमा करना होगा। यदि आपका वेस्टिंग कॉन्ट्रैक्ट पूरी तरह से वेस्ट नहीं होता है, तो आपको पहले L2 पर एक समकक्ष वेस्टिंग कॉन्ट्रैक्ट को प्रारंभ करना होगा और आप केवल इस L2 वेस्टिंग कॉन्ट्रैक्ट पर डेलीगेशन को हस्तांतरित कर सकेंगे। जब आप वेस्टिंग लॉक वॉलेट का उपयोग करके एक्सप्लोरर से जुड़ते हैं, तो यह प्रक्रिया आपको एक्सप्लोरर पर कनेक्ट करने के लिए गाइड कर सकती है। -### Does my Arbitrum vesting contract allow releasing GRT just like on mainnet? +### क्या मेरा Arbitrum vesting contract मेननेट की तरह ही GRT जारी करने की अनुमति देता है? -No, the vesting contract that is created on Arbitrum will not allow releasing any GRT until the end of the vesting timeline, i.e. until your contract is fully vested. This is to prevent double spending, as otherwise it would be possible to release the same amounts on both layers.
+नहीं, Arbitrum पर बनाया गया vesting contract, निहित समयसीमा के अंत तक किसी भी GRT को जारी करने की अनुमति नहीं देगा, यानी जब तक कि आपका contract पूरी तरह से vest नहीं हो जाता। यह दोहरे खर्च को रोकने के लिए है, अन्यथा दोनों स्तरों पर समान amounts जारी करना संभव होगा। -If you'd like to release GRT from the vesting contract, you can transfer them back to the L1 vesting contract using Explorer: in your Arbitrum One profile, you will see a banner saying you can transfer GRT back to the mainnet vesting contract. This requires a transaction on Arbitrum One, waiting 7 days, and a final transaction on mainnet, as it uses the same native bridging mechanism from the GRT bridge. +यदि आप GRT को vesting contract से मुक्त करना चाहते हैं, तो आप उन्हें Explorer का उपयोग करके L1 निहित अनुबंध में वापस स्थानांतरित कर सकते हैं: आपके Arbitrum One profile में, आपको एक banner दिखाई देगा जिसमें कहा जाएगा कि आप GRT को mainnet vesting contract में वापस स्थानांतरित कर सकते हैं। इसके लिए Arbitrum One पर transaction, 7 दिनों की प्रतीक्षा और mainnet पर अंतिम transaction की आवश्यकता होती है, क्योंकि यह GRT bridge से समान native bridging mechanism का उपयोग करता है। ### क्या कोई प्रतिनिधिमंडल कर है? नहीं, L2 पर प्राप्त टोकनों को निर्दिष्ट इंडेक्सर की ओर से निर्दिष्ट डेलीगेटर के प्रतिनिधि रूप में डेलीगेट किया जाता है और डेलीगेशन टैक्स का कोई भुगतान नहीं होता है। -### Will my unrealized rewards be transferred when I transfer my delegation? +### जब मैं अपना delegation स्थानांतरित करूंगा तो क्या मेरे unrealized rewards स्थानांतरित कर दिए जाएंगे? -​Yes! The only rewards that can't be transferred are the ones for open allocations, as those won't exist until the Indexer closes the allocations (usually every 28 days). If you've been delegating for a while, this is likely only a small fraction of rewards. +हाँ!
एकमात्र rewards जिन्हें स्थानांतरित नहीं किया जा सकता है वे open allocations के लिए हैं, क्योंकि वे तब तक मौजूद नहीं रहेंगे जब तक कि Indexer allocations बंद नहीं कर देता (आमतौर पर हर 28 दिनों में)। यदि आप कुछ समय से delegating कर रहे हैं, तो यह संभवतः rewards का केवल एक छोटा सा अंश है। -At the smart contract level, unrealized rewards are already part of your delegation balance, so they will be transferred when you transfer your delegation to L2. ​ +Smart contract level पर, unrealized rewards पहले से ही आपके delegation balance का हिस्सा हैं, इसलिए जब आप अपने delegation को L2 में स्थानांतरित करेंगे तो उन्हें स्थानांतरित कर दिया जाएगा। ​ -### Is moving delegations to L2 mandatory? Is there a deadline? +### क्या delegations को L2 में ले जाना mandatory है? क्या कोई deadline है? -​Moving delegation to L2 is not mandatory, but indexing rewards are increasing on L2 following the timeline described in [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193). Eventually, if the Council keeps approving the increases, all rewards will be distributed in L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​ +Delegation को L2 पर ले जाना mandatory नहीं है, लेकिन L2 पर indexing rewards [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193) में वर्णित समयसीमा के अनुसार बढ़ रहे हैं। अंततः, यदि Council वृद्धि को मंजूरी देती रहती है, तो सभी rewards L2 में वितरित किए जाएंगे और L1 पर Indexers और Delegators के लिए कोई indexing rewards नहीं होंगे। ​ -### If I am delegating to an Indexer that has already transferred stake to L2, do I stop receiving rewards on L1? +### यदि मैं किसी ऐसे Indexer को delegate कर रहा हूं जिसने पहले ही अपना stake L2 में स्थानांतरित कर दिया है, तो क्या मुझे L1 पर पुरस्कार मिलना बंद हो जाएगा? -​Many Indexers are transferring stake gradually so Indexers on L1 will still be earning rewards and fees on L1, which are then shared with Delegators.
Once an Indexer has transferred all of their stake, then they will stop operating on L1, so Delegators will not receive any more rewards unless they transfer to L2. +​कई Indexers धीरे-धीरे stake स्थानांतरित कर रहे हैं, इसलिए L1 पर Indexers अभी भी वहां rewards और fees अर्जित करेंगे, जिन्हें बाद में Delegators के साथ साझा किया जाता है। एक बार जब कोई Indexer अपनी सारी हिस्सेदारी हस्तांतरित कर देता है, तो वे L1 पर काम करना बंद कर देंगे, इसलिए जब तक वे L2 में स्थानांतरित नहीं हो जाते, तब तक Delegators को कोई और rewards नहीं मिलेगा। -Eventually, if the Council keeps approving the indexing rewards increases in L2, all rewards will be distributed on L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​ +अंततः, यदि Council L2 में indexing rewards में वृद्धि को मंजूरी देती रहती है, तो सभी rewards L2 पर वितरित किए जाएंगे और L1 पर Indexers और Delegators के लिए कोई indexing rewards नहीं होगा। ​ -### I don't see a button to transfer my delegation. Why is that? +### मुझे अपना delegation स्थानांतरित करने के लिए कोई button नहीं दिख रहा है। ऐसा क्यों? -​Your Indexer has probably not used the L2 transfer tools to transfer stake yet. +​आपके Indexer ने शायद अभी तक हिस्सेदारी हस्तांतरित करने के लिए L2 transfer tools का उपयोग नहीं किया है। -If you can contact the Indexer, you can encourage them to use the L2 Transfer Tools so that Delegators can transfer delegations to their L2 Indexer address. ​ +यदि आप Indexer से संपर्क कर सकते हैं, तो आप उन्हें L2 Transfer Tools का उपयोग करने के लिए प्रोत्साहित कर सकते हैं ताकि Delegators delegations को उनके L2 Indexer पते पर स्थानांतरित कर सकें। ​ -### My Indexer is also on Arbitrum, but I don't see a button to transfer the delegation in my profile. Why is that? +### मेरा Indexer भी Arbitrum पर है, लेकिन मुझे अपनी profile में delegation को स्थानांतरित करने के लिए कोई button नहीं दिख रहा है। ऐसा क्यों? -​It is possible that the Indexer has set up operations on L2, but hasn't used the L2 transfer tools to transfer stake.
The L1 smart contracts will therefore not know about the Indexer's L2 address. If you can contact the Indexer, you can encourage them to use the transfer tool so that Delegators can transfer delegations to their L2 Indexer address. ​ +​यह संभव है कि Indexer ने L2 पर operations set up किया है, लेकिन stake स्थानांतरित करने के लिए L2 transfer tools का उपयोग नहीं किया है। इसलिए L1 smart contracts को Indexer के L2 पते के बारे में पता नहीं चलेगा। यदि आप Indexer से संपर्क कर सकते हैं, तो आप उन्हें transfer tool का उपयोग करने के लिए प्रोत्साहित कर सकते हैं ताकि Delegators delegations को उनके L2 Indexer address पर स्थानांतरित कर सकें। ​ ### Can I transfer my delegation to L2 if I have started the undelegating process and haven't withdrawn it yet? -​No. If your delegation is thawing, you have to wait the 28 days and withdraw it. +नहीं। यदि आपका delegation thaw हो रहा है, तो आपको 28 दिनों तक इंतजार करना होगा और इसे वापस लेना होगा। -The tokens that are being undelegated are "locked" and therefore cannot be transferred to L2. +जिन tokens को undelegate किया जा रहा है वे "locked" हैं और इसलिए उन्हें L2 में स्थानांतरित नहीं किया जा सकता है। ## क्यूरेशन सिग्नल @@ -206,19 +206,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans \*यदि आवश्यक हो - अर्थात्, आप एक कॉन्ट्रैक्ट पते का उपयोग कर रहे हैं | -### मी क्युरेट केलेला सबग्राफ L2 वर गेला असल्यास मला कसे कळेल? +### मैं कैसे जानूँगा कि मैंने क्यूरेट किया हुआ सबग्राफ L2 पर चला गया है?
-सबग्राफ विवरण पृष्ठ को देखते समय, एक बैनर आपको सूचित करेगा कि यह सबग्राफ अंतरण किया गया है। आप प्रोंप्ट का पालन करके अपने क्यूरेशन को अंतरण कर सकते हैं। आप इस जानकारी को भी उन सभी सबग्राफों के विवरण पृष्ठ पर पा सकते हैं जिन्होंने अंतरण किया है। +जब आप सबग्राफ विवरण पृष्ठ देख रहे होते हैं, तो एक बैनर आपको सूचित करेगा कि यह सबग्राफ स्थानांतरित कर दिया गया है। आप अपने curation को स्थानांतरित करने के लिए संकेत का पालन कर सकते हैं। आप यह जानकारी किसी भी स्थानांतरित किए गए सबग्राफ के विवरण पृष्ठ पर भी पा सकते हैं। ### अगर मैं अपनी संरचना को L2 में स्थानांतरित करना नहीं चाहता हूँ तो क्या होगा? -जब एक सबग्राफ पुराना होता है, तो आपके पास सिग्नल वापस लेने का विकल्प होता है। उसी तरह, अगर कोई सबग्राफ L2 पर चल रहा है, तो आपको चुनने का विकल्प होता है कि क्या आप ईथेरियम मेननेट से सिग्नल वापस लेना चाहेंगे या सिग्नल को L2 पर भेजें। +जब कोई सबग्राफ डिप्रिकेट हो जाता है, तो आपके पास अपना सिग्नल निकालने का विकल्प होता है। इसी तरह, यदि कोई सबग्राफ L2 में स्थानांतरित हो गया है, तो आप Ethereum mainnet में अपना सिग्नल निकालने या इसे L2 पर भेजने का विकल्प चुन सकते हैं। ### माझे क्युरेशन यशस्वीरित्या हस्तांतरित झाले हे मला कसे कळेल? एल2 स्थानांतरण उपकरण को प्रारंभ करने के बाद, सिग्नल विवरण एक्सप्लोरर के माध्यम से लगभग 20 मिनट के बाद उपलब्ध होंगे। -### क्या मैं एक समय पर एक से अधिक सबग्राफ पर अपनी संरचना को स्थानांतरित कर सकता हूँ? +### क्या मैं एक समय में एक से अधिक सबग्राफ पर अपनी curation स्थानांतरित कर सकता हूँ? वर्तमान में कोई थोक स्थानांतरण विकल्प उपलब्ध नहीं है। @@ -238,7 +238,7 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans 3. आर्बिट्रम पर स्थानांतरण की पुष्टि करें: -\*Note that you must confirm the transfer within 7 days otherwise your stake may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum.
If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*कृपया ध्यान दें कि आपको 7 दिनों के भीतर हस्तांतरण की पुष्टि करनी होगी, अन्यथा आपका stake खो सकता है। अधिकांश मामलों में, यह चरण स्वचालित रूप से चलेगा, लेकिन अगर Arbitrum पर gas price में अचानक बढ़ोतरी होती है तो manual पुष्टि की आवश्यकता हो सकती है। इस प्रक्रिया के दौरान कोई भी समस्या हो तो सहायता के लिए संसाधन उपलब्ध होंगे: support@thegraph.com पर या [Discord](https://discord.gg/graphprotocol) पर समर्थन से संपर्क करें। ### क्या मेरा सम्पूर्ण स्थानांतरण हो जाएगा? @@ -266,7 +266,7 @@ L2 ट्रान्स्फर टूलला तुमचा स्टे ### मी माझा हिस्सा हस्तांतरित करण्यापूर्वी मला आर्बिट्रमवर इंडेक्स करावे लागेल का? -आप पहले ही अपने स्टेक को प्रभावी रूप से हस्तांतरित कर सकते हैं, लेकिन आप L2 पर किसी भी पुरस्कार का दावा नहीं कर पाएंगे जब तक आप L2 पर सबग्राफ्स को आवंटित नहीं करते हैं, उन्हें इंडेक्स करते हैं, और पॉइंट ऑफ इंटरेस्ट (POI) प्रस्तुत नहीं करते। +आप indexing सेटअप करने से पहले ही प्रभावी रूप से अपनी stake स्थानांतरित कर सकते हैं, लेकिन जब तक आप सबग्राफ को L2 पर आवंटित नहीं करते, उन्हें index नहीं करते और POIs प्रस्तुत नहीं करते, तब तक आप L2 पर कोई इनाम प्राप्त नहीं कर पाएंगे।
Do I still need to send 100k GRT when I use the transfer tools the first time? +### L2 पर मेरी पहले से ही stake है। जब मैं पहली बार transfer tools का उपयोग करता हूँ तो क्या मुझे अभी भी 100k GRT भेजने की आवश्यकता है? -​Yes. The L1 smart contracts will not be aware of your L2 stake, so they will require you to transfer at least 100k GRT when you transfer for the first time. ​ +हाँ। L1 smart contracts को आपकी L2 हिस्सेदारी के बारे में पता नहीं होगा, इसलिए पहली बार transfer करते समय वे आपसे कम से कम 100k GRT transfer करने की अपेक्षा करेंगे। ​ -### Can I transfer my stake to L2 if I am in the process of unstaking GRT? +### यदि मैं GRT को unstake करने की प्रक्रिया में हूं तो क्या मैं अपनी stake L2 में स्थानांतरित कर सकता हूं? ​No. If any fraction of your stake is thawing, you have to wait the 28 days and withdraw it before you can transfer stake. The tokens that are being staked are "locked" and will prevent any transfers or stake to L2. @@ -377,25 +377,25 @@ L2 ट्रान्स्फर टूलला तुमचा स्टे \*यदि आवश्यक हो - अर्थात्, आप एक कॉन्ट्रैक्ट पते का उपयोग कर रहे हैं | -\*\*\*\*You must confirm your transaction to complete the balance transfer on Arbitrum. This step must be completed within 7 days or the balance could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*\*\*\* आपको Arbitrum पर balance हस्तांतरण पूरा करने के लिए अपने transaction की पुष्टि करनी होगी। इस कदम को 7 दिनों के भीतर पूरा करना होगा, अन्यथा balance खो सकता है। अधिकांश मामलों में, यह कदम स्वचालित रूप से चलेगा, लेकिन अगर Arbitrum पर gas price में spike होती है तो manual पुष्टि की आवश्यकता हो सकती है। यदि इस प्रक्रिया के दौरान कोई समस्या होती है, तो सहायता के लिए संसाधन होंगे: support@thegraph.com पर या [Discord](https://discord.gg/graphprotocol) पर समर्थन से संपर्क करें। -### My vesting contract shows 0 GRT so I cannot transfer it, why is this and how do I fix it? +### मेरा vesting contract 0 GRT दिखाता है इसलिए मैं इसे स्थानांतरित नहीं कर सकता, ऐसा क्यों है और मैं इसे कैसे ठीक करूं? -​To initialize your L2 vesting contract, you need to transfer a nonzero amount of GRT to L2. This is required by the Arbitrum GRT bridge that is used by the L2 Transfer Tools. The GRT must come from the vesting contract's balance, so it does not include staked or delegated GRT. +अपने L2 vesting contract को आरंभ करने के लिए, आपको GRT की एक nonzero amount को L2 में स्थानांतरित करना होगा। यह Arbitrum GRT bridge के लिए आवश्यक है जिसका उपयोग L2 Transfer Tools द्वारा किया जाता है। GRT vesting contract के balance से आना चाहिए, इसलिए इसमें staked या delegated GRT शामिल नहीं है। -If you've staked or delegated all your GRT from the vesting contract, you can manually send a small amount like 1 GRT to the vesting contract address from anywhere else (e.g. from another wallet, or an exchange). ​ +यदि आपने vesting contract के अपने सभी GRT को stake या delegate कर दिया है, तो आप कहीं और से (जैसे किसी दूसरे wallet या exchange से) vesting contract के पते पर 1 GRT जैसी छोटी राशि manual रूप से भेज सकते हैं। ​ -### I am using a vesting contract to stake or delegate, but I don't see a button to transfer my stake or delegation to L2, what do I do?
+### मैं stake या delegate करने के लिए एक vesting contract का उपयोग कर रहा हूं, लेकिन मुझे अपनी stake या delegation को L2 में स्थानांतरित करने के लिए कोई button नहीं दिख रहा है, मैं क्या करूं? -​If your vesting contract hasn't finished vesting, you need to first create an L2 vesting contract that will receive your stake or delegation on L2. This vesting contract will not allow releasing tokens in L2 until the end of the vesting timeline, but will allow you to transfer GRT back to the L1 vesting contract to be released there. +​यदि आपके vesting contract की vesting पूरी नहीं हुई है, तो आपको पहले एक L2 vesting contract बनाना होगा जो L2 पर आपकी stake या delegation प्राप्त करेगा। यह vesting contract, vesting timeline के अंत तक L2 में token जारी करने की अनुमति नहीं देगा, लेकिन आपको GRT को L1 vesting contract में वापस स्थानांतरित करने की अनुमति देगा, जहाँ उन्हें जारी किया जा सकेगा। -When connected with the vesting contract on Explorer, you should see a button to initialize your L2 vesting contract. Follow that process first, and you will then see the buttons to transfer your stake or delegation in your profile. ​ +Explorer पर vesting contract से connect होने पर, आपको अपने L2 vesting contract को आरंभ करने के लिए एक button दिखाई देगा। पहले उस प्रक्रिया का पालन करें, और फिर आप अपनी profile में अपनी stake या delegation को स्थानांतरित करने के लिए button देखेंगे। ​ -### If I initialize my L2 vesting contract, will this also transfer my delegation to L2 automatically? +### यदि मैं अपना L2 vesting contract प्रारंभ करता हूँ, तो क्या इससे मेरा delegation स्वचालित रूप से L2 में स्थानांतरित हो जाएगा? -​No, initializing your L2 vesting contract is a prerequisite for transferring stake or delegation from the vesting contract, but you still need to transfer these separately.
+​नहीं, अपने L2 vesting contract को आरंभ करना, vesting contract से stake या delegation को स्थानांतरित करने के लिए एक पूर्व-शर्त है, लेकिन आपको अभी भी इन्हें अलग से स्थानांतरित करने की आवश्यकता है। -You will see a banner on your profile prompting you to transfer your stake or delegation after you have initialized your L2 vesting contract. +अपना L2 vesting contract आरंभ करने के बाद, आपको अपनी profile पर एक banner दिखाई देगा जो आपको अपनी stake या delegation स्थानांतरित करने के लिए प्रेरित करेगा। ### क्या मैं अपने निहित अनुबंध को वापस L1 पर ले जा सकता हूँ? diff --git a/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-guide.mdx index 22cea8b3617f..3f09f2032b44 100644 --- a/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph has made it easy to move to L2 on Arbitrum One. For each protocol part इन टूल्स के बारे में कुछ सामान्य प्रश्नों के उत्तर [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/) में दिए गए हैं। FAQs में इन टूल्स का उपयोग कैसे करें, वे कैसे काम करते हैं, और उनका उपयोग करते समय ध्यान में रखने वाली बातें विस्तृत रूप से समझाई गई हैं। -## अपने सबग्राफ को आर्बिट्रम (L2) में कैसे स्थानांतरित करें +## अपने सबग्राफ को Arbitrum (L2) में स्थानांतरित कैसे करें -## अपने सबग्राफ़ स्थानांतरित करने के लाभ +## अपने सबग्राफ को ट्रांसफर करने के लाभ ग्राफ़ का समुदाय और मुख्य डेवलपर पिछले वर्ष से आर्बिट्रम में जाने की तैयारी कर रहे हैं (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)। आर्बिट्रम, एक परत 2 या "एल2" ब्लॉकचेन, एथेरियम से सुरक्षा प्राप्त करता है लेकिन काफी कम गैस शुल्क प्रदान करता है। -जब आप अपने सबग्राफ को दी ग्राफ नेटवर्क पर प्रकाशित या अपग्रेड करते हैं, तो आप प्रोटोकॉल पर स्मार्ट कॉन्ट्रैक्ट्स के साथ इंटरैक्ट कर रहे होते हैं और इसके लिए ईथरियम (ETH) का उपयोग करके गैस के लिए भुगतान करना आवश्यक होता है। अपने सबग्राफ को Arbitrum पर स्थानांतरित
करके, आपके सबग्राफ के किसी भी भविष्य के अपडेट के लिए गैस शुल्क बहुत कम होगा। कम शुल्कों के साथ, और L2 पर क्यूरेशन बॉन्डिंग कर्व्स फ्लैट होने के कारण, अन्य क्यूरेटर्स को भी आपके सबग्राफ पर क्यूरेट करने में आसानी होगी, जिससे आपके सबग्राफ पर इंडेक्सर्स के लिए पुरस्कार बढ़ेंगे। इस कम लागत वाले वातावरण से इंडेक्सर्स को आपके सबग्राफ को इंडेक्स करने और सेव करने में सस्तापन होगा। आगामी महीनों में Arbitrum पर इंडेक्सिंग पुरस्कार बढ़ जाएगा और ईथिरियम मेननेट पर कम हो जाएगा, इसलिए और भी अधिक इंडेक्सर्स अपने स्टेक को स्थानांतरित करेंगे और उनके संचालन को L2 पर सेटअप करेंगे। +जब आप अपने सबग्राफ को The Graph Network पर प्रकाशित या अपग्रेड करते हैं, तो आप प्रोटोकॉल पर स्मार्ट contracts के साथ इंटरैक्ट कर रहे होते हैं, और इसके लिए ETH का उपयोग करके गैस शुल्क का भुगतान करना आवश्यक होता है। अपने सबग्राफ को Arbitrum में स्थानांतरित करने से, आपके सबग्राफ के भविष्य के किसी भी अपडेट के लिए बहुत कम गैस शुल्क की आवश्यकता होगी। कम शुल्क, और L2 पर क्यूरेशन बॉन्डिंग कर्व्स के फ्लैट होने के कारण, अन्य Curators के लिए आपके सबग्राफ पर क्यूरेट करना आसान हो जाता है, जिससे आपके सबग्राफ पर Indexers के लिए पुरस्कार बढ़ जाते हैं। यह कम लागत वाला वातावरण Indexers के लिए आपके सबग्राफ को इंडेक्स और सर्व करने की लागत को भी कम कर देता है। आने वाले महीनों में Arbitrum पर Indexing पुरस्कार बढ़ेंगे और Ethereum मेननेट पर घटेंगे, जिससे अधिक से अधिक Indexers अपनी स्टेक ट्रांसफर कर रहे हैं और L2 पर अपनी ऑपरेशन्स सेटअप कर रहे हैं। -## सिग्नल, आपके L1 सबग्राफ और क्वेरी URL के साथ जो होता है, उसे समझने की प्रक्रिया: +## सिग्नल, आपके L1 सबग्राफ और क्वेरी URLs के साथ क्या होता है, यह समझना -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer.
+सबग्राफ को Arbitrum पर ट्रांसफर करने के लिए Arbitrum GRT ब्रिज का उपयोग किया जाता है, जो कि मूल Arbitrum ब्रिज का उपयोग करके सबग्राफ को L2 पर भेजता है। "ट्रांसफर" मुख्य नेटवर्क पर सबग्राफ को निष्क्रिय कर देगा और ब्रिज का उपयोग करके L2 पर सबग्राफ को फिर से बनाने के लिए जानकारी भेजेगा। यह सबग्राफ मालिक द्वारा संकेतित GRT को भी शामिल करेगा, जो ब्रिज द्वारा ट्रांसफर स्वीकार करने के लिए शून्य से अधिक होना आवश्यक है। -जब आप सबग्राफ को स्थानांतरित करने का विकल्प चुनते हैं, तो यह सबग्राफ के सभी क्यूरेशन सिग्नल को GRT में रूपांतरित कर देगा। इसका मतलब है कि मुख्यनेट पर सबग्राफ को "विलीन" किया जाएगा। आपके क्यूरेशन के अनुरूप GRT को सबग्राफ के साथ L2 पर भेजा जाएगा, जहां वे आपके प्रतिनिधित्व में सिग्नल निर्माण करने के लिए उपयोग होंगे। +जब आप सबग्राफ को ट्रांसफर करने का विकल्प चुनते हैं, तो यह सबग्राफ के सभी क्यूरेशन सिग्नल को GRT में बदल देगा। यह मुख्य नेटवर्क पर सबग्राफ को "डिप्रिकेट" करने के समान है। आपकी क्यूरेशन के अनुरूप GRT को L2 पर Subgraph के साथ भेजा जाएगा, जहाँ इनका उपयोग आपके लिए सिग्नल मिंट करने के लिए किया जाएगा। -अन्य क्यूरेटर्स का विकल्प होता है कि क्या वे अपने अंशिक GRT को विद्वेष्टित करें या उसे भी L2 पर स्थानांतरित करें ताकि वे उसी सबग्राफ पर सिग्नल निर्मित कर सकें। अगर कोई सबग्राफ का मालिक अपने सबग्राफ को L2 पर स्थानांतरित नहीं करता है और अधिकारिक रूप से उसे एक कॉन्ट्रैक्ट कॉल के माध्यम से विलीन करता है, तो क्यूरेटर्स को सूचित किया जाएगा और उन्हें उनके क्यूरेशन को वापस लेने का अधिकार होगा। +अन्य Curators यह चुन सकते हैं कि वे अपने GRT के भाग को निकालें, या इसे L2 पर स्थानांतरित करके उसी सबग्राफ पर संकेत को मिंट करें। यदि कोई सबग्राफ मालिक अपने सबग्राफ को L2 पर स्थानांतरित नहीं करता है और अनुबंध कॉल के माध्यम से इसे मैन्युअल रूप से अमान्य कर देता है, तो Curators को सूचित किया जाएगा और वे अपने क्यूरेशन को वापस लेने में सक्षम होंगे। -Subgraph को स्थानांतरित करते ही, curation को GRT में रूपांतरित किये जाने के कारण Indexers को subgraph को index करने के लिए अब और rewards नहीं मिलेगा। हालांकि, ऐसे Indexers भी होंगे जो 1) स्थानांतरित subgraphs की सेवा 24 घंटे तक करते 
रहेंगे और 2) तुरंत L2 पर subgraph को indexing करने की प्रारंभ करेंगे। क्योंकि इन Indexers ने पहले से ही subgraph को indexed किया होता है, इसलिए subgraph को sync करने की प्रतीक्षा करने की आवश्यकता नहीं होगी, और L2 subgraph को तकनीकी रूप से तुरंत carry किया जा सकेगा। +जैसे ही सबग्राफ ट्रांसफर हो जाता है, क्योंकि सारी curation GRT में कन्वर्ट हो जाती है, Indexers को अब सबग्राफ को index करने के लिए कोई रिवॉर्ड नहीं मिलेगा। हालांकि, कुछ Indexers होंगे जो 1) ट्रांसफर किए गए सबग्राफ को 24 घंटे तक सर्व करते रहेंगे, और 2) तुरंत L2 पर सबग्राफ को index करना शुरू कर देंगे। चूंकि इन Indexers के पास पहले से ही सबग्राफ indexed है, इसलिए सबग्राफ को sync होने का इंतजार करने की कोई आवश्यकता नहीं होगी, और L2 Subgraph को लगभग तुरंत क्वेरी करना संभव होगा। -L2 सबग्राफ के क्वेरी को एक विभिन्न URL पर ( 'arbitrum-gateway.thegraph.com' पर) किया जाना चाहिए, लेकिन L1 URL काम करना जारी रखेगा कम से कम 48 घंटे तक। उसके बाद, L1 गेटवे क्वेरी को L2 गेटवे के लिए आगे प्रेषित करेगा (कुछ समय के लिए), लेकिन इससे लैटेंसी बढ़ सकती है, इसलिए संभावना है कि आपको सभी क्वेरी को नए URL पर जल्द से जल्द स्विच कर लेने की सिफारिश की जाए। +L2 सबग्राफ के लिए क्वेरी अब एक अलग URL (`arbitrum-gateway.thegraph.com`) पर की जानी चाहिए, लेकिन L1 URL कम से कम 48 घंटे तक काम करता रहेगा। उसके बाद, L1 गेटवे कुछ समय के लिए क्वेरी को L2 गेटवे पर फॉरवर्ड करेगा, लेकिन इससे विलंब (latency) बढ़ जाएगा, इसलिए सभी क्वेरी को जल्द से जल्द नए URL पर स्विच करने की सिफारिश की जाती है। ## अपना L2 वॉलेट चुनना -जब आपने मुख्यनेट पर अपने सबग्राफ को प्रकाशित किया, तो आपने एक कनेक्टेड वॉलेट का उपयोग सबग्राफ बनाने के लिए किया और यह वॉलेट वह NFT स्वामित्व करता है जो इस सबग्राफ का प्रतिनिधित्व करता है और आपको अपडेट प्रकाशित करने की अनुमति देता है। +जब आपने अपना सबग्राफ मुख्य नेटवर्क पर प्रकाशित किया, तो आपने सबग्राफ बनाने के लिए एक जुड़े हुए वॉलेट का उपयोग किया, और यह वॉलेट उस NFT का मालिक है जो इस सबग्राफ का प्रतिनिधित्व करता है और आपको अपडेट प्रकाशित करने की अनुमति देता है। -सबग्राफ को Arbitrum पर स्थानांतरित करते समय, आप एक विभिन्न वॉलेट का चयन कर 
सकते हैं जो L2 पर इस सबग्राफ NFT का स्वामित्व करेगा। +जब सबग्राफ को Arbitrum में ट्रांसफर किया जाता है, तो आप एक अलग वॉलेट चुन सकते हैं जो L2 पर इस सबग्राफ NFT का मालिक होगा। अगर आप "सामान्य" wallet जैसे MetaMask का उपयोग कर रहे हैं (जिसे externally owned account या EOA कहा जाता है, यानी ऐसा wallet जो smart contract नहीं है), तो यह वैकल्पिक है, और सिफारिश की जाती है कि आप L1 के समान मालिक पता ही बनाए रखें। -अगर आप स्मार्ट कॉन्ट्रैक्ट वॉलेट का उपयोग कर रहे हैं, जैसे कि मल्टिसिग (उदाहरणस्वरूप, एक सेफ), तो एक विभिन्न L2 वॉलेट पता चुनना अनिवार्य है, क्योंकि यह बहुत संभावना है कि यह खाता केवल मुख्यनेट पर मौजूद है और आप इस वॉलेट का उपयोग अर्बिट्रम पर लेन-देन करने के लिए नहीं कर सकते हैं। अगर आप स्मार्ट कॉन्ट्रैक्ट वॉलेट या मल्टिसिग का उपयोग करना चाहते हैं, तो अर्बिट्रम पर एक नया वॉलेट बनाएं और उसका पता अपने सबग्राफ के L2 मालिक के रूप में उपयोग करें। +यदि आप एक स्मार्ट contract वॉलेट का उपयोग कर रहे हैं, जैसे कि मल्टीसिग (जैसे कि Safe), तो एक अलग L2 वॉलेट एड्रेस चुनना अनिवार्य है, क्योंकि यह संभावना है कि यह खाता केवल मेननेट पर मौजूद हो और आप इस वॉलेट का उपयोग करके Arbitrum पर लेन-देन(transaction) नहीं कर पाएंगे। यदि आप स्मार्ट कॉन्ट्रैक्ट वॉलेट या मल्टीसिग का उपयोग जारी रखना चाहते हैं, तो Arbitrum पर एक नया वॉलेट बनाएं और इसके एड्रेस को अपने सबग्राफ के L2 ओनर के रूप में उपयोग करें। -**यह महत्वपूर्ण है कि आप एक वॉलेट पता का उपयोग करें जिस पर आपका नियंत्रण है, और जिससे आप अर्बिट्रम पर लेन-देन कर सकते हैं। अन्यथा, सबग्राफ हानि हो जाएगा और उसे पुनः प्राप्त नहीं किया जा सकता।** +**यह बहुत महत्वपूर्ण है कि आप एक ऐसे वॉलेट पते का उपयोग करें जिसे आप नियंत्रित कर सकते हैं और जो Arbitrum पर लेनदेन कर सकता है। अन्यथा, सबग्राफ खो जाएगा और इसे पुनर्प्राप्त नहीं किया जा सकेगा।** ## स्थानांतरण के लिए तैयारी: कुछ ETH को ब्रिज करना -सबग्राफ को स्थानांतरित करने में एक लेन-देन को ब्रिज के माध्यम से भेजना शामिल है, और फिर अर्बिट्रम पर एक और लेन-देन को प्रारंभ करना। पहली लेन-देन मुख्यनेट पर ETH का उपयोग करता है, और जब संदेश L2 पर प्राप्त होता है, तो गैस के भुगतान के लिए कुछ ETH को शामिल करता
है। हालांकि, अगर यह गैस पर्याप्त नहीं होता है, तो आपको लेन-देन को पुनः प्रयास करना होगा और गैस के लिए सीधे L2 पर भुगतान करना होगा (यह "चरण 3: स्थानांतरण की पुष्टि करना" है, नीचे दिए गए हैं)। यह कदम **स्थानांतरण की प्रारंभिक करने के 7 दिनों के भीतर कार्यान्वित किया जाना चाहिए।** इसके अलावा, दूसरी लेन-देन ("चरण 4: L2 पर स्थानांतरण को समाप्त करना") को सीधे अर्बिट्रम पर किया जाएगा। इन कारणों से, आपको किसी एक Arbitrum वॉलेट पर कुछ ETH की आवश्यकता होगी। यदि आप मल्टिसिग या स्मार्ट कॉन्ट्रैक्ट खाता का उपयोग कर रहे हैं, तो ETH को उन्हीं सामान्य (EOA) वॉलेट में होना चाहिए जिसका आप लेन-देन कार्यान्वित करने के लिए उपयोग कर रहे हैं, मल्टिसिग वॉलेट में नहीं। +सबग्राफ ट्रांसफर करने की प्रक्रिया में ब्रिज के माध्यम से एक लेन-देन(transaction) भेजना शामिल होता है, और फिर Arbitrum पर एक और लेन-देन(transaction) को निष्पादित करना होता है। पहला लेन-देन(transaction) मेननेट पर ETH का उपयोग करता है और इसमें कुछ ETH शामिल होता है ताकि जब संदेश L2 पर प्राप्त हो, तो गैस शुल्क का भुगतान किया जा सके। हालाँकि, यदि यह गैस अपर्याप्त होती है, तो आपको लेन-देन(transaction) को पुनः प्रयास करना होगा और सीधे L2 पर गैस शुल्क का भुगतान करना होगा (यह नीचे दिए गए "Step 3: Confirming the transfer" का हिस्सा है)। यह स्टेप **ट्रांसफर शुरू करने के 7 दिनों के भीतर निष्पादित किया जाना चाहिए।** इसके अलावा, दूसरा लेन-देन(transaction) ("Step 4: Finishing the transfer on L2") सीधे Arbitrum पर किया जाएगा। इन कारणों से, आपके पास Arbitrum वॉलेट में कुछ ETH होना आवश्यक है। यदि आप multisig या स्मार्ट कॉन्ट्रैक्ट अकाउंट का उपयोग कर रहे हैं, तो ETH को उस नियमित (EOA) वॉलेट में होना चाहिए जिसका उपयोग आप ट्रांज़ैक्शन निष्पादित करने के लिए कर रहे हैं, न कि multisig वॉलेट में। आप कुछ एक्सचेंजों पर ETH खरीद सकते हैं और उसे सीधे अर्बिट्रम पर निकाल सकते हैं, या आप अर्बिट्रम ब्रिज का उपयोग करके ETH को मुख्यनेट वॉलेट से L2 में भेज सकते हैं: [bridge.arbitrum.io](http://bridge.arbitrum.io)। क्योंकि अर्बिट्रम पर गैस शुल्क कम होते हैं, आपको केवल थोड़ी सी राशि की आवश्यकता होनी चाहिए। यह सिफारिश की जाती है कि आप अपने लेन-देन को
स्वीकृति प्राप्त करने के लिए कम थ्रेशहोल्ड (उदाहरणस्वरूप 0.01 ETH) से प्रारंभ करें। -## सबग्राफ ट्रांसफर टूल ढूँढना +## सबग्राफ ट्रांसफर टूल खोजना -आप सबग्राफ स्टूडियो पर अपने सबग्राफ के पेज को देखते समय L2 ट्रांसफर टूल पा सकते हैं: +आप अपने सबग्राफ के पेज पर सबग्राफ Studio में जाकर L2 Transfer Tool पा सकते हैं: - ![transfer tool](/img/L2-transfer-tool1.png) -यह भी उपलब्ध है एक्सप्लोरर पर अगर आप ऐसे वॉलेट से कनेक्ट हो जाते हैं जिसका सबग्राफ का स्वामित्व है, और उस सबग्राफ के पेज पर एक्सप्लोरर पर: +यह Explorer पर भी उपलब्ध है यदि आप उस वॉलेट से जुड़े हैं जो किसी सबग्राफ का मालिक है और आप Explorer पर उस सबग्राफ के पेज पर हैं: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ L2 सबग्राफ के क्वेरी को एक विभिन ## चरण 1: स्थानांतरण की प्रारंभिक कदम -स्थानांतरण की प्रारंभिक करने से पहले, आपको तय करना होगा कि L2 पर सबग्राफ का स्वामित्व किस पते पर होगा (ऊपर "अपने L2 वॉलेट का चयन करना" देखें), और यह मजबूती से सिफारिश की जाती है कि अर्बिट्रम पर गैस के लिए कुछ ETH ब्रिज कर दिया गया हो (ऊपर "स्थानांतरण की तैयारी: कुछ ETH को ब्रिज करना" देखें)। +इससे पहले कि आप ट्रांसफर शुरू करें, आपको यह तय करना होगा कि L2 पर कौन सा एड्रेस सबग्राफ का स्वामी होगा (देखें "अपना L2 वॉलेट चुनना" ऊपर), और यह अत्यधिक अनुशंसा की जाती है कि आपके पास पहले से ही Arbitrum पर कुछ ETH गैस के लिए ब्रिज किया हुआ हो (देखें "ट्रांसफर की तैयारी: कुछ ETH ब्रिज करना" ऊपर)। -यह भी ध्यान दें कि सबग्राफ को स्थानांतरित करने के लिए सबग्राफ के साथ एक ही खाते में कोई भी सिग्नल की गई राशि होनी चाहिए; अगर आपने सबग्राफ पर सिग्नल नहीं किया है तो आपको थोड़ी सी क्यूरेशन जोड़नी होगी (एक छोटी राशि जैसे 1 GRT जोड़ना काफी होगा)। +सबग्राफ को ट्रांसफर करने के लिए आवश्यक है कि सबग्राफ के मालिक खाते से उस सबग्राफ पर कुछ न कुछ सिग्नल मौजूद हो; यदि आपने सबग्राफ पर सिग्नल नहीं किया है, तो आपको थोड़ा सा क्यूरेशन जोड़ना होगा (जैसे 1 GRT जोड़ना पर्याप्त होगा)। -स्थानांतरण टूल खोलने के बाद, आपको "प्राप्ति वॉलेट पता" फ़ील्ड में L2 वॉलेट पता दर्ज करने की अनुमति मिलेगी - **सुनिश्चित करें कि आपने यहाँ सही पता डाला है।**
"सबग्राफ स्थानांतरित करें" पर क्लिक करने से आपको अपने वॉलेट पर लेन-देन कार्यान्वित करने के लिए प्रोम्प्ट किया जाएगा (ध्यान दें कि L2 गैस के भुगतान के लिए कुछ ETH मान शामिल है)। इससे स्थानांतरण प्रारंभ होगा और आपका L1 सबग्राफ विलीन हो जाएगा (इसके पीछे के प्रक्रिया के बारे में अधिक जानकारी के लिए "सिग्नल, आपके L1 सबग्राफ और क्वेरी URL के साथ क्या होता है की समझ" देखें)। +ट्रांसफर टूल खोलने के बाद, आप "Receiving wallet address" फ़ील्ड में L2 वॉलेट पता दर्ज कर सकते हैं - **सुनिश्चित करें कि आपने यहां सही पता दर्ज किया है।** "Transfer सबग्राफ" पर क्लिक करने से आपको अपने वॉलेट में लेन-देन(transaction) निष्पादित करने के लिए संकेत मिलेगा (ध्यान दें कि L2 गैस के भुगतान के लिए इसमें कुछ ETH मूल्य शामिल होता है); इससे ट्रांसफर शुरू होगा और आपका L1 सबग्राफ अप्रचलित हो जाएगा। (इस प्रक्रिया के पीछे क्या होता है, इसे समझने के लिए ऊपर "Understanding what happens with signal, your L1 सबग्राफ and query URLs" अनुभाग देखें)। -इस कदम को कार्यान्वित करते समय, **सुनिश्चित करें कि आप 7 दिन से कम समय में चरण 3 को पूरा करने जाते हैं, अन्यथा सबग्राफ और आपका सिग्नल GRT हानि हो सकते हैं।** यह अर्बिट्रम पर L1-L2 संदेशिकरण कैसे काम करता है के कारण है: ब्रिज के माध्यम से भेजे गए संदेश "पुनः प्रयासनीय टिकट" होते हैं जिन्हें 7 दिन के भीतर कार्यान्वित किया जाना चाहिए, और पहले कार्यान्वयन में अगर अर्बिट्रम पर गैस की मूल्य में वृद्धि होती है तो पुनः प्रयास की आवश्यकता हो सकती है। +इस चरण को निष्पादित करते समय, **सुनिश्चित करें कि आप 7 दिनों से कम समय में चरण 3 पूरा कर लें, अन्यथा सबग्राफ और आपका signal GRT खो जाएगा।** यह Arbitrum पर L1-L2 मैसेजिंग के काम करने के तरीके के कारण है: ब्रिज के माध्यम से भेजे गए संदेश "retry-able tickets" होते हैं, जिन्हें 7 दिनों के भीतर निष्पादित किया जाना आवश्यक होता है, और यदि Arbitrum पर गैस की कीमत में उतार-चढ़ाव होता है, तो प्रारंभिक निष्पादन को पुनः प्रयास करने की आवश्यकता हो सकती है। ![Start the transfer to L2](/img/startTransferL2.png) -## चरण 2: सबग्राफ को L2 तक पहुँचने की प्रतीक्षा करना +## चरण 2: सबग्राफ के L2 तक पहुंचने की प्रतीक्षा करना -जब आप
स्थानांतरण की प्रारंभिक करते हैं, तो आपके L1 सबग्राफ को L2 भेजने वाले संदेश को अर्बिट्रम ब्रिज के माध्यम से प्रसारित होना चाहिए। यह लगभग 20 मिनट लगता है (ब्रिज मुख्यनेट ब्लॉक को "सुरक्षित" बनाने के लिए प्रत्येक लेनदेन के मुख्यनेट ब्लॉक के लिए प्रतीक्षा करता है, जिसमें संभावित चेन रीआर्ग से बचाया जा सकता है)। +ट्रांसफर शुरू करने के बाद, संदेश जो आपके L1 सबग्राफ को L2 पर भेजता है, उसे Arbitrum ब्रिज के माध्यम से प्रसारित होना चाहिए। इसमें लगभग 20 मिनट लगते हैं (ब्रिज मुख्यनेट ब्लॉक का इंतजार करता है जिसमें लेन-देन "सुरक्षित" हो ताकि संभावित चेन रीऑर्ग से बचा जा सके)। इस प्रतीक्षा काल के बाद, अर्बिट्रम L2 अनुबंधों पर स्थानांतरण को स्वतः कार्यान्वित करने का प्रयास करेगा। @@ -80,7 +80,7 @@ L2 सबग्राफ के क्वेरी को एक विभिन ## चरण 3: स्थानांतरण की पुष्टि करना -अधिकांश मामलों में, यह कदम स्वचालित रूप से क्रियान्वित हो जाएगा क्योंकि स्टेप 1 में शामिल एल2 गैस काफी होता है ताकि आर्बिट्रम कॉन्ट्रैक्ट पर सबग्राफ प्राप्त करने वाले लेनदेन को क्रियान्वित किया जा सके। हालांकि, कुछ मामलों में, यह संभावित है कि आर्बिट्रम पर गैस मूल्यों में एक उछाल के कारण यह स्वचालित क्रियान्वित होने में विफल हो सकता है। इस मामले में, जो "टिकट" आपके सबग्राफ को एल2 पर भेजता है, वह लंबित हो जाएगा और 7 दिनों के भीतर पुनः प्रयास की आवश्यकता होगी। +ज्यादातर मामलों में, यह चरण स्वचालित रूप से निष्पादित हो जाएगा क्योंकि चरण 1 में शामिल L2 गैस आमतौर पर उस लेन-देन को निष्पादित करने के लिए पर्याप्त होती है जो Arbitrum कॉन्ट्रैक्ट्स पर सबग्राफ प्राप्त करता है। हालाँकि, कुछ मामलों में, यह संभव है कि Arbitrum पर गैस की कीमतों में अचानक वृद्धि के कारण यह स्वचालित निष्पादन विफल हो जाए। ऐसे में, जो "टिकट" आपके सबग्राफ को L2 पर भेजता है, वह लंबित रहेगा और इसे 7 दिनों के भीतर पुनः प्रयास करने की आवश्यकता होगी। यदि यह मामला आपके साथ होता है, तो आपको ऐसे L2 वॉलेट का उपयोग करके कनेक्ट करना होगा जिसमें आर्बिट्रम पर कुछ ETH हो, अपनी वॉलेट नेटवर्क को आर्बिट्रम पर स्विच करना होगा, और "पुनः प्रयास की पुष्टि करें" पर क्लिक करके लेन-देन को पुनः प्रयास करने के लिए। @@ -88,33 +88,33 @@ L2 सबग्राफ के क्वेरी को एक विभिन ## चरण
4: L2 पर स्थानांतरण समाप्त करना -इस बिंदु पर, आपका सबग्राफ और GRT आर्बिट्रम पर प्राप्त हो चुके हैं, लेकिन सबग्राफ अबतक प्रकाशित नहीं हुआ है। आपको वह एल2 वॉलेट का उपयोग करके कनेक्ट करना होगा जिसे आपने प्राप्ति वॉलेट के रूप में चुना है, अपने वॉलेट नेटवर्क को आर्बिट्रम पर स्विच करना होगा, और "पब्लिश सबग्राफ" पर क्लिक करना होगा। +आपका सबग्राफ और GRT अब Arbitrum पर प्राप्त हो चुका है, लेकिन सबग्राफ अभी प्रकाशित नहीं किया गया है। आपको उस L2 वॉलेट का उपयोग करके कनेक्ट करना होगा जिसे आपने प्राप्त करने वाले वॉलेट के रूप में चुना था, अपने वॉलेट नेटवर्क को Arbitrum में स्विच करना होगा, और "प्रकाशित सबग्राफ" पर क्लिक करना होगा। -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![सबग्राफ प्रकाशित करें](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![सबग्राफ प्रकाशित होने की प्रतीक्षा करें](/img/waitForSubgraphToPublishL2TransferTools.png) -इससे सबग्राफ प्रकाशित हो जाएगा ताकि Arbitrum पर काम करने वाले इंडेक्सर उसकी सेवा करना शुरू कर सकें। यह भी उसी GRT का करेशन सिग्नल मिन्ट करेगा जो L1 से स्थानांतरित हुए थे। +यह सबग्राफ को प्रकाशित करेगा ताकि Arbitrum पर कार्यरत Indexers इसे सर्व करना शुरू कर सकें। यह L1 से स्थानांतरित किए गए GRT का उपयोग करके क्यूरेशन सिग्नल भी मिंट करेगा। ## चरण 5: क्वेरी URL को अपडेट करना -आपकी सबग्राफ सफलतापूर्वक Arbitrum में स्थानांतरित की गई है! सबग्राफ का प्रश्न करने के लिए, नया URL होगा: +आपका सबग्राफ सफलतापूर्वक Arbitrum पर स्थानांतरित कर दिया गया है!
सबग्राफ को क्वेरी करने के लिए, नया URL होगा: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -ध्यान दें कि आर्बिट्रम पर सबग्राफ आईडी मुख्यनेट पर जितना भिन्न होगा, लेकिन आप हमेशा इसे एक्सप्लोरर या स्टूडियो पर ढूंढ सकते हैं। जैसा कि पहले उल्लिखित किया गया है ("सिग्नल, आपके L1 सबग्राफ और क्वेरी URL के साथ क्या होता है" देखें), पुराना L1 URL कुछ समय तक समर्थित किया जाएगा, लेकिन आपको सबग्राफ को L2 पर सिंक होने के बाद नए पते पर अपने क्वेरी को स्विच कर देना चाहिए। +सबग्राफ आईडी Arbitrum पर आपके मेननेट पर मौजूद आईडी से अलग होगी, लेकिन आप इसे हमेशा Explorer या Studio पर पा सकते हैं। जैसा कि ऊपर उल्लेख किया गया है (देखें "Understanding what happens with signal, your L1 सबग्राफ and query URLs"), पुराना L1 URL थोड़े समय के लिए समर्थित रहेगा, लेकिन आपको जैसे ही सबग्राफ L2 पर सिंक हो जाए, अपने क्वेरीज़ को नए पते पर स्विच कर लेना चाहिए। ## अपने क्यूरेशन को आर्बिट्रम (L2) में कैसे स्थानांतरित करें -## यह समझना कि एल2 में सबग्राफ़ स्थानांतरण पर क्यूरेशन का क्या होता है +## सबग्राफ को L2 पर ट्रांसफर करने पर क्यूरेशन के साथ क्या होता है, इसे समझना -जब कोई सबग्राफ के मालिक सबग्राफ को आर्बिट्रम पर ट्रांसफर करते हैं, तो सबग्राफ की सभी सिग्नल को एक साथ GRT में रूपांतरित किया जाता है। यह "ऑटो-माइग्रेटेड" सिग्नल के लिए भी लागू होता है, अर्थात्, सिग्नल जो सबग्राफ के किसी वर्शन या डिप्लॉयमेंट के लिए विशिष्ट नहीं है, लेकिन जो सबग्राफ के नवीनतम संस्करण का पालन करते हैं। +जब किसी सबग्राफ का मालिक एक सबग्राफ को Arbitrum में ट्रांसफर करता है, तो उस Subgraph का सारा signal एक ही समय में GRT में कन्वर्ट हो जाता है। यह "ऑटो-माइग्रेटेड" signal पर लागू होता है, यानी ऐसा signal जो किसी विशेष सबग्राफ संस्करण या डिप्लॉयमेंट से जुड़ा नहीं होता, बल्कि किसी सबग्राफ के नवीनतम संस्करण का अनुसरण करता है। -सिग्नल से GRT में इस परिवर्तन को वही होता है जो होता है अगर सबग्राफ के मालिक ने L1 में सबग्राफ को विच्छेद किया होता। जब सबग्राफ को विच्छेदित या स्थानांतरित किया जाता है, तो सभी क्यूरेशन सिग्नल को समयानुसार "जलाया" जाता है (क्यूरेशन बॉन्डिंग कर्व का उपयोग करके) और परिणित GRT 
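ऊपर बताए गए नए gateway URL के उपयोग को स्पष्ट करने के लिए एक छोटा sketch (यह मानते हुए कि `apiKey` और `l2SubgraphId` placeholders हैं, और `buildL2QueryUrl` / `queryL2Subgraph` नाम केवल उदाहरण के लिए हैं, किसी आधिकारिक client का हिस्सा नहीं):

```javascript
// उदाहरण sketch: नया L2 gateway query URL बनाना और उस पर एक GraphQL
// क्वेरी POST करना। apiKey और l2SubgraphId placeholders हैं — इन्हें
// अपनी वास्तविक API key और L2 सबग्राफ ID से बदलें।
function buildL2QueryUrl(apiKey, l2SubgraphId) {
  return `https://arbitrum-gateway.thegraph.com/api/${apiKey}/subgraphs/id/${l2SubgraphId}`;
}

// मानक fetch से gateway पर GraphQL क्वेरी भेजता है और JSON response लौटाता है।
async function queryL2Subgraph(apiKey, l2SubgraphId, query) {
  const res = await fetch(buildL2QueryUrl(apiKey, l2SubgraphId), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return res.json();
}
```

उदाहरण के लिए, `queryL2Subgraph("MY_API_KEY", "MY_L2_SUBGRAPH_ID", "{ _meta { block { number } } }")` जैसी `_meta` क्वेरी से जाँचा जा सकता है कि सबग्राफ L2 पर sync हो रहा है या नहीं।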
को GNS स्मार्ट कॉन्ट्रैक्ट द्वारा रखा जाता है (जो सबग्राफ अपग्रेड और ऑटो-माइग्रेटेड सिग्नल को संभालता है)। इस प्रकार, उस सबग्राफ के प्रत्येक क्यूरेटर के पास उस GRT का दावा होता है जो उनके लिए उपग्रहानुशासित था। +यह रूपांतरण सिग्नल से GRT में उसी प्रकार होता है जैसे कि अगर Subgraph का मालिक L1 में सबग्राफ को डिप्रिकेट कर दे। जब सबग्राफ को डिप्रिकेट या ट्रांसफर किया जाता है, तो सभी क्यूरेशन सिग्नल एक साथ "बर्न" हो जाते हैं (क्यूरेशन बॉन्डिंग कर्व का उपयोग करके) और उत्पन्न हुआ GRT GNS स्मार्ट contract द्वारा रखा जाता है (जो कि Subgraph अपग्रेड और ऑटो-माइग्रेटेड सिग्नल को हैंडल करता है)। इस प्रकार, उस सबग्राफ के प्रत्येक Curator के पास उस GRT पर दावा करने का अधिकार होता है, जो उनके पास सबग्राफ के लिए उपलब्ध शेयरों के अनुपात में होता है। -इन जीआरटी की एक भाग, जो सबग्राफ के मालिक के संवर्ग के साथ मेल खाते हैं, वह एल2 में भेजे जाते हैं। +इन GRT का एक अंश, जो सबग्राफ के मालिक से संबंधित है, सबग्राफ के साथ L2 पर भेजा जाता है। -इस बिंदु पर, क्यूरेटेड GRT को अब और क्वेरी शुल्क नहीं बढ़ेंगे, इसलिए क्यूरेटर्स अपने GRT को वापस निकालने का चयन कर सकते हैं या उसे L2 पर उसी सबग्राफ में ट्रांसफर कर सकते हैं, जहां उसे नई क्यूरेशन सिग्नल बनाने के लिए उपयोग किया जा सकता है। इसे करने के लिए कोई जल्दी नहीं है क्योंकि GRT को अनिश्चितकाल तक रखा जा सकता है और हर कोई अपने हिस्से के अनुपात में एक निश्चित राशि प्राप्त करता है, चाहे वो जब भी करे। +At this point, the curated GRT अब कोई अतिरिक्त क्वेरी शुल्क नहीं जोड़ेगा, इसलिए Curators अपने GRT को निकालने या इसे उसी सबग्राफ पर L2 में स्थानांतरित करने का विकल्प चुन सकते हैं, जहां इसका उपयोग नए क्यूरेशन सिग्नल को मिंट करने के लिए किया जा सकता है। इसे तुरंत करने की कोई आवश्यकता नहीं है क्योंकि GRT को अनिश्चित काल तक रखा जा सकता है और सभी को उनके शेयरों के अनुपात में राशि मिलेगी, इस बात की परवाह किए बिना कि वे इसे कब करते हैं। ## अपना L2 वॉलेट चुनना @@ -130,9 +130,9 @@ L2 सबग्राफ के क्वेरी को एक विभिन ट्रांसफर शुरू करने से पहले, आपको निर्णय लेना होगा कि L2 पर क्यूरेशन किस पते का स्वामित्व करेगा (ऊपर "अपने L2 वॉलेट का चयन करना" देखें), और संदेश को L2 
पर पुनः क्रियान्वित करने की आवश्यकता पड़ने पर आपके पास गैस के लिए पहले से ही कुछ ETH होने की सिफारिश की जाती है। आप कुछ एक्सचेंजों पर ETH खरीद सकते हैं और उसे सीधे Arbitrum पर निकाल सकते हैं, या आप मुख्यनेट वॉलेट से L2 में ETH भेजने के लिए आर्बिट्रम ब्रिज का उपयोग कर सकते हैं: [bridge.arbitrum.io](http://bridge.arbitrum.io) - क्योंकि आर्बिट्रम पर गैस शुल्क इतने कम होते हैं, तो आपको केवल थोड़ी सी राशि की आवश्यकता होगी, जैसे कि 0.01 ETH शायद पर्याप्त हो। -अगर वह सबग्राफ जिसे आप करेशन कर रहे हैं L2 पर स्थानांतरित किया गया है, तो आपको एक संदेश दिखाई देगा जो आपको एक स्थानांतरित सबग्राफ करेशन की जानकारी देगा। +अगर कोई सबग्राफ जिसे आप क्यूरेट कर रहे हैं, L2 पर ट्रांसफर कर दिया गया है, तो आपको Explorer पर एक संदेश दिखाई देगा जो आपको बताएगा कि आप एक ट्रांसफर किए गए सबग्राफ को क्यूरेट कर रहे हैं। -सबग्राफ पेज को देखते समय, आपको करेशन को वापस लेने या स्थानांतरित करने का चयन करने का विकल्प होता है। "Transfer Signal to Arbitrum" पर क्लिक करने से स्थानांतरण उपकरण खुल जाता है। +जब आप सबग्राफ पेज पर देखते हैं, तो आप क्यूरेशन को वापस लेने या ट्रांसफर करने का विकल्प चुन सकते हैं। "ट्रांसफर सिग्नल टू Arbitrum" पर क्लिक करने से ट्रांसफर टूल खुल जाएगा। ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ L2 सबग्राफ के क्वेरी को एक विभिन ## L1 पर अपना कार्यकाल वापस ले रहा हूँ -अगर आप चाहते हैं कि आप अपने GRT को L2 पर नहीं भेजें, या फिर आप पसंद करते हैं कि GRT को मैन्युअल रूप से ब्रिज करें, तो आप अपने क्यूरेटेड GRT को L1 पर निकाल सकते हैं। सबग्राफ पृष्ठ पर बैनर पर, "सिग्नल निकालें" चुनें और लेनदेन की पुष्टि करें; GRT आपके क्यूरेटर पते पर भेज दिया जाएगा। +यदि आप अपना GRT L2 पर भेजना पसंद नहीं करते हैं, या आप GRT को मैन्युअल रूप से ब्रिज करना चाहते हैं, तो आप L1 पर अपने क्यूरेट किए गए GRT को निकाल सकते हैं। सबग्राफ पेज पर बैनर में, "Withdraw Signal" चुनें और लेन-देन(transaction) की पुष्टि करें; GRT आपके Curator पते पर भेज दिया जाएगा। diff --git a/website/src/pages/hi/archived/sunrise.mdx b/website/src/pages/hi/archived/sunrise.mdx index 64396d2fb998..19927b21dce0 
100644 --- a/website/src/pages/hi/archived/sunrise.mdx +++ b/website/src/pages/hi/archived/sunrise.mdx @@ -7,74 +7,74 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## विकेंद्रीकृत डेटा का सूर्योदय क्या था? -"Decentralized Data का उदय" Edge & Node द्वारा आरंभ की गई एक पहल थी। इस पहल ने subgraph डेवलपर्स को The Graph के विकेंद्रीकृत नेटवर्क में सहजता से अपग्रेड करने में सक्षम बनाया। +The Sunrise of Decentralized Data एक पहल थी जिसे Edge & नोड द्वारा शुरू किया गया था। इस पहल ने subgraph डेवलपर्स को The Graph के विकेंद्रीकृत नेटवर्क में सहजता से अपग्रेड करने में सक्षम बनाया। -इस योजना ने The Graph इकोसिस्टम के पिछले विकासों पर आधारित किया, जिसमें नए प्रकाशित सबग्राफ पर क्वेरी सर्व करने के लिए एक अपग्रेडेड इंडेक्सर शामिल था। +यह योजना The Graph इकोसिस्टम में पिछले विकासों पर आधारित थी, जिसमें एक उन्नयन Indexer शामिल था ताकि नए प्रकाशित सबग्राफ पर क्वेरी प्रदान की जा सके। ### Hosted service का क्या होगा? -होस्टेड सेवा के क्वेरी एंडपॉइंट अब उपलब्ध नहीं हैं, और डेवलपर्स होस्टेड सेवा पर नए सबग्राफ्स को तैनात नहीं कर सकते हैं। +होस्टेड सेवा क्वेरी एंडपॉइंट अब उपलब्ध नहीं हैं, और डेवलपर्स होस्टेड सेवा पर नए Subgraph तैनात नहीं कर सकते। -अपग्रेड प्रक्रिया के दौरान, होस्टेड सर्विस सबग्राफ के मालिक अपने सबग्राफ को The Graph Network पर अपग्रेड कर सकते थे। इसके अतिरिक्त, डेवलपर्स ऑटो-अपग्रेड किए गए सबग्राफ को क्लेम करने में सक्षम थे। +होस्टेड सेवा Subgraph के मालिक अपग्रेड प्रक्रिया के दौरान अपने subgraph को The Graph Network में अपग्रेड कर सकते थे। इसके अतिरिक्त, डेवलपर्स स्वचालित रूप से अपग्रेड किए गए Subgraph को क्लेम कर सकते थे। ### क्या इस अपग्रेड से Subgraph Studio प्रभावित हुआ था? नहीं, सबग्राफ स्टूडियो पर Sunrise का कोई प्रभाव नहीं पड़ा। सबग्राफ तुरंत क्वेरी के लिए उपलब्ध थे, जो अपग्रेड किए गए Indexer द्वारा संचालित हैं, जो उसी इंफ्रास्ट्रक्चर का उपयोग करता है जैसा Hosted Service में होता है। -### सबग्राफ्स को Arbitrum पर क्यों प्रकाशित किया गया, क्या इसने एक अलग नेटवर्क को इंडेक्स करना शुरू किया? 
+### क्यों subgraph को Arbitrum पर प्रकाशित किया गया, क्या इसने किसी अलग नेटवर्क को इंडेक्स करना शुरू कर दिया? -The Graph Network को पहले Ethereum mainnet पर डिप्लॉय किया गया था, लेकिन गैस लागत को कम करने के लिए इसे बाद में Arbitrum One पर स्थानांतरित कर दिया गया। परिणामस्वरूप, सभी नए सबग्राफ को Arbitrum पर The Graph Network में प्रकाशित किया जाता है ताकि Indexers उन्हें सपोर्ट कर सकें। Arbitrum वह नेटवर्क है जिस पर सबग्राफ को प्रकाशित किया जाता है, लेकिन सबग्राफ [supported networks](/supported-networks/) में से किसी पर भी index कर सकते हैं +The Graph Network को शुरू में Ethereum mainnet पर डिप्लॉय किया गया था, लेकिन बाद में सभी उपयोगकर्ताओं के लिए गैस लागत कम करने के उद्देश्य से इसे Arbitrum One पर स्थानांतरित कर दिया गया। परिणामस्वरूप, सभी नए subgraph अब Arbitrum पर The Graph Network में प्रकाशित किए जाते हैं ताकि Indexers उन्हें सपोर्ट कर सकें। Arbitrum वह नेटवर्क है जहाँ subgraph प्रकाशित किए जाते हैं, लेकिन सबग्राफ किसी भी [supported networks](/supported-networks/) को इंडेक्स कर सकते हैं। ## About the Upgrade Indexer > अपग्रेड Indexer वर्तमान में सक्रिय है। -अपग्रेड Indexer को Hosted Service से The Graph Network में सबग्राफ़्स के अपग्रेड करने के अनुभव को सुधारने और उन मौजूदा सबग्राफ़्स के नए संस्करणों का समर्थन करने के लिए लागू किया गया था जो अभी तक इंडेक्स नहीं किए गए थे। +सुधार Indexer को लागू किया गया था ताकि hosted service से subgraph को The Graph Network में अपग्रेड करने के अनुभव को बेहतर बनाया जा सके और उन नए संस्करणों का समर्थन किया जा सके जो अभी तक इंडेक्स नहीं किए गए थे। ### अपग्रेड Indexer क्या करता है? 
-- यह उन चेन को बूटस्ट्रैप करता है जिन्हें अभी तक The Graph Network पर इंडेक्सिंग पुरस्कार नहीं मिले हैं और यह सुनिश्चित करता है कि एक Indexer उपलब्ध हो ताकि एक Subgraph प्रकाशित होने के तुरंत बाद क्वेरी को यथाशीघ्र सेवा दी जा सके। +- यह उन चेन को बूटस्ट्रैप करता है जिन्होंने अभी तक The Graph Network पर indexing रिवार्ड्स प्राप्त नहीं किए हैं और यह सुनिश्चित करता है कि एक Indexer उपलब्ध हो ताकि किसी subgraph के प्रकाशित होने के बाद यथासंभव शीघ्र क्वेरीज़ को सर्व किया जा सके। - यह उन chain को भी सपोर्ट करता है जो पहले केवल Hosted Service पर उपलब्ध थीं। सपोर्टेड chain की व्यापक सूची [यहां](/supported-networks/) देखें। -- जो Indexer अपग्रेड इंडेक्सर का संचालन करते हैं, वे नए सबग्राफ़ और अतिरिक्त चेन का समर्थन करने के लिए एक सार्वजनिक सेवा के रूप में ऐसा करते हैं जो इंडेक्सिंग पुरस्कारों की कमी का सामना कर रहे हैं, जब तक कि The Graph काउंसिल उन्हें मंजूरी नहीं देती। +- Indexers जो एक upgrade Indexer को संचालित करते हैं, वे नए subgraph और अतिरिक्त चेन का समर्थन करने के लिए एक सार्वजनिक सेवा के रूप में ऐसा करते हैं, जिन्हें The Graph Council द्वारा अनुमोदित किए जाने से पहले Indexing पुरस्कारों की कमी होती है। -### Why is Edge & Node running the upgrade Indexer? +### Edge & Node upgrade indexer क्यों चला रहे हैं? -Edge & Node ने ऐतिहासिक रूप से होस्टेड सेवा का प्रबंधन किया है और, परिणामस्वरूप, उनके पास होस्टेड सेवा के सबग्राफ के लिए पहले से ही समन्वयित डेटा है। +Edge & Node ऐतिहासिक रूप से होस्टेड सेवा को बनाए रखते थे और परिणामस्वरूप, उनके पास पहले से ही होस्टेड सेवा के लिए सिंक किया हुआ डेटा है subgraph. -### What does the upgrade indexer mean for existing Indexers? +### Existing Indexers के लिए upgrade indexer का क्या मतलब है? 
पहले केवल होस्टेड सेवा पर समर्थित चेन अब बिना indexing पुरस्कार के डेवलपर्स के लिए The Graph Network पर उपलब्ध कराई गईं। -हालांकि, इस कार्रवाई ने किसी भी इच्छुक Indexer के लिए क्वेरी शुल्क को अनलॉक कर दिया और The Graph Network पर प्रकाशित सबग्राफ की संख्या बढ़ा दी। परिणामस्वरूप, Indexers के पास इन सबग्राफ को इंडेक्स करने और सेवा देने के लिए अधिक अवसर हैं, जो कि क्वेरी शुल्क के बदले में हैं, यहां तक कि जब तक किसी चेन के लिए इंडेक्सिंग इनाम सक्षम नहीं होते। +हालांकि, इस कार्रवाई से किसी भी इच्छुक Indexer के लिए क्वेरी शुल्क अनलॉक हो गया और The Graph Network पर प्रकाशित सबग्राफ की संख्या बढ़ गई। परिणामस्वरूप, Indexer को इन सबग्राफ को इंडेक्स करने और क्वेरी शुल्क के बदले सर्व करने के अधिक अवसर मिले, भले ही किसी चेन के लिए indexing रिवॉर्ड सक्षम न किए गए हों। -अपग्रेड इंडेक्सर Indexer समुदाय को The Graph Network पर सबग्राफ और नए चेन की संभावित मांग के बारे में जानकारी भी प्रदान करता है। +अपग्रेड Indexer समुदाय को यह जानकारी भी प्रदान करता है कि The Graph Network पर subgraph और नई चेन की संभावित मांग क्या हो सकती है। -### What does this mean for Delegators? +### Delegators के लिए यह क्या अर्थ है? -अपग्रेड Indexer डेलीगेटर्स के लिए एक शक्तिशाली अवसर प्रदान करता है। क्योंकि इससे अधिक सबग्राफ को होस्टेड सेवा से The Graph Network में अपग्रेड करने की अनुमति मिली, डेलीगेटर्स को बढ़ी हुई नेटवर्क गतिविधि का लाभ मिलता है। +सुधार Indexer एक शक्तिशाली अवसर प्रदान करता है Delegators के लिए। जैसे ही अधिक subgraph को होस्टेड सेवा से The Graph Network में अपग्रेड किया गया, Delegators को नेटवर्क गतिविधि में वृद्धि से लाभ मिलता है। ### क्या अपग्रेड किया गया Indexer मौजूदा Indexer के साथ पुरस्कारों के लिए प्रतिस्पर्धा करता था?
-नहीं, अपग्रेड किया गया Indexer केवल प्रति Subgraph न्यूनतम राशि आवंटित करता है और indexing पुरस्कार एकत्र नहीं करता है। +नहीं, upgrade Indexer केवल प्रत्येक subgraph के लिए न्यूनतम राशि आवंटित करता है और indexing पुरस्कार एकत्र नहीं करता। -यह "आवश्यकता अनुसार" आधार पर काम करता है, एक बैकअप के रूप में कार्य करता है जब तक कि नेटवर्क में संबंधित चेन और सबग्राफ के लिए कम से कम तीन अन्य Indexer द्वारा पर्याप्त सेवा गुणवत्ता प्राप्त नहीं की जाती। +यह "जैसा आवश्यक हो" के आधार पर कार्य करता है, जब तक कि संबंधित चेन और subgraph के लिए नेटवर्क में कम से कम तीन अन्य Indexers द्वारा पर्याप्त सेवा गुणवत्ता प्राप्त नहीं की जाती, तब तक यह एक बैकअप के रूप में कार्य करता है। -### यह Subgraph डेवलपर्स को कैसे प्रभावित करता है? +### यह subgraph डेवलपर्स को कैसे प्रभावित करता है? -सबग्राफ डेवलपर्स अपने सबग्राफ को The Graph Network पर लगभग तुरंत क्वेरी कर सकते हैं, जब वे होस्टेड सेवा से या Subgraph Studio()/subgraphs/developing/publishing/publishing-a-subgraph/ से प्रकाशित करते हैं, क्योंकि इंडेक्सिंग के लिए कोई लीड टाइम आवश्यक नहीं है। कृपया ध्यान दें कि सबग्राफ बनाना(/developing/creating-a-subgraph/) इस अपग्रेड से प्रभावित नहीं हुआ था। +subgraph डेवलपर्स अपने subgraph को The Graph Network पर लगभग तुरंत क्वेरी कर सकते हैं, जब वे होस्टेड सर्विस से अपग्रेड करने के बाद या [subgraph Studio से पब्लिश](/subgraphs/developing/publishing/publishing-a-subgraph/) करने के बाद अपग्रेड करते हैं, क्योंकि indexing के लिए कोई लीड टाइम आवश्यक नहीं था। कृपया ध्यान दें कि [subgraph बनाना](/developing/creating-a-subgraph/) इस अपग्रेड से प्रभावित नहीं हुआ था। ### अपग्रेड Indexer डेटा उपभोक्ताओं को कैसे लाभ पहुंचाता है? -The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. 
+The upgrade Indexer network पर उन chains को सक्षम बनाता है जो पहले केवल hosted service पर समर्थित थीं। इसलिए, यह उस data के दायरे और उपलब्धता को बढ़ाता है जिसे network पर queried किया जा सकता है। ### अपग्रेड Indexer क्वेरीज़ की कीमत कैसे तय करता है? अपग्रेड में Indexer बाज़ार दर पर क्वेरीज़ की कीमत तय करता है ताकि क्वेरी शुल्क बाज़ार पर कोई प्रभाव न पड़े। -### अपग्रेड Indexer कब एक Subgraph का समर्थन करना बंद करेगा? +### अपग्रेड होने पर Indexer कब तक subgraph को सपोर्ट करना बंद कर देगा? -अपग्रेड Indexer एक Subgraph का समर्थन करता है जब तक कि कम से कम 3 अन्य Indexers सफलतापूर्वक और लगातार किए गए प्रश्नों का उत्तर नहीं देते। +The upgrade Indexer तब तक एक subgraph का समर्थन करता है जब तक कि कम से कम 3 अन्य Indexer सफलतापूर्वक और लगातार इसे किए गए क्वेरीज़ की सेवा नहीं देते। -इसके अतिरिक्त, अपग्रेड Indexer एक Subgraph का समर्थन करना बंद कर देता है यदि उसे पिछले 30 दिनों में क्वेरी नहीं किया गया है। +इसके अलावा, यदि किसी subgraph को पिछले 30 दिनों में क्वेरी नहीं किया गया है, तो अपग्रेड Indexer उसका समर्थन बंद कर देता है। -अन्य Indexer को उन सबग्राफ का समर्थन करने के लिए प्रोत्साहित किया जाता है जिनमें निरंतर क्वेरी वॉल्यूम होता है। अपग्रेड Indexer के लिए क्वेरी वॉल्यूम शून्य की ओर बढ़ना चाहिए, क्योंकि इसका आवंटन आकार छोटा होता है, और क्वेरी के लिए अन्य Indexer को प्राथमिकता दी जानी चाहिए। +अन्य Indexers को उन subgraph का समर्थन करने के लिए प्रोत्साहित किया जाता है जिनमें निरंतर क्वेरी वॉल्यूम होता है। अपग्रेड Indexer के लिए क्वेरी वॉल्यूम शून्य की ओर बढ़ना चाहिए, क्योंकि इसका आवंटन आकार छोटा है, और अन्य Indexers को इससे पहले क्वेरी के लिए चुना जाना चाहिए। diff --git a/website/src/pages/hi/contracts.mdx b/website/src/pages/hi/contracts.mdx index 0a57ae81839b..aae4d2906e17 100644 --- a/website/src/pages/hi/contracts.mdx +++ b/website/src/pages/hi/contracts.mdx @@ -4,7 +4,7 @@ title: Protocol Contracts import { ProtocolContractsTable } from '@/contracts' -Below are the deployed contracts which power The Graph Network. 
Visit the official [contracts repository](https://github.com/graphprotocol/contracts) to learn more. +नीचे deployed contracts हैं जो The Graph Network को शक्ति प्रदान करते हैं। अधिक जानने के लिए official [contracts repository](https://github.com/graphprotocol/contracts) पर जाएँ। ## Arbitrum @@ -20,7 +20,7 @@ This is the principal deployment of The Graph Network. ## Arbitrum Sepolia -This is the primary testnet for The Graph Network. Testnet is predominantly used by core developers and ecosystem participants for testing purposes. There are no guarantees of service or availability on The Graph's testnets. +यह The Graph Network का प्रमुख testnet है। Testnet का मुख्य रूप से core developers और ecosystem के participants द्वारा परीक्षण उद्देश्यों के लिए उपयोग किया जाता है। The Graph's के testnets पर सेवा या उपलब्धता की कोई guarantee नहीं है। diff --git a/website/src/pages/hi/global.json b/website/src/pages/hi/global.json index 5b5292d8b096..08b190f4facc 100644 --- a/website/src/pages/hi/global.json +++ b/website/src/pages/hi/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "मुख्य नेविगेशन", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "नेविगेशन दिखाएं", + "hide": "नेविगेशन छिपाएँ", "subgraphs": "सबग्राफ", "substreams": "सबस्ट्रीम", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "sps": "सबस्ट्रीम पावर्ड सबग्राफ", + "tokenApi": "टोकन API", + "indexing": "indexing", + "resources": "संसाधन", + "archived": "संग्रहीत" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "अंतिम अपडेट", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "पढ़ने का समय -", + "minutes": "मिनट" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "पिछला पृष्ठ", + "next": "अगला पृष्ठ", + "edit": 
"GitHub पर संपादित करें", + "onThisPage": "इस पृष्ठ पर", + "tableOfContents": "विषय-सूची", + "linkToThisSection": "इस अनुभाग का लिंक" }, "content": { - "note": "Note", - "video": "Video" + "callout": { + "note": "नोट", + "tip": "सलाह", + "important": "जरूरी", + "warning": "चेतावनी", + "caution": "सावधानी" + }, + "video": "वीडियो" + }, + "openApi": { + "parameters": { + "pathParameters": "पथ पैरामीटर", + "queryParameters": "क्वेरी पैरामीटर", + "headerParameters": "हैडर पैरामीटर", + "cookieParameters": "कुकी पैरामीटर", + "parameter": "पैरामीटर", + "description": "Description", + "value": "मान", + "required": "आवश्यक", + "deprecated": "अवकाशप्राप्त", + "defaultValue": "डिफ़ॉल्ट मान", + "minimumValue": "न्यूनतम मान", + "maximumValue": "अधिकतम मान ", + "acceptedValues": "स्वीकृत मान", + "acceptedPattern": "स्वीकृत पैटर्न", + "format": "प्रारूप", + "serializationFormat": "सिरीयलाइज़ेशन प्रारूप" + }, + "request": { + "label": "इस एंडपॉइंट का परीक्षण करें", + "noCredentialsRequired": "कोई प्रमाण-पत्र आवश्यक नहीं", + "send": "अनुरोध भेजें" + }, + "responses": { + "potentialResponses": "संभावित प्रतिक्रियाएँ", + "status": "स्थिति", + "description": "Description", + "liveResponse": "लाइव प्रतिक्रिया", + "example": "उदाहरण" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "ओह! 
यह पृष्ठ अंतरिक्ष में खो गया...", + "subtitle": "पता सही है या नहीं, इसकी जाँच करें या नीचे दिए गए लिंक पर क्लिक करके हमारी वेबसाइट एक्सप्लोर करें।", + "back": "घर जाओ" } } diff --git a/website/src/pages/hi/index.json b/website/src/pages/hi/index.json index 647821902fcd..006af907dc33 100644 --- a/website/src/pages/hi/index.json +++ b/website/src/pages/hi/index.json @@ -1,50 +1,50 @@ { "title": "Home", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "The Graph डॉक्स", + "description": "अपनी वेब3 परियोजना को शुरू करें उन उपकरणों के साथ जो ब्लॉकचेन डेटा को निकालने, बदलने और लोड करने में सहायता करते हैं।", + "cta1": "The Graph कैसे काम करता है", + "cta2": "अपना पहला Subgraph बनाएं" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "अपनी जरूरतों के अनुसार समाधान चुनें—ब्लॉकचेन डेटा के साथ अपने तरीके से इंटरैक्ट क", "subgraphs": { "title": "सबग्राफ", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "ब्लॉकचेन डेटा को निकालें, प्रोसेस करें और ओपन APIs के साथ क्वेरी करें।", + "cta": "सबग्राफ विकसित करें" }, "substreams": { "title": "सबस्ट्रीम", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "ब्लॉकचेन डेटा प्राप्त करें और समानांतर निष्पादन के साथ उपयोग करें।", + "cta": "सबस्ट्रीम के साथ विकसित करें" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "सबस्ट्रीम पावर्ड सबग्राफ", + "description": "Boost your subgraph's efficiency and 
scalability by using Substreams.", + "cta": "सबस्ट्रीम-संचालित सबग्राफ सेट करें" }, "graphNode": { - "title": "ग्राफ-नोड", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "title": "Graph Node", + "description": "ब्लॉकचेन डेटा को इंडेक्स करें और इसे GraphQL क्वेरीज़ के माध्यम से सर्व करें।", + "cta": "स्थानीय Graph Node सेट करें" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "ब्लॉकचेन डेटा को फ्लैट फ़ाइलों में निकालें ताकि सिंक समय और स्ट्रीमिंग क्षमताओं में सुधार किया जा सके।", + "cta": "Firehose के साथ शुरुआत करें" } }, "supportedNetworks": { "title": "समर्थित नेटवर्क", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "प्रकार", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "दस्तावेज़", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -54,9 +54,9 @@ "infoText": "Boost your developer experience by enabling The Graph's indexing network.", "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. 
To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph {0} का समर्थन करता है। एक नया नेटवर्क जोड़ने के लिए, {1}", + "networks": "नेटवर्क्स ", + "completeThisForm": "इस फ़ॉर्म को पूरा करें " }, "emptySearch": { "title": "No networks found", @@ -65,12 +65,12 @@ "showTestnets": "Show testnets" }, "tableHeaders": { - "name": "Name", + "name": "नाम", "id": "ID", - "subgraphs": "Subgraphs", - "substreams": "Substreams", + "subgraphs": "सबग्राफ", + "substreams": "सबस्ट्रीम", "firehose": "Firehose", - "tokenapi": "Token API" + "tokenapi": "टोकन API" } }, "networkGuides": { @@ -80,7 +80,7 @@ "description": "Kickstart your journey into subgraph development." }, "substreams": { - "title": "Substreams", + "title": "सबस्ट्रीम", "description": "Stream high-speed data for real-time indexing." }, "timeseries": { @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "बिलिंग", "description": "Optimize costs and manage billing efficiently." } }, @@ -123,53 +123,53 @@ "title": "Guides", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "ग्राफ एक्सप्लोरर में डेटा खोजें", + "description": "सैकड़ों सार्वजनिक सबग्राफ का उपयोग करके मौजूदा ब्लॉकचेन डेटा प्राप्त करें।" }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Subgraph प्रकाशित करें", + "description": "अपने Subgraph को विकेंद्रीकृत नेटवर्क में जोड़ें।" }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." 
+ "title": "सबस्ट्रीम प्रकाशित करें", + "description": "अपनी सबस्ट्रीम पैकेज को सबस्ट्रीम रजिस्ट्री पर लॉन्च करें।" }, "queryingBestPractices": { - "title": "सर्वोत्तम प्रथाओं को क्वेरी करना", - "description": "Optimize your subgraph queries for faster, better results." + "title": "Querying Best Practices", + "description": "अपने Subgraph क्वेरीज़ को तेज़ और बेहतर परिणामों के लिए ऑप्टिमाइज़ करें।" }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "अनुकूलित टाइमसीरीज और एग्रीगेशन", + "description": "अपने Subgraph को कुशलता के लिए सरल बनाएं।" }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "API Key प्रबंधन", + "description": "आसानी से API कुंजियों को बनाएँ, प्रबंधित करें और सुरक्षित करें अपने Subgraph के लिए।" }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "The Graph पर स्थानांतरण", + "description": "किसी भी प्लेटफ़ॉर्म से आसानी से अपने Subgraph को अपग्रेड करें।" } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "वीडियो ट्यूटोरियल्स", + "watchOnYouTube": "YouTube पर देखें", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "The Graph को 1 मिनट में समझाया गया", + "description": "इस छोटे, गैर-तकनीकी वीडियो में जानें कि The Graph Web3 की रीढ़ (backbone) क्यों और कैसे है।" }, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." 
+ "title": "Delegating का क्या अर्थ है?", + "description": "यह वीडियो उन मुख्य अवधारणाओं को समझाने में मदद करता है जो delegating, जो कि staking का एक रूप है और The Graph को सुरक्षित करने में सहायता करता है, से पहले समझनी आवश्यक हैं।" }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Solana को सबस्ट्रीम-संचालित Subgraph के साथ इंडेक्स कैसे करें", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { - "reading": "Reading time", - "duration": "Duration", - "minutes": "min" + "reading": "पढ़ने का समय -", + "duration": "अवधि", + "minutes": "मिनट" } } diff --git a/website/src/pages/hi/indexing/_meta-titles.json b/website/src/pages/hi/indexing/_meta-titles.json index 42f4de188fd4..52f24f7e7d81 100644 --- a/website/src/pages/hi/indexing/_meta-titles.json +++ b/website/src/pages/hi/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Indexer टूलिंग" } diff --git a/website/src/pages/hi/indexing/chain-integration-overview.mdx b/website/src/pages/hi/indexing/chain-integration-overview.mdx index 6a7c06a71a07..28458ea16d09 100644 --- a/website/src/pages/hi/indexing/chain-integration-overview.mdx +++ b/website/src/pages/hi/indexing/chain-integration-overview.mdx @@ -2,12 +2,12 @@ title: Chain Integration Process Overview --- -A transparent and governance-based integration process was designed for blockchain teams seeking [integration with The Graph protocol](https://forum.thegraph.com/t/gip-0057-chain-integration-process/4468). It is a 3-phase process, as summarised below. 
+[integration with The Graph protocol](https://forum.thegraph.com/t/gip-0057-chain-integration-process/4468) चाहने वाली blockchain teams के लिए एक transparent और governance-based integration प्रक्रिया designed की गई थी। यह 3-phase वाली प्रक्रिया है, जैसा कि नीचे संक्षेप में बताया गया है। ## Stage 1. Technical Integration - कृपया `ग्राफ-नोड` द्वारा नए chain समर्थन के लिए [New Chain इंटीग्रेशन](/indexing/new-chain-integration/) पर जाएं। -Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. +Teams एक Forum thread बनाकर protocol integration प्रक्रिया शुरू करती हैं [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Default Forum template का उपयोग करना अनिवार्य है। ## Stage 2. Integration Validation @@ -17,12 +17,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 3. Mainnet Integration -- Teams propose mainnet integration by submitting a Graph Improvement Proposal (GIP) and initiating a pull request (PR) on the [feature support matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) (more details on the link). -The Graph Council reviews the request and approves mainnet support, providing a successful Stage 2 and positive community feedback. +- Teams Graph Improvement Proposal (GIP) submit करके और [feature support matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) पर एक pull request (PR) शुरू करके mainnet integration का प्रस्ताव करती हैं। (more details on the link)। +- Graph Council request की reviews करती है और successful Stage 2 और positive community feedback प्रदान करते हुए mainnet समर्थन को मंजूरी देती है। --- -If the process looks daunting, don't worry!
The Graph Foundation is committed to supporting integrators by fostering collaboration, offering essential information, and guiding them through various stages, including navigating governance processes such as Graph Improvement Proposals (GIPs) and pull requests. If you have questions, please reach out to [info@thegraph.foundation](mailto:info@thegraph.foundation) or through Discord (either Pedro, The Graph Foundation member, IndexerDAO, or other core developers). +यदि प्रक्रिया कठिन लगती है, तो चिंता न करें! Graph Foundation सहयोग को बढ़ावा देने, आवश्यक जानकारी प्रदान करने और Graph Improvement Proposals (GIPs) और pull अनुरोध जैसी शासन प्रक्रियाओं को navigate करने सहित विभिन्न stages के माध्यम से उनका मार्गदर्शन करके integrators का समर्थन करने के लिए प्रतिबद्ध है। यदि आपके कोई प्रश्न हैं, तो कृपया [info@thegraph.foundation](mailto:info@thegraph.foundation) या Discord (either Pedro, The Graph Foundation member, IndexerDAO, or other core developers) के माध्यम से संपर्क करें। Ready to shape the future of The Graph Network? [Start your proposal](https://github.com/graphprotocol/graph-improvement-proposals/blob/main/gips/0057-chain-integration-process.md) now and be a part of the web3 revolution! @@ -30,20 +30,20 @@ ## Frequently Asked Questions -### 1. How does this relate to the [World of Data Services GIP](https://forum.thegraph.com/t/gip-0042-a-world-of-data-services/3761)? +### 1. इसका इससे क्या संबंध है [World of Data Services GIP](https://forum.thegraph.com/t/gip-0042-a-world-of-data-services/3761)? -This process is related to the Subgraph Data Service, applicable only to new Subgraph `Data Sources`. +यह प्रक्रिया subgraph Data Service से संबंधित है, जो केवल नए Subgraph `Data Sources` पर लागू होती है। -### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet?
यदि mainnet पर network समर्थित होने के बाद Firehose और Substreams समर्थन आता है तो क्या होगा? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +यह केवल सबस्ट्रीम-powered सबग्राफ पर indexing rewards के समर्थन को प्रभावित करेगा। नए Firehose कार्यान्वयन को testnet पर परीक्षण की आवश्यकता होगी, जिसे इस GIP के Stage 2 में उल्लिखित पद्धति का पालन करते हुए किया जाएगा। इसी तरह, यदि कार्यान्वयन प्रभावी और विश्वसनीय साबित होता है, तो [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) पर (`सबस्ट्रीम data sources` सबग्राफ Feature) के लिए एक PR आवश्यक होगा, साथ ही indexing rewards के समर्थन के लिए एक नया GIP भी तैयार करना होगा। कोई भी इस PR और GIP को बना सकता है; Foundation इस प्रक्रिया में Council अनुमोदन के लिए सहायता करेगा। ### 3. पूर्ण प्रोटोकॉल समर्थन तक पहुंचने की प्रक्रिया में कितना समय लगेगा? -The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. 
+Mainnet तक पहुँचने में कई सप्ताह लगने की उम्मीद है, जो integration development में लगने वाले समय, अतिरिक्त शोध की आवश्यकता, testing और bug fixes, और, हमेशा की तरह, community feedback की आवश्यकता वाली governance process के समय के आधार पर अलग-अलग होगा। -Protocol support for indexing rewards depends on the stakeholders' bandwidth to proceed with testing, feedback gathering, and handling contributions to the core codebase, if applicable. This is directly tied to the integration's maturity and how responsive the integration team is (who may or may not be the team behind the RPC/Firehose implementation). The Foundation is here to help support throughout the whole process. +Indexing rewards के लिए protocol समर्थन, परीक्षण, feedback एकत्र करने और, यदि लागू हो, core codebase में योगदान को संभालने के लिए stakeholders की bandwidth पर निर्भर करता है। यह सीधे तौर पर integration की परिपक्वता और integration team कितनी उत्तरदायी है (who may or may not be the team behind the RPC/Firehose implementation) से जुड़ा है। Foundation पूरी प्रक्रिया में सहायता के लिए यहां मौजूद है। -### 4. How will priorities be handled? +### 4. Priorities कैसे संभाली जाएंगी?
\#3 के समान, यह समग्र तत्परता और शामिल हितधारकों की क्षमता पर निर्भर करेगा। उदाहरण के लिए, एक नए चेन के साथ एक नई Firehose कार्यान्वयन को उन एकीकरणों की तुलना में अधिक समय लग सकता है जो पहले से ही परीक्षण किए जा चुके हैं या जो शासन प्रक्रिया में आगे बढ़ चुके हैं। diff --git a/website/src/pages/hi/indexing/new-chain-integration.mdx b/website/src/pages/hi/indexing/new-chain-integration.mdx index 0cb393914982..1863ea5fa2bc 100644 --- a/website/src/pages/hi/indexing/new-chain-integration.mdx +++ b/website/src/pages/hi/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: नई श्रृंखला एकीकरण --- -चेन अपने पारिस्थितिकी तंत्र में सबग्राफ़ समर्थन लाने के लिए एक नया `graph-node` एकीकरण शुरू कर सकती हैं। सबग्राफ़ एक शक्तिशाली इंडेक्सिंग उपकरण हैं, जो डेवलपर्स के लिए संभावनाओं की एक नई दुनिया खोलते हैं। ग्राफ़ नोड पहले से ही यहाँ सूचीबद्ध चेन से डेटा को इंडेक्स करता है। यदि आप नए एकीकरण में रुचि रखते हैं, तो दो एकीकरण रणनीतियाँ हैं: +चेनें अपने इकोसिस्टम में Subgraph सपोर्ट लाने के लिए एक नया `graph-node` इंटीग्रेशन शुरू कर सकती हैं। Subgraph एक शक्तिशाली इंडेक्सिंग टूल हैं जो डेवलपर्स के लिए संभावनाओं की दुनिया खोलते हैं। `Graph Node` पहले से ही यहाँ सूचीबद्ध चेन से डेटा इंडेक्स करता है। यदि आप एक नए इंटीग्रेशन में रुचि रखते हैं, तो इसके लिए 2 इंटीग्रेशन रणनीतियाँ हैं: 1. EVM JSON-RPC 2. 
Firehose: सभी Firehose एकीकरण समाधान में Substreams शामिल हैं, जो Firehose पर आधारित एक बड़े पैमाने पर स्ट्रीमिंग इंजन है, जिसमें स्वदेशी `graph-node` समर्थन है, जो समानांतर रूपांतरण की अनुमति देता है। @@ -15,7 +15,7 @@ title: नई श्रृंखला एकीकरण यदि ब्लॉकचेन EVM समान है और क्लाइंट/नोड मानक EVM JSON-RPC API को एक्सपोज़ करता है, तो Graph Node को नए चेन को इंडेक्स करने में सक्षम होना चाहिए। -#### Testing an EVM JSON-RPC +#### एक EVM JSON-RPC का परीक्षण Graph Node को EVM चेन से डेटा इन्गेस्ट करने के लिए, RPC नोड को निम्नलिखित EVM JSON-RPC विधियों को एक्सपोज़ करना होगा: @@ -33,11 +33,11 @@ Graph Node को EVM चेन से डेटा इन्गेस्ट क > नोट: StreamingFast टीम द्वारा की गई सभी एकीकरणों में श्रृंखला के कोडबेस में Firehose प्रतिकृति प्रोटोकॉल के लिए रखरखाव शामिल है। StreamingFast किसी भी परिवर्तन को ट्रैक करता है और जब आप कोड बदलते हैं और जब StreamingFast कोड बदलता है, तो बाइनरी जारी करता है। इसमें प्रोटोकॉल के लिए Firehose/Substreams बाइनरी जारी करना, श्रृंखला के ब्लॉक मॉडल के लिए Substreams मॉड्यूल को बनाए रखना, और आवश्यकता होने पर ब्लॉकचेन नोड के लिए इंस्ट्रुमेंटेशन के साथ बाइनरी जारी करना शामिल है। -#### Integration for Non-EVM chains +#### Non-EVM चेन के लिए इंटीग्रेशन फायरहोज़ को चेन में एकीकृत करने का प्राथमिक तरीका RPC पॉलिंग रणनीति का उपयोग करना है। हमारी पॉलिंग एल्गोरिदम नए ब्लॉक के आने का पूर्वानुमान लगाएगी और उस समय के करीब नए ब्लॉक के लिए जाँच करने की दर बढ़ा देगी, जिससे यह एक बहुत कम लेटेंसी और प्रभावी समाधान बन जाता है। फायरहोज़ के एकीकरण और रखरखाव में मदद के लिए, [स्ट्रीमिंगफास्ट टीम](https://www.streamingfast.io/firehose-integration-program) से संपर्क करें। नए चेन और उनके एकीकृतकर्ताओं को फायरहोज़ और सबस्ट्रीम द्वारा उनके पारिस्थितिकी तंत्र में लाए गए [फोर्क जागरूकता](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) और विशाल समानांतर इंडेक्सिंग क्षमताओं की सराहना होगी। -#### Specific Instrumentation for EVM (`geth`) chains +#### EVM (`geth`) चेन के लिए विशिष्ट इंस्ट्रूमेंटेशन EVM चेन के लिए, एक गहरे स्तर के डेटा को प्राप्त करने के
लिए `geth` [लाइव-ट्रेसर](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0) का उपयोग किया जाता है, जो गो-एथेरियम और स्ट्रीमिंगफास्ट के बीच सहयोग है, जो उच्च थ्रूपुट और समृद्ध लेनदेन ट्रेसिंग प्रणाली बनाने के लिए है। लाइव ट्रेसर सबसे व्यापक समाधान है, जो [विस्तारित](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) ब्लॉक विवरण का परिणाम है। यह नए इंडेक्सिंग पैरेडाइम्स की अनुमति देता है, जैसे राज्य परिवर्तनों, कॉल्स, पैरेंट कॉल ट्रीज़ के आधार पर घटनाओं का पैटर्न मिलाना, या स्मार्ट कॉन्ट्रैक्ट में वास्तविक वेरिएबल्स में बदलाव के आधार पर घटनाओं को ट्रिगर करना। @@ -47,19 +47,19 @@ EVM चेन के लिए, एक गहरे स्तर के डे ## EVM विचार - JSON-RPC और Firehose के बीच का अंतर -JSON-RPC और Firehose दोनों ही सबग्राफ के लिए उपयुक्त हैं, लेकिन एक Firehose हमेशा आवश्यक होता है यदि डेवलपर्स [सबस्ट्रीम](https://substreams.streamingfast.io) के साथ निर्माण करना चाहते हैं। सबस्ट्रीम का समर्थन करने से डेवलपर्स को नए chain के लिए [सबस्ट्रीम-powered सबग्राफ](/subgraphs/cookbook/substreams-powered-subgraphs/) बनाने की अनुमति मिलती है, और इसके परिणामस्वरूप आपके सबग्राफ की प्रदर्शन क्षमता में सुधार हो सकता है। इसके अतिरिक्त, Firehose — जो कि `ग्राफ-नोड` के JSON-RPC extraction layer का एक drop-in replacement है — सामान्य indexing के लिए आवश्यक RPC कॉल्स की संख्या को 90% तक घटा देता है। +JSON-RPC और Firehose दोनों ही सबग्राफ के लिए उपयुक्त हैं, लेकिन उन डेवलपर्स के लिए Firehose हमेशा आवश्यक होता है जो [सबस्ट्रीम](https://substreams.streamingfast.io) के साथ निर्माण करना चाहते हैं। सबस्ट्रीम का समर्थन करने से डेवलपर्स को नए चेन के लिए [सबस्ट्रीम-powered सबग्राफ](/subgraphs/cookbook/substreams-powered-subgraphs/) बनाने में मदद मिलती है और यह आपके सबग्राफ के प्रदर्शन को बेहतर बनाने की क्षमता रखता है। इसके अतिरिक्त, Firehose — `graph-node` की JSON-RPC एक्सट्रैक्शन लेयर के ड्रॉप-इन रिप्लेसमेंट के रूप में — सामान्य indexing के लिए आवश्यक RPC कॉल्स की संख्या को 90% तक कम कर देता है। -- सभी `getLogs` कॉल्स और राउंडट्रिप्स को एकल स्ट्रीम द्वारा
प्रतिस्थापित किया जाता है, जो सीधे `graph-node` के केंद्र में पहुंचती है; यह एकल ब्लॉक मॉडल सभी सबग्राफ्स के लिए काम करता है जिन्हें यह प्रोसेस करता है। +- सभी `getLogs` कॉल और राउंडट्रिप्स को एकल स्ट्रीम द्वारा बदल दिया जाता है, जो सीधे `graph-node` के केंद्र में पहुँचती है; यह उन सभी Subgraph के लिए एक एकल ब्लॉक मॉडल प्रदान करता है जिनका यह प्रोसेस करता है। -> **NOTE**: EVM chains के लिए Firehose-based integration के लिए अभी भी Indexers को chain के संग्रह RPC node को subgraph को ठीक से index करने के लिए चलाने की आवश्यकता होगी। यह `eth_call` RPC विधि द्वारा आम तौर पर पहुंच योग्य smart contract स्थिति प्रदान करने में Firehosesकी असमर्थता के कारण है। (It's worth reminding that eth_calls are [not a good practice for developers](/)) +> नोट: Firehose-आधारित एकीकरण के लिए EVM चेन पर अभी भी Indexers को चेन का आर्काइव RPC नोड चलाने की आवश्यकता होगी ताकि सबग्राफ को सही तरीके से Index किया जा सके। इसका कारण यह है कि Firehose आमतौर पर  `eth_call` RPC मेथड द्वारा एक्सेस किए जाने वाली स्मार्ट contract स्थिति प्रदान नहीं कर सकता। (यह याद दिलाना महत्वपूर्ण है कि `eth_calls` डेवलपर्स के लिए एक अच्छी प्रैक्टिस नहीं है)। ## Graph Node Configuration -ग्राफ नोड को कॉन्फ़िगर करना आपके स्थानीय वातावरण को तैयार करने के समान आसान है। एक बार जब आपका स्थानीय वातावरण सेट हो जाता है, तो आप एक उपग्राफ को स्थानीय रूप से डिप्लॉय करके एकीकरण का परीक्षण कर सकते हैं। +ग्राफ-नोड को कॉन्फ़िगर करना उतना ही आसान है जितना कि अपने स्थानीय वातावरण को तैयार करना। एक बार जब आपका स्थानीय वातावरण सेट हो जाता है, तो आप स्थानीय रूप से एक सबग्राफ को तैनात करके एकीकरण का परीक्षण कर सकते हैं। 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC or Firehose compliant URL +2. 
[इस पंक्ति](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) को नए नेटवर्क नाम और EVM JSON-RPC या Firehose संगत URL को शामिल करने के लिए संशोधित करें। > कृपया पर्यावरण चर `ethereum` के नाम को ही न बदलें। network का नाम भिन्न होने पर भी यह वही रहना चाहिए। @@ -67,4 +67,4 @@ JSON-RPC और Firehose दोनों ही सबग्राफ के ल ## सबस्ट्रीम-संचालित सबग्राफ की सेवा -StreamingFast द्वारा संचालित Firehose/सबस्ट्रीम इंटीग्रेशन के लिए, बुनियादी सबस्ट्रीम मॉड्यूल (जैसे डिकोड किए गए लेनदेन, log और स्मार्ट-contract आयोजन) और सबस्ट्रीम कोडजेन टूल्स का बेसिक सपोर्ट शामिल है। ये टूल्स [सबस्ट्रीम-powered सबग्राफ](/substreams/sps/introduction/) को सक्षम बनाने की क्षमता प्रदान करते हैं। [ मार्गदर्शक](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) का अनुसरण करें और `सबस्ट्रीम codegen सबग्राफ` चलाकर कोडजेन टूल्स का अनुभव लें। +StreamingFast के नेतृत्व वाले Firehose/Substreams एकीकरणों के लिए, मूलभूत सबस्ट्रीम मॉड्यूल (जैसे कि डिकोड किए गए लेन-देन, लॉग्स और स्मार्ट-कॉन्ट्रैक्ट इवेंट्स) और सबस्ट्रीम codegen टूल्स के लिए बुनियादी समर्थन शामिल है। ये टूल्स [सबस्ट्रीम-powered सबग्राफ](/substreams/sps/introduction/) को सक्षम करने की क्षमता प्रदान करते हैं। [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) का पालन करें और `substreams codegen subgraph` कमांड चलाकर स्वयं codegen टूल्स का अनुभव करें। diff --git a/website/src/pages/hi/indexing/overview.mdx b/website/src/pages/hi/indexing/overview.mdx index f1109f1f70c9..38a778b97854 100644 --- a/website/src/pages/hi/indexing/overview.mdx +++ b/website/src/pages/hi/indexing/overview.mdx @@ -1,47 +1,47 @@ --- title: Indexing का अवलोकन -sidebarTitle: अवलोकन +sidebarTitle: Overview --- -Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services.
They also earn query fees that are rebated according to an exponential rebate function. +Indexers, The Graph Network में node operators होते हैं जो Graph Tokens (GRT) stake करके indexing और query processing services प्रदान करते हैं। वे अपनी सेवाओं के लिए query fees और indexing rewards अर्जित करते हैं। इसके अलावा, उन्हें query fees भी मिलती हैं, जो एक exponential rebate function के अनुसार rebate की जाती हैं। जीआरटी जो प्रोटोकॉल में दांव पर लगा है, विगलन अवधि के अधीन है और यदि अनुक्रमणिका दुर्भावनापूर्ण हैं और अनुप्रयोगों को गलत डेटा प्रदान करते हैं या यदि वे गलत तरीके से अनुक्रमणित करते हैं तो इसे घटाया जा सकता है। इंडेक्सर्स नेटवर्क में योगदान करने के लिए डेलीगेटर्स से प्रत्यायोजित हिस्सेदारी के लिए पुरस्कार भी अर्जित करते हैं। -इंडेक्सर्स सबग्राफ के क्यूरेशन सिग्नल के आधार पर इंडेक्स के लिए सबग्राफ का चयन करते हैं, जहां क्यूरेटर GRT को यह इंगित करने के लिए दांव पर लगाते हैं कि कौन से सबग्राफ उच्च-गुणवत्ता वाले हैं और उन्हें प्राथमिकता दी जानी चाहिए। उपभोक्ता (उदाहरण के लिए अनुप्रयोग) पैरामीटर भी सेट कर सकते हैं जिसके लिए इंडेक्सर्स अपने सबग्राफ के लिए प्रश्नों को प्रोसेस करते हैं और क्वेरी शुल्क मूल्य निर्धारण के लिए वरीयताएँ निर्धारित करते हैं। +Indexers किसी सबग्राफ के curation signal के आधार पर उसे चुनते हैं, जहाँ Curators GRT को स्टेक करते हैं ताकि यह संकेत दिया जा सके कि कौन से Subgraph उच्च-गुणवत्ता वाले हैं और प्राथमिकता दी जानी चाहिए। Consumers (जैसे कि applications) यह भी निर्धारित कर सकते हैं कि कौन से Indexers उनके सबग्राफ के लिए queries को प्रोसेस करें और query fee pricing के लिए अपनी प्राथमिकताएँ सेट कर सकते हैं। ## FAQ -### What is the minimum stake required to be an Indexer on the network? +### नेटवर्क पर Indexer बनने के लिए न्यूनतम स्टेक कितना आवश्यक है? -The minimum stake for an Indexer is currently set to 100K GRT. +Indexer के लिए न्यूनतम स्टेक वर्तमान में 100K GRT निर्धारित है। -### What are the revenue streams for an Indexer? +### एक Indexer के लिए राजस्व स्रोत क्या हैं? -**Query fee rebates** - Payments for serving queries on the network. 
These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**पूछताछ शुल्क rebates** - नेटवर्क पर क्वेरी सर्व करने के लिए किए गए भुगतान। ये भुगतान एक Indexer और एक गेटवे के बीच स्टेट चैनलों के माध्यम से संचालित होते हैं। गेटवे से प्रत्येक क्वेरी अनुरोध में एक भुगतान शामिल होता है और संबंधित प्रतिक्रिया में क्वेरी परिणाम की वैधता का प्रमाण होता है। -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**indexing रिवार्ड्स** - 3% वार्षिक प्रोटोकॉल-वाइड मुद्रास्फीति के माध्यम से उत्पन्न, indexing रिवार्ड्स उन Indexers को वितरित किए जाते हैं जो नेटवर्क के लिए सबग्राफ डिप्लॉयमेंट को इंडेक्स कर रहे हैं। -### How are indexing rewards distributed? +### Indexing इनाम कैसे वितरित किए जाते हैं? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. 
**An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards प्रोटोकॉल मुद्रास्फीति से आते हैं, जो 3% वार्षिक जारी करने के लिए सेट किया गया है। इन्हें सभी सबग्राफ पर कुल क्यूरेशन सिग्नल के अनुपात के आधार पर वितरित किया जाता है, और फिर Indexers को उनके द्वारा उस सबग्राफ पर आवंटित स्टेक के अनुपात में वितरित किया जाता है। **एक आवंटन को मान्य प्रूफ ऑफ Indexing (POI) के साथ बंद किया जाना चाहिए, जो मध्यस्थता चार्टर द्वारा निर्धारित मानकों को पूरा करता हो, ताकि इसे पुरस्कारों के लिए योग्य माना जा सके।** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +समुदाय द्वारा कई उपकरण बनाए गए हैं जो इनाम की गणना करने में मदद करते हैं; आपको इनका संग्रह [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c) में संगठित रूप में मिलेगा। आप #Delegators और #Indexers चैनलों में भी उपकरणों की एक अद्यतन सूची [Discord server](https://discord.gg/graphprotocol) पर पा सकते हैं। यहाँ हम एक [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) को लिंक कर रहे हैं जो indexer software stack के साथ एकीकृत है। -### What is a proof of indexing (POI)? +### Indexing का प्रमाण (POI) क्या है? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. 
A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs का उपयोग नेटवर्क में यह सत्यापित करने के लिए किया जाता है कि कोई Indexer उन सबग्राफ को Indexing कर रहा है जिन पर उन्होंने आवंटन किया है। जब किसी आवंटन को बंद किया जाता है, तो वर्तमान युग के पहले ब्लॉक के लिए एक POI प्रस्तुत करना आवश्यक होता है ताकि वह आवंटन Indexing पुरस्कारों के लिए पात्र हो सके। किसी ब्लॉक के लिए POI उस ब्लॉक तक और उसमें शामिल सभी entity store लेनदेन के लिए एक डाइजेस्ट होता है, जो एक विशिष्ट Subgraph परिनियोजन के लिए होता है। -### When are indexing rewards distributed? +### indexing पुरस्कार कब वितरित किए जाते हैं? -Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +आवंटन सक्रिय रहते हुए और 28 युगों के भीतर आवंटित होने पर लगातार इनाम अर्जित करते रहते हैं। इनाम Indexers द्वारा एकत्र किए जाते हैं और तब वितरित किए जाते हैं जब उनके आवंटन बंद हो जाते हैं। यह या तो मैन्युअल रूप से होता है, जब भी Indexer उन्हें बलपूर्वक बंद करना चाहता है, या 28 युगों के बाद एक Delegator Indexer के लिए आवंटन बंद कर सकता है, लेकिन इससे कोई इनाम नहीं मिलता। 28 युग अधिकतम आवंटन अवधि है (फिलहाल, एक युग लगभग ~24 घंटे तक चलता है)। -### Can pending indexing rewards be monitored? +### क्या लंबित indexing पुरस्कारों की निगरानी की जा सकती है? 
-The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +RewardsManager contract में एक केवल-पढ़ने योग्य [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) फ़ंक्शन है, जिसका उपयोग किसी विशिष्ट आवंटन के लिए लंबित इनाम की जाँच करने के लिए किया जा सकता है। -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +कई समुदाय द्वारा बनाए गए डैशबोर्ड में पेंडिंग रिवॉर्ड्स के मान होते हैं और इन्हें मैन्युअल रूप से निम्नलिखित कदमों का पालन करके आसानी से चेक किया जा सकता है: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. [mainnet सबग्राफ](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) को क्वेरी करें ताकि सभी सक्रिय आवंटनों के लिए ID प्राप्त की जा सके। ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Etherscan का उपयोग करके `getRewards()` कॉल करें: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +- [ईथरस्कैन इंटरफेस पर रिवॉर्ड्स contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) पर जाएं। +- `getRewards()` को कॉल करने के लिए: + - **9. 
getRewards** ड्रॉपडाउन का विस्तार करें। + - इनपुट में **allocationID** दर्ज करें। + - **Query** बटन पर क्लिक करें। -### What are disputes and where can I view them? +### विवाद क्या होते हैं और मैं उन्हें कहाँ देख सकता हूँ? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +Indexers की queries और आवंटन दोनों को The Graph में विवाद अवधि के दौरान विवादित किया जा सकता है। विवाद अवधि विवाद के प्रकार के अनुसार भिन्न होती है। Queries/अभिप्रमाणन के लिए 7 युगों की विवाद विंडो होती है, जबकि आवंटन के लिए 56 युगों की अवधि होती है। इन अवधियों के बीतने के बाद, आवंटन या queries के खिलाफ कोई विवाद नहीं खोला जा सकता। जब कोई विवाद खोला जाता है, तो Fishermen को न्यूनतम 10,000 GRT की जमा राशि की आवश्यकता होती है, जिसे विवाद के अंतिम निर्णय और समाधान दिए जाने तक लॉक कर दिया जाता है। Fishermen वे नेटवर्क प्रतिभागी होते हैं जो विवाद खोलते हैं। -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +विवादों के **तीन** संभावित परिणाम होते हैं, और यही मछुआरों की जमा राशि पर भी लागू होता है। -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT.
+- यदि विवाद अस्वीकार कर दिया जाता है, तो फ़िशरमैन द्वारा जमा किया गया GRT नष्ट कर दिया जाएगा, और विवादित Indexer पर कोई दंड नहीं लगाया जाएगा। +- यदि विवाद ड्रा के रूप में निपटाया जाता है, तो मछुआरों की जमा राशि वापस कर दी जाएगी, और विवादित Indexer पर कोई दंड नहीं लगाया जाएगा। +- यदि विवाद स्वीकार कर लिया जाता है, तो मछुआरों द्वारा जमा किया गया GRT वापस कर दिया जाएगा, विवादित Indexer को दंडित किया जाएगा, और मछुआरों को दंडित किए गए GRT का 50% मिलेगा। -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +विवादों को UI में Indexer के प्रोफ़ाइल पृष्ठ पर `Disputes` टैब के अंतर्गत देखा जा सकता है। -### What are query fee rebates and when are they distributed? +### पूछताछ शुल्क रिबेट्स क्या हैं और वे कब वितरित किए जाते हैं? -Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. +पूछताछ शुल्क गेटवे द्वारा एकत्र किए जाते हैं और Indexers को घातांकीय छूट फ़ंक्शन के अनुसार वितरित किए जाते हैं (देखें GIP [यहाँ](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162))। घातांकीय छूट फ़ंक्शन को यह सुनिश्चित करने के तरीके के रूप में प्रस्तावित किया गया है कि Indexers queries की सही सेवा करके सर्वोत्तम परिणाम प्राप्त करें। यह Indexers को एक बड़ी मात्रा में स्टेक आवंटित करने के लिए प्रोत्साहित करके काम करता है (जो किसी query की सेवा करते समय गलती करने पर स्लैश किया जा सकता है) जो वे एकत्र कर सकने वाली पूछताछ शुल्क की मात्रा के सापेक्ष होती है। -Once an allocation has been closed the rebates are available to be claimed by the Indexer.
Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +एक बार आवंटन बंद हो जाने के बाद, रिबेट्स को Indexer द्वारा क्लेम किया जा सकता है। क्लेम करने पर, पूछताछ शुल्क रिबेट्स को Indexer और उनके Delegators के बीच पूछताछ शुल्क कट और घातीय रिबेट फ़ंक्शन के आधार पर वितरित किया जाता है। -### What is query fee cut and indexing reward cut? +### पूछताछ शुल्क कटौती और indexing पुरस्कार कटौती क्या हैं? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. +`queryFeeCut` और `indexingRewardCut` मान delegation पैरामीटर हैं, जिन्हें Indexer cooldownBlocks के साथ सेट कर सकता है ताकि Indexer और उनके Delegators के बीच GRT के वितरण को नियंत्रित किया जा सके। Delegation पैरामीटर सेट करने के निर्देशों के लिए [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) के अंतिम चरण देखें। -- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** - वह % जो पूछताछ शुल्क रिबेट्स में से Indexer को वितरित किया जाएगा। यदि इसे 95% पर सेट किया गया है, तो जब एक एलोकेशन बंद होगी, तो Indexer को अर्जित किए गए पूछताछ शुल्क का 95% प्राप्त होगा, और शेष 5% Delegators को जाएगा। -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. 
+- **indexingRewardCut** - वह % जो Indexing पुरस्कारों में से Indexer को वितरित किया जाएगा। यदि इसे 95% पर सेट किया जाता है, तो जब कोई आवंटन बंद होता है, तो Indexer को Indexing पुरस्कारों का 95% प्राप्त होगा और Delegators शेष 5% को साझा करेंगे। -### How do Indexers know which subgraphs to index? +### Indexers को कैसे पता चलता है कि कौन से सबग्राफ को index करना है? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers उन्नत तकनीकों को लागू करके सबग्राफ indexing निर्णय लेने में खुद को अलग कर सकते हैं, लेकिन सामान्य विचार देने के लिए, हम नेटवर्क में सबग्राफ का मूल्यांकन करने के लिए उपयोग की जाने वाली कुछ प्रमुख मीट्रिक्स पर चर्चा करेंगे: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - किसी विशेष Subgraph पर लागू किए गए नेटवर्क curation signal का अनुपात उस Subgraph में रुचि का एक अच्छा संकेतक होता है, विशेष रूप से प्रारंभिक चरण में जब क्वेरी वॉल्यूम बढ़ रहा होता है। -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **क्वेरी फीस संग्रहित** - किसी विशेष सबग्राफ के लिए संग्रहित क्वेरी फीस का ऐतिहासिक डेटा भविष्य की मांग का एक अच्छा संकेतक है। -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **राशि दांव पर लगी हुई** - अन्य Indexers के व्यवहार की निगरानी करना या कुल दांव का विशिष्ट सबग्राफ की ओर आवंटित अनुपात देखना, एक Indexer को सबग्राफ क्वेरी के लिए आपूर्ति पक्ष की निगरानी करने में मदद कर सकता है। इससे वे उन सबग्राफ की पहचान कर सकते हैं जिनमें नेटवर्क आत्मविश्वास दिखा रहा है या ऐसे सबग्राफ जिनमें अधिक आपूर्ति की आवश्यकता हो सकती है। -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraph जिनके लिए कोई indexing रिवार्ड नहीं है** - कुछ सबग्राफ को indexing इनाम नहीं मिलते हैं, मुख्य रूप से इसलिए क्योंकि वे असमर्थित सुविधाओं जैसे कि IPFS का उपयोग कर रहे हैं या वे मुख्य नेटवर्क के बाहर किसी अन्य नेटवर्क से क्वेरी कर रहे हैं। यदि कोई सबग्राफ indexing इनाम उत्पन्न नहीं कर रहा है, तो आपको उस पर एक संदेश दिखाई देगा। -### What are the hardware requirements? +### हार्डवेयर आवश्यकताएँ क्या हैं? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **छोटा** - शुरुआत में कुछ सबग्राफ को index करने के लिए पर्याप्त, लेकिन संभवतः विस्तार करने की आवश्यकता होगी। +- **स्टैंडर्ड** - डिफ़ॉल्ट सेटअप, यह वही है जो उदाहरण k8s/terraform परिनियोजन मैनिफेस्ट में उपयोग किया जाता है। +- **मध्यम** - एक प्रोडक्शन Indexer जो 100 सबग्राफ को सपोर्ट करता है और 200-500 अनुरोध प्रति सेकंड प्रोसेस करता है। +- **बड़ा** - वर्तमान में उपयोग किए जा रहे सभी सबग्राफ को इंडेक्स करने और संबंधित ट्रैफ़िक के लिए अनुरोधों को सर्व करने के लिए तैयार। -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| सेटअप | Postgres
(CPUs) | Postgres
(मेमोरी in GBs) | Postgres
(डिस्क in TBs) | VMs
(CPUs) | VMs
(मेमोरी in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| छोटा | 4 | 8 | 1 | 4 | 16 | +| मानक | 8 | 30 | 1 | 12 | 48 | +| मध्यम | 16 | 64 | 2 | 32 | 64 | +| बड़ा | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an Indexer should take? +### एक Indexer को कौन-कौन सी बुनियादी सुरक्षा सावधानियाँ बरतनी चाहिए? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions. +- **ऑपरेटर वॉलेट** - एक ऑपरेटर वॉलेट सेट अप करना एक महत्वपूर्ण एहतियात है क्योंकि यह एक Indexer को अपनी उन कुंजियों के बीच अलगाव बनाए रखने की अनुमति देता है जो स्टेक को नियंत्रित करती हैं और वे जो दिन-प्रतिदिन के संचालन के नियंत्रण में होती हैं। निर्देशों के लिए [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) देखें। -- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.
+- **Firewall** - केवल Indexer सेवा को सार्वजनिक रूप से एक्सपोज़ किया जाना चाहिए और विशेष ध्यान एडमिन पोर्ट्स और डेटाबेस एक्सेस को लॉक करने पर दिया जाना चाहिए: Graph Node JSON-RPC एंडपॉइंट (डिफ़ॉल्ट पोर्ट: 8030), Indexer प्रबंधन API एंडपॉइंट (डिफ़ॉल्ट पोर्ट: 18000), और Postgres डेटाबेस एंडपॉइंट (डिफ़ॉल्ट पोर्ट: 5432) को एक्सपोज़ नहीं किया जाना चाहिए। -## Infrastructure +## इंफ्रास्ट्रक्चर -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +Indexer के इंफ्रास्ट्रक्चर के केंद्र में Graph Node होता है, जो इंडेक्स किए गए नेटवर्क की निगरानी करता है, डेटा को सबग्राफ परिभाषा के अनुसार निकालता और लोड करता है, और इसे एक [GraphQL API](/about/#how-the-graph-works) के रूप में सर्व करता है। Graph Node को प्रत्येक इंडेक्स किए गए नेटवर्क से डेटा एक्सपोज़ करने वाले एक एंडपॉइंट से कनेक्ट करने की आवश्यकता होती है; डेटा स्रोत करने के लिए एक IPFS नोड; अपने स्टोर के लिए एक PostgreSQL डेटाबेस; और Indexer घटकों से, जो इसे नेटवर्क के साथ इंटरैक्शन की सुविधा प्रदान करते हैं। -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL डेटाबेस** - यह Graph Node के लिए मुख्य स्टोर है, जहाँ Subgraph डेटा संग्रहीत किया जाता है। Indexer सेवा और एजेंट भी इस डेटाबेस का उपयोग state channel डेटा, cost models, Indexing नियमों और allocation क्रियाओं को संग्रहीत करने के लिए करते हैं। -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **डेटा एंडपॉइंट** - EVM-संगत नेटवर्क्स के लिए, Graph Node को एक ऐसे एंडपॉइंट से कनेक्ट करने की आवश्यकता होती है जो EVM-संगत JSON-RPC API को एक्सपोज़ करता हो। यह एक सिंगल क्लाइंट के रूप में हो सकता है या यह एक अधिक जटिल सेटअप हो सकता है जो मल्टीपल क्लाइंट्स के बीच लोड बैलेंस करता हो। यह जानना महत्वपूर्ण है कि कुछ सबग्राफ को विशेष क्लाइंट क्षमताओं की आवश्यकता हो सकती है, जैसे कि आर्काइव मोड और/या पैरिटी ट्रेसिंग API। -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (संस्करण 5 से कम)** - सबग्राफ डिप्लॉयमेंट मेटाडेटा IPFS नेटवर्क पर स्टोर किया जाता है। The Graph Node मुख्य रूप से सबग्राफ डिप्लॉयमेंट के दौरान IPFS node तक पहुंचता है ताकि सबग्राफ मैनिफेस्ट और सभी लिंक की गई फ़ाइलों को प्राप्त किया जा सके। नेटवर्क Indexers को अपना स्वयं का IPFS node होस्ट करने की आवश्यकता नहीं है, नेटवर्क के लिए एक IPFS node होस्ट किया गया है: https://ipfs.network.thegraph.com. -- **Indexer service** - Handles all required external communications with the network. 
Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Indexer सेवा** - नेटवर्क के साथ सभी आवश्यक बाहरी संचार संभालती है। लागत मॉडल और इंडेक्सिंग स्थितियों को साझा करती है, गेटवे से आने वाले क्वेरी अनुरोधों को एक Graph Node तक पहुंचाती है, और गेटवे के साथ स्टेट चैनलों के माध्यम से क्वेरी भुगतान को प्रबंधित करती है। -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - ऑनचेन पर Indexer की इंटरैक्शन को सुविधाजनक बनाता है, जिसमें नेटवर्क पर पंजीकरण करना, अपने Graph Node(s) पर सबग्राफ डिप्लॉयमेंट का प्रबंधन करना और आवंटनों का प्रबंधन करना शामिल है। -- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. +- **Prometheus मेट्रिक्स सर्वर** - Graph Node और Indexer घटक अपने मेट्रिक्स को मेट्रिक्स सर्वर में लॉग करते हैं। -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +कृपया ध्यान दें: चुस्त स्केलिंग का समर्थन करने के लिए, यह अनुशंसा की जाती है कि क्वेरी और इंडेक्सिंग संबंधी ज़िम्मेदारियों को नोड्स के अलग-अलग सेटों में विभाजित किया जाए: क्वेरी नोड्स और इंडेक्स नोड्स। -### Ports overview +### पोर्ट्स का अवलोकन -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below.
+> **महत्वपूर्ण**: पोर्ट्स को सार्वजनिक रूप से एक्सपोज़ करने में सावधानी बरतें - **प्रशासनिक पोर्ट्स** को सुरक्षित रखा जाना चाहिए। इसमें नीचे दिए गए Graph Node JSON-RPC और Indexer प्रबंधन एंडपॉइंट्स शामिल हैं। -#### ग्राफ-नोड +#### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| पोर्ट | उद्देश्य | रूट्स | CLI आर्गुमेंट | एनवायरनमेंट वेरिएबल | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(सबग्राफ क्वेरीज़ के लिए) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(सबग्राफ सब्सक्रिप्शन के लिए) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8040 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - | -#### Indexer Service +#### Indexer सेवा -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| पोर्ट | उद्देश्य | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| 7600 | GraphQL HTTP server
(भुगतान किए गए सबग्राफ क्वेरीज़ के लिए) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - | -#### Indexer Agent +#### Indexer एजेंट -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| पोर्ट | उद्देश्य | Routes | CLI Argument | Environment Variable | +| ----- | ------------------- | ------ | -------------------------- | --------------------------------------- | +| 8000 | Indexer प्रबंधन API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Google Cloud पर Terraform का उपयोग करके सर्वर अवसंरचना सेटअप करें -> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. +> Indexers वैकल्पिक रूप से AWS, Microsoft Azure, या Alibaba का उपयोग कर सकते हैं। -#### Install prerequisites +#### आवश्यक पूर्वापेक्षाएँ स्थापित करें - Google Cloud SDK -- Kubectl command line tool +- Kubectl कमांड लाइन टूल - Terraform -#### Create a Google Cloud Project +#### Google Cloud प्रोजेक्ट बनाएं -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- Clone करें या [Indexer repository](https://github.com/graphprotocol/indexer) पर जाएं। -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- `./terraform` डायरेक्टरी पर जाएं, यही वह स्थान है जहां सभी कमांड निष्पादित की जानी चाहिए। ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Google Cloud के साथ प्रमाणीकृत करें और एक नया प्रोजेक्ट बनाएं। ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. 
+- Google Cloud Console के बिलिंग पेज का उपयोग करके नए प्रोजेक्ट के लिए बिलिंग सक्षम करें। -- Create a Google Cloud configuration. +- Google Cloud कॉन्फ़िगरेशन बनाएँ। ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- आवश्यक Google Cloud APIs सक्षम करें। ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- सर्विस अकाउंट बनाएं। ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- डाटाबेस और Kubernetes क्लस्टर के बीच peering सक्षम करें, जो अगले चरण में बनाया जाएगा। ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- न्यूनतम Terraform कॉन्फ़िगरेशन फ़ाइल बनाएँ (आवश्यकतानुसार अपडेट करें)। ```sh indexer= @@ -260,24 +260,24 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### टेराफॉर्म का उपयोग करके इंफ्रास्ट्रक्चर बनाएं -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. 
+कोई भी कमांड चलाने से पहले, [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) फ़ाइल को पढ़ें और इस डायरेक्टरी में `terraform.tfvars` नाम की एक फ़ाइल बनाएँ (या पिछली स्टेप में बनाई गई फ़ाइल को संशोधित करें)। प्रत्येक वेरिएबल के लिए, जहाँ आप डिफ़ॉल्ट मान को ओवरराइड करना चाहते हैं या जहाँ आपको कोई मान सेट करने की आवश्यकता है, `terraform.tfvars` में एक सेटिंग दर्ज करें। -- Run the following commands to create the infrastructure. +- इन्फ्रास्ट्रक्चर बनाने के लिए निम्नलिखित कमांड चलाएँ। ```sh -# Install required plugins +# आवश्यक प्लगइन इंस्टॉल करें terraform init -# View plan for resources to be created +# बनने वाले संसाधनों की योजना देखें terraform plan -# Create the resources (expect it to take up to 30 minutes) +# संसाधनों का निर्माण करें (इसे पूरा होने में 30 मिनट तक लग सकते हैं) terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +नए क्लस्टर के लिए क्रेडेंशियल्स को `~/.kube/config` में डाउनलोड करें और इसे अपने डिफ़ॉल्ट संदर्भ के रूप में सेट करें। ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the Indexer +#### Indexer के लिए Kubernetes घटकों का निर्माण -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- डायरेक्टरी `k8s/overlays` को एक नई डायरेक्टरी `$dir` में कॉपी करें, और `$dir/kustomization.yaml` में `bases` एंट्री को इस तरह समायोजित करें कि यह `k8s/base` डायरेक्टरी की ओर इशारा करे। -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- `$dir` की सभी फ़ाइलों को पढ़ें और टिप्पणियों में दिए गए निर्देशों के अनुसार मानों को समायोजित करें। -Deploy all resources with `kubectl apply -k $dir`.
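ऊपर बताए गए overlay-कॉपी और डिप्लॉय चरणों का एक छोटा shell स्केच — डायरेक्टरी नाम `k8s/my-indexer-overlay` केवल एक काल्पनिक उदाहरण है:

```sh
# k8s/overlays को एक नई overlay डायरेक्टरी में कॉपी करें (नाम उदाहरण के तौर पर)
dir=k8s/my-indexer-overlay
cp -r k8s/overlays "$dir"
# $dir/kustomization.yaml में bases एंट्री को k8s/base की ओर इंगित करें,
# फिर फ़ाइलों की समीक्षा करने के बाद सभी संसाधन परिनियोजित करें
kubectl apply -k "$dir"
```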
+सभी संसाधनों को `kubectl apply -k $dir` के साथ परिनियोजित करें। -### ग्राफ-नोड +### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) एक ओपन सोर्स Rust इम्प्लीमेंटेशन है जो Ethereum ब्लॉकचेन को इवेंट सोर्स करके एक डेटा स्टोर को डिटर्मिनिस्टिक तरीके से अपडेट करता है, जिसे GraphQL एंडपॉइंट के जरिए क्वेरी किया जा सकता है। डेवलपर्स सबग्राफ का उपयोग करके अपनी स्कीमा को परिभाषित करते हैं और ब्लॉकचेन से सोर्स किए गए डेटा को ट्रांसफॉर्म करने के लिए एक सेट ऑफ मैपिंग्स बनाते हैं, और Graph Node पूरी चेन को सिंक करने, नए ब्लॉक्स की मॉनिटरिंग करने और इसे एक GraphQL एंडपॉइंट के जरिए सर्व करने का काम संभालता है। -#### Getting started from source +#### सोर्स से शुरू करना -#### Install prerequisites +#### आवश्यक पूर्वापेक्षाएँ स्थापित करें - **Rust** @@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **उबंटू उपयोगकर्ताओं के लिए अतिरिक्त आवश्यकताएँ** - उबंटू पर एक ग्राफ-नोड चलाने के लिए कुछ अतिरिक्त पैकेजों की आवश्यकता हो सकती है। ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Setup +#### सेटअप -1. Start a PostgreSQL database server +1. PostgreSQL डेटाबेस सर्वर शुरू करें ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. 
[Graph Node](https://github.com/graphprotocol/graph-node) रिपॉजिटरी को क्लोन करें और सोर्स को बिल्ड करने के लिए `cargo build` कमांड चलाएँ। -3. Now that all the dependencies are setup, start the Graph Node: +3. अब जब सभी dependencies सेटअप हो गई हैं, तो Graph Node शुरू करें: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### Docker का उपयोग शुरू करना -#### Prerequisites +#### आवश्यक शर्तें -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Ethereum नोड** - डिफ़ॉल्ट रूप से, Docker Compose सेटअप आपके होस्ट मशीन पर Ethereum नोड से कनेक्ट करने के लिए mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) का उपयोग करेगा। आप `docker-compose.yaml` को अपडेट करके इस नेटवर्क नाम और URL को बदल सकते हैं। -#### Setup +#### सेटअप -1. Clone Graph Node and navigate to the Docker directory: +1. Graph Node को क्लोन करें और Docker डायरेक्टरी पर नेविगेट करें: ```sh git clone https://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml `using the included script: +2. सिर्फ़ Linux उपयोगकर्ताओं के लिए - शामिल की गई स्क्रिप्ट का उपयोग करके `docker-compose.yaml` में `host.docker.internal` की जगह होस्ट IP एड्रेस का उपयोग करें: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. 
एक लोकल Graph Node शुरू करें जो आपके Ethereum endpoint से कनेक्ट होगा: ```sh docker-compose up ``` -### Indexer components +### Indexer घटक -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: +नेटवर्क में सफलतापूर्वक भाग लेने के लिए लगभग निरंतर निगरानी और इंटरैक्शन की आवश्यकता होती है, इसलिए हमने Indexers की नेटवर्क भागीदारी को सुगम बनाने के लिए TypeScript एप्लिकेशनों का एक सूट बनाया है। तीन Indexer घटक हैं: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - यह एजेंट नेटवर्क और Indexer के अपने बुनियादी ढांचे की निगरानी करता है और यह प्रबंधित करता है कि ऑनचेन कौन-कौन से सबग्राफ डिप्लॉयमेंट इंडेक्स और आवंटित किए जाएंगे, तथा प्रत्येक के लिए कितना आवंटित किया जाएगा। -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer सेवा** - यह एकमात्र घटक है जिसे बाहरी रूप से एक्सपोज़ करने की आवश्यकता होती है। यह सेवा सबग्राफ क्वेरीज़ को Graph Node तक पहुंचाती है, क्वेरी भुगतान के लिए स्टेट चैनल प्रबंधित करती है, और गेटवे जैसे क्लाइंट्स को महत्वपूर्ण निर्णय लेने की जानकारी साझा करती है। -- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.
+- **Indexer CLI** - कमांड लाइन इंटरफ़ेस जो Indexer एजेंट को प्रबंधित करने के लिए उपयोग किया जाता है। यह Indexers को लागत मॉडल, मैनुअल अलोकेशन, एक्शन कतार, और Indexing नियमों को प्रबंधित करने की अनुमति देता है। -#### Getting started +#### शुरू करना -The Indexer agent and Indexer service should be co-located with your Graph Node infrastructure. There are many ways to set up virtual execution environments for your Indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://discord.gg/graphprotocol)! Remember to [stake in the protocol](/indexing/overview/#stake-in-the-protocol) before starting up your Indexer components! +Indexer agent और Indexer service को आपके Graph Node इंफ्रास्ट्रक्चर के साथ ही रखना चाहिए। आपके Indexer components के लिए वर्चुअल execution environments सेटअप करने के कई तरीके हैं; यहाँ हम बताएंगे कि उन्हें baremetal पर NPM पैकेज या source से कैसे चलाया जाए, या फिर Kubernetes और Docker के ज़रिए Google Cloud Kubernetes Engine पर कैसे रन किया जाए। अगर ये सेटअप उदाहरण आपके इंफ्रास्ट्रक्चर के लिए उपयुक्त नहीं हैं, तो संभवतः कोई कम्युनिटी गाइड उपलब्ध होगी, हमें [Discord](https://discord.gg/graphprotocol) पर आकर हैलो कहें! शुरू करने से पहले [protocol में stake करें!](/indexing/overview/#stake-in-the-protocol) -#### From NPM packages +#### NPM पैकेजों से - ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### स्रोत से ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... 
``` -#### Using docker +#### Docker का उपयोग -- Pull images from the registry +- रजिस्ट्री से इमेजेस प्राप्त करें ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +या स्रोत से स्थानीय रूप से इमेजेस बनाएं ```sh # Indexer service @@ -442,24 +442,24 @@ docker build \ -t indexer-agent:latest \ ``` -- Run the components +- कंपोनेंट्स चलाएं ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). +**नोट**: कंटेनर शुरू करने के बाद, Indexer सेवा [http://localhost:7600](http://localhost:7600) पर उपलब्ध होगी और Indexer एजेंट [http://localhost:18000/](http://localhost:18000/) पर Indexer प्रबंधन API को एक्सपोज़ करेगा। -#### Using K8s and Terraform +#### कुबेरनेट्स (K8s) और टेराफॉर्म का उपयोग -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section +[Google Cloud पर Terraform का उपयोग करके सर्वर इंफ्रास्ट्रक्चर सेटअप करें](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) अनुभाग देखें -#### Usage +#### उपयोग -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`).
+> **नोट**: सभी रनटाइम कॉन्फ़िगरेशन वेरिएबल्स या तो स्टार्टअप के समय कमांड पर पैरामीटर्स के रूप में लागू किए जा सकते हैं या फिर `COMPONENT_NAME_VARIABLE_NAME` प्रारूप में एनवायरनमेंट वेरिएबल्स के रूप में उपयोग किए जा सकते हैं (उदाहरण: `INDEXER_AGENT_ETHEREUM`)। -#### Indexer agent +#### Indexer एजेंट ```sh graph-indexer-agent start \ @@ -488,7 +488,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Indexer सेवा ```sh SERVER_HOST=localhost \ @@ -516,56 +516,56 @@ graph-indexer-service start \ #### Indexer CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +Indexer CLI, [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) के लिए एक प्लगइन है, जिसे टर्मिनल में `graph indexer` कमांड के माध्यम से एक्सेस किया जा सकता है। ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### Indexer CLI का उपयोग करके Indexer प्रबंधन -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+**Indexer Management API** के साथ इंटरैक्ट करने के लिए सुझाया गया टूल **Indexer CLI** है, जो कि **Graph CLI** का एक एक्सटेंशन है। Indexer agent को एक Indexer से इनपुट की आवश्यकता होती है ताकि वह Indexer की ओर से नेटवर्क के साथ स्वायत्त रूप से इंटरैक्ट कर सके। Indexer agent व्यवहार को परिभाषित करने के लिए **allocation management** मोड और **indexing rules** का उपयोग किया जाता है। Auto mode में, एक Indexer **indexing rules** का उपयोग करके अपनी विशिष्ट रणनीति लागू कर सकता है कि वह किन सबग्राफ को इंडेक्स करेगा और किनके लिए क्वेरी सर्व करेगा। इन नियमों को GraphQL API के माध्यम से प्रबंधित किया जाता है, जिसे agent द्वारा सर्व किया जाता है और यह Indexer Management API के रूप में जाना जाता है। Manual mode में, एक Indexer **actions queue** का उपयोग करके allocation actions बना सकता है और उन्हें निष्पादित करने से पहले स्पष्ट रूप से अनुमोदित कर सकता है। Oversight mode में, **indexing rules** का उपयोग **actions queue** को भरने के लिए किया जाता है और इन्हें निष्पादित करने से पहले भी स्पष्ट अनुमोदन की आवश्यकता होती है। -#### Usage +#### उपयोग -The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +**Indexer CLI** Indexer agent से कनेक्ट होता है, आमतौर पर पोर्ट-फॉरवर्डिंग के माध्यम से, जिससे CLI को उसी सर्वर या क्लस्टर पर चलाने की ज़रूरत नहीं होती। शुरुआत करने और कुछ संदर्भ देने के लिए, यहां CLI का संक्षिप्त विवरण दिया गया है। -- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely.
(Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Indexer प्रबंधन API से कनेक्ट करें। आमतौर पर, सर्वर से कनेक्शन पोर्ट फॉरवर्डिंग के माध्यम से खोला जाता है, जिससे CLI को आसानी से रिमोटली ऑपरेट किया जा सकता है। (उदाहरण: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent. +- `graph indexer rules get [options] [ ...]` - एक या अधिक इंडेक्सिंग नियम प्राप्त करें: सभी नियम प्राप्त करने के लिए `` के रूप में `all` का उपयोग करें, या वैश्विक डिफ़ॉल्ट प्राप्त करने के लिए `global` का उपयोग करें। एक अतिरिक्त आर्ग्यूमेंट `--merged` का उपयोग किया जा सकता है, जो यह निर्दिष्ट करता है कि डिप्लॉयमेंट-विशिष्ट नियम वैश्विक नियम के साथ मर्ज किए गए हैं। Indexer agent में नियम इसी तरह लागू होते हैं। -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - एक या अधिक indexing नियम सेट करें। -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - यदि उपलब्ध हो तो किसी सबग्राफ डिप्लॉयमेंट का Indexing शुरू करें और इसका `decisionBasis` `always` पर सेट करें, ताकि Indexer एजेंट इसे हमेशा Index करने के लिए चुने। यदि ग्लोबल नियम `always` पर सेट है, तो नेटवर्क पर उपलब्ध सभी सबग्राफ को Index किया जाएगा। -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.
+- `graph indexer rules stop [options] ` - किसी डिप्लॉयमेंट की इंडेक्सिंग को रोकें और इसका `decisionBasis` `never` पर सेट करें, जिससे इंडेक्स किए जाने वाले डिप्लॉयमेंट चुनते समय इसे छोड़ दिया जाएगा। -- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — किसी deployment के लिए `decisionBasis` को `rules` पर सेट करें, ताकि Indexer agent यह तय करने के लिए indexing rules का उपयोग करे कि इस deployment को index करना है या नहीं। -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - `all` का उपयोग करके एक या अधिक कार्य प्राप्त करें, या सभी कार्य प्राप्त करने के लिए `action-id` को खाली छोड़ दें। एक निश्चित स्थिति वाले सभी कार्यों को प्रदर्शित करने के लिए एक अतिरिक्त आर्गुमेंट `--status` का उपयोग किया जा सकता है। -- `graph indexer action queue allocate ` - Queue allocation action +- `graph indexer action queue allocate ` - आवंटन क्रिया को कतारबद्ध करें -- `graph indexer action queue reallocate ` - Queue reallocate action +- `graph indexer action queue reallocate ` - पुनः आवंटन क्रिया को कतारबद्ध करें -- `graph indexer action queue unallocate ` - Queue unallocate action +- `graph indexer action queue unallocate ` - आवंटन हटाने की क्रिया को कतारबद्ध करें -- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator +- `graph indexer actions cancel [ ...]` - यदि ID निर्दिष्ट नहीं है, तो कतार में सभी कार्रवाइयों को रद्द करें, अन्यथा स्पेस से अलग की गई आईडी की सूची को रद्द करें। -- `graph indexer actions approve [ ...]` - Approve multiple actions for execution +- `graph indexer actions approve [ ...]` - कई क्रियाओं को निष्पादन के लिए अनुमोदित करें -- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately +- `graph indexer actions execute approve` - वर्कर को स्वीकृत क्रियाओं को तुरंत निष्पादित करने के लिए बाध्य करें -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +सभी कमांड जो आउटपुट में नियम दिखाते हैं, वे `-output` आर्गुमेंट का उपयोग करके समर्थित आउटपुट फ़ॉर्मेट (`table`, `yaml`, और `json`) में से किसी एक को चुन सकते हैं। -#### Indexing rules +#### Indexing नियम -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing नियमों को या तो वैश्विक डिफ़ॉल्ट के रूप में या विशिष्ट सबग्राफ डिप्लॉयमेंट्स के लिए उनकी IDs का उपयोग करके लागू किया जा सकता है। `deployment` और `decisionBasis` फ़ील्ड अनिवार्य हैं, जबकि सभी अन्य फ़ील्ड वैकल्पिक हैं। जब किसी Indexing नियम में `rules` को `decisionBasis` के रूप में सेट किया जाता है, तो Indexer एजेंट उस नियम के गैर-शून्य (non-null) थ्रेशोल्ड मानों की तुलना संबंधित डिप्लॉयमेंट के लिए नेटवर्क से प्राप्त मानों से करेगा। यदि सबग्राफ डिप्लॉयमेंट के मान किसी भी थ्रेशोल्ड से ऊपर (या नीचे) होते हैं, तो इसे Indexing के लिए चुना जाएगा। -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
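ऊपर वर्णित थ्रेशोल्ड-आधारित नियमों का एक छोटा स्केच — यहाँ दिए गए मान और की-वैल्यू जोड़े केवल काल्पनिक उदाहरण हैं, यह मानते हुए कि `rules set` की-वैल्यू जोड़े स्वीकार करता है:

```sh
# ग्लोबल डिफ़ॉल्ट नियम: decisionBasis को rules पर सेट करें और एक minStake थ्रेशोल्ड दें
graph indexer rules set global decisionBasis rules minStake 5
# ग्लोबल नियम को डिप्लॉयमेंट-विशिष्ट नियमों के साथ मर्ज करके देखें
graph indexer rules get all --merged
```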
+उदाहरण के लिए, अगर global rule का `minStake` **5** (GRT) है, तो कोई भी सबग्राफ deployment जिसमें 5 (GRT) से ज्यादा stake allocated है, उसे index किया जाएगा। Threshold rules में शामिल हैं `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, और `minAverageQueryFees`। -Data model: +डेटा मॉडल: ```graphql type IndexingRule { @@ -599,7 +599,7 @@ IndexingDecisionBasis { } ``` -Example usage of indexing rule: +Indexing नियम के उपयोग का उदाहरण: ``` graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK @@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK ``` -#### Actions queue CLI +#### एक्शन कतार CLI -The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue. +Indexer-cli एक `actions` मॉड्यूल प्रदान करता है जो मैन्युअल रूप से एक्शन कतार के साथ काम करने के लिए उपयोग किया जाता है। यह एक्शन कतार के साथ इंटरैक्ट करने के लिए indexer management server द्वारा होस्ट की गई **GraphQL API** का उपयोग करता है। -The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain.
The general flow will look like: +एक्शन एक्सीक्यूशन वर्कर केवल तभी कतार से आइटम उठाकर निष्पादित करेगा जब उनका `ActionStatus = approved` होगा। अनुशंसित मार्ग में, एक्शन को कतार में ActionStatus = queued के साथ जोड़ा जाता है, इसलिए उन्हें ऑनचेन निष्पादित होने के लिए अनुमोदित किया जाना चाहिए। सामान्य प्रवाह इस प्रकार होगा: -- Action added to the queue by the 3rd party optimizer tool or indexer-cli user -- Indexer can use the `indexer-cli` to view all queued actions -- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input. -- The execution worker regularly polls the queue for approved actions. It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`. -- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode. -- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken. 
+- 3rd party ऑप्टिमाइज़र टूल या indexer-cli उपयोगकर्ता द्वारा कतार में क्रिया जोड़ी गई है +- Indexer `indexer-cli` का उपयोग करके सभी कतारबद्ध क्रियाओं को देख सकता है। +- Indexer (या अन्य सॉफ़्टवेयर) `indexer-cli` का उपयोग करके कतार में क्रियाओं को मंजूरी या रद्द कर सकता है। मंजूरी और रद्द करने वाले आदेश एक्शन आईडीज़ के एक एरे को इनपुट के रूप में लेते हैं। +- निर्वाचन कार्यकर्ता नियमित रूप से क्यू से अनुमोदित क्रियाओं के लिए पोल करता है। यह क्यू से `approved` क्रियाओं को प्राप्त करेगा, उन्हें निष्पादित करने का प्रयास करेगा, और निष्पादन की स्थिति के आधार पर डाटाबेस में मानों को `success` या `failed` के रूप में अपडेट करेगा। +- अगर कोई क्रिया सफल होती है तो कर्मचारी यह सुनिश्चित करेगा कि एक indexing नियम मौजूद हो जो एजेंट को यह बताए कि आगे बढ़ते हुए आवंटन को कैसे प्रबंधित करना है, यह उस स्थिति में उपयोगी होता है जब एजेंट `auto` या `oversight` मोड में हो और मैन्युअल क्रियाएं ली जा रही हों। +- Indexer एक्शन कतार की निगरानी कर सकता है ताकि एक्शन निष्पादन के इतिहास को देखा जा सके और यदि आवश्यक हो, तो असफल निष्पादन वाले एक्शन आइटम्स को पुनः अनुमोदित और अपडेट किया जा सके। एक्शन कतार उन सभी एक्शनों का इतिहास प्रदान करती है जो कतारबद्ध और लिए गए हैं। -Data model: +डेटा मॉडल: ```graphql Type ActionInput { @@ -657,7 +657,7 @@ ActionType { } ``` -Example usage from source: +स्रोत से उदाहरण उपयोग: ```bash graph indexer actions get all @@ -677,141 +677,141 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +Supported action types के लिए आवंटन प्रबंधन की विभिन्न इनपुट आवश्यकताएँ होती हैं: -- `Allocate` - allocate stake to a specific subgraph deployment +- Allocate - किसी विशिष्ट सबग्राफ डिप्लॉयमेंट के लिए स्टेक आवंटित करें - - required action params: + - आवश्यक क्रिया पैरामीटर्स: - deploymentID - - amount + - राशि -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `अनुदेश हटाएं` - आवंटन बंद करें, जिससे दांव को मुक्त किया जा सके 
और इसे कहीं और पुनः आवंटित किया जा सके। - - required action params: + - आवश्यक क्रिया पैरामीटर्स: - allocationID - deploymentID - - optional action params: + - वैकल्पिक क्रिया पैरामीटर्स: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (दिए गए POI का उपयोग तब भी करें यदि यह ग्राफ-नोड द्वारा प्रदान किए गए से मेल नहीं खाता) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - परमाणु रूप से आवंटन को बंद करें और उसी Subgraph परिनियोजन के लिए एक नया आवंटन खोलें - - required action params: + - आवश्यक क्रिया पैरामीटर्स: - allocationID - deploymentID - - amount - - optional action params: + - amount + - वैकल्पिक क्रिया पैरामीटर्स: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (दिए गए POI का उपयोग करने के लिए मजबूर करता है, भले ही वह ग्राफ-नोड द्वारा प्रदान किए गए डेटा से मेल न खाए) -#### Cost models +#### लागत मॉडल -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +कॉस्ट मॉडल बाज़ार और क्वेरी विशेषताओं के आधार पर क्वेरी के लिए डायनामिक मूल्य निर्धारण प्रदान करते हैं। Indexer Service प्रत्येक सबग्राफ के लिए गेटवे के साथ एक कॉस्ट मॉडल साझा करता है, जिसके लिए वे क्वेरी का जवाब देने का इरादा रखते हैं। बदले में, गेटवे इस कॉस्ट मॉडल का उपयोग प्रति क्वेरी Indexer चयन निर्णय लेने और चुने गए Indexers के साथ भुगतान पर बातचीत करने के लिए करते हैं। #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query.
For each top-level query, the first statement which matches it determines the price for that query. +Agora भाषा क्वेरी के लिए लागत मॉडल घोषित करने के लिए एक लचीला प्रारूप प्रदान करती है। एक Agora मूल्य मॉडल बयानों का एक क्रम होता है जो प्रत्येक शीर्ष-स्तरीय GraphQL क्वेरी के लिए क्रम में निष्पादित होता है। प्रत्येक शीर्ष-स्तरीय क्वेरी के लिए, पहला कथन जो उससे मेल खाता है, उस क्वेरी के लिए मूल्य निर्धारित करता है। -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +एक कथन में एक predicate होता है, जिसका उपयोग GraphQL queries से मिलान करने के लिए किया जाता है, और एक cost expression होता है, जो मूल्यांकन किए जाने पर दशमलव GRT में एक लागत आउटपुट करता है। किसी क्वेरी में नामित आर्गुमेंट स्थिति के मानों को predicate में कैप्चर किया जा सकता है और expression में उपयोग किया जा सकता है। Globals भी सेट किए जा सकते हैं और expression में प्लेसहोल्डर्स के लिए प्रतिस्थापित किए जा सकते हैं। -Example cost model: +उदाहरण लागत मॉडल: ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +#यह कथन skip मान को प्राप्त करता है, +#शर्त में एक बूलियन अभिव्यक्ति का उपयोग करता है ताकि skip का उपयोग करने वाले विशिष्ट क्वेरीज़ का मिलान किया जा सके, +#और skip मान और SYSTEM_LOAD ग्लोबल के आधार पर लागत की गणना करने के लिए लागत अभिव्यक्ति का उपयोग करता है। query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost +#यह डिफ़ॉल्ट किसी भी GraphQL अभिव्यक्ति से मेल खाएगा। +#यह ग्लोबल का उपयोग करके लागत की गणना करने के लिए अभिव्यक्ति में प्रतिस्थापित करता है। default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +उपरोक्त मॉडल का उपयोग करके उदाहरण क्वेरी लागत: -| Query | Price | +| Query | कीमत | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### लागत मॉडल लागू करना -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +लागत मॉडल को Indexer CLI के माध्यम से लागू किया जाता है, जो उन्हें Indexer एजेंट के Indexer Management API को पास करता है ताकि उन्हें डेटाबेस में संग्रहीत किया जा सके। इसके बाद, Indexer Service उन्हें लेगी और जब भी गेटवे इसकी मांग करेंगे, तो उन्हें लागत मॉडल प्रदान करेगी। ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## नेटवर्क के साथ इंटरैक्ट करना -### Stake in the protocol +### प्रोटोकॉल में staking -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions.
+नेटवर्क में एक Indexer के रूप में भाग लेने के पहले कदम हैं प्रोटोकॉल को अनुमोदित करना, धन स्टेक करना, और (वैकल्पिक रूप से) दिन-प्रतिदिन की प्रोटोकॉल इंटरैक्शन के लिए एक ऑपरेटर पता सेट करना। -> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> नोट: contract इंटरैक्शन के लिए इन निर्देशों में Remix का उपयोग किया जाएगा, लेकिन आप अपनी पसंद के किसी भी टूल का उपयोग कर सकते हैं ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), और [MyCrypto](https://www.mycrypto.com/account) कुछ अन्य ज्ञात टूल हैं)। -Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network. +एक बार जब किसी Indexer ने प्रोटोकॉल में GRT स्टेक कर दिया है, तो [Indexer components](/indexing/overview/#indexer-components) को शुरू किया जा सकता है और वे नेटवर्क के साथ अपनी इंटरैक्शन शुरू कर सकते हैं। -#### Approve tokens +#### टोकन स्वीकृत करें -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. एक ब्राउज़र में [Remix app](https://remix.ethereum.org/) खोलें -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. `File Explorer` में **GraphToken.abi** नामक फ़ाइल बनाएं जिसमें [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json) हो। -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. `GraphToken.abi` चयनित और संपादक में खुला होने पर, Remix इंटरफ़ेस में `Deploy and run transactions` अनुभाग पर स्विच करें। -4.
Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. पर्यावरण के अंतर्गत `Injected Web3` चुनें और `Account` के अंतर्गत अपना Indexer पता चुनें। -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. GraphToken contract एड्रेस सेट करें - `At Address` के बगल में GraphToken कॉन्ट्रैक्ट एड्रेस (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) पेस्ट करें और लागू करने के लिए `At address` बटन पर क्लिक करें। -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. `approve(spender, amount)` फ़ंक्शन को कॉल करके Staking कॉन्ट्रैक्ट को अप्रूव करें। `spender` को Staking contract एड्रेस (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) से भरें और `amount` में स्टेक किए जाने वाले टोकन (wei में) डालें। -#### Stake tokens +#### टोकन स्टेक करें -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. [Remix app](https://remix.ethereum.org/) को ब्राउज़र में खोलें -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. `File Explorer` में **Staking.abi** नाम की एक फ़ाइल बनाएं जिसमें स्टेकिंग ABI हो। -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. `Staking.abi` को संपादक में चयनित और खुला रखने के साथ, Remix इंटरफ़ेस में `Deploy and run transactions` अनुभाग पर स्विच करें। -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. पर्यावरण के अंतर्गत `Injected Web3` चुनें और `Account` के अंतर्गत अपना Indexer पता चुनें। -5.
Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. Staking कॉन्ट्रैक्ट एड्रेस सेट करें - `At Address` के पास Staking contract एड्रेस (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) पेस्ट करें और इसे लागू करने के लिए `At address` बटन पर क्लिक करें। -6. Call `stake()` to stake GRT in the protocol. +6. `stake()` को कॉल करें ताकि प्रोटोकॉल में GRT को स्टेक किया जा सके। -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (वैकल्पिक) Indexers दूसरे पते को अपने Indexer इंफ्रास्ट्रक्चर के लिए ऑपरेटर के रूप में अनुमोदित कर सकते हैं ताकि उन कुंजियों को अलग किया जा सके जो धन को नियंत्रित करती हैं और जो दिन-प्रतिदिन की क्रियाएँ जैसे सबग्राफ पर आवंटन करना और (भुगतान किए गए) क्वेरीज़ की सेवा करना कर रही हैं। ऑपरेटर सेट करने के लिए, `setOperator()` को ऑपरेटर पते के साथ कॉल करें। -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8.
(वैकल्पिक) पुरस्कारों के वितरण को नियंत्रित करने और रणनीतिक रूप से Delegators को आकर्षित करने के लिए, Indexers अपने delegation पैरामीटर्स को अपडेट कर सकते हैं। इसके लिए वे `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), और `cooldownBlocks` (ब्लॉक्स की संख्या) को अपडेट कर सकते हैं। ऐसा करने के लिए, `setDelegationParameters()` को कॉल करें। निम्नलिखित उदाहरण में `queryFeeCut` को सेट किया गया है ताकि 95% क्वेरी रिबेट्स Indexer को और 5% Delegators को वितरित किए जाएं, `indexingRewardCut` को सेट किया गया है ताकि 60% Indexing पुरस्कार Indexer को और 40% Delegators को वितरित किए जाएं, और `cooldownBlocks` अवधि को 500 ब्लॉक्स पर सेट किया गया है। ``` setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### delegation पैरामीटर सेट करना -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. +`setDelegationParameters()` फ़ंक्शन [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) में आवश्यक है, जो Indexers को उन मापदंडों को सेट करने की अनुमति देता है जो उनके Delegators के साथ इंटरैक्शन को परिभाषित करते हैं, जिससे उनके इनाम साझा करने और delegation क्षमता को प्रभावित किया जाता है। -### How to set delegation parameters +### delegation मापदंड सेट करने का तरीका -To set the delegation parameters using Graph Explorer interface, follow these steps: +Graph Explorer इंटरफेस का उपयोग करके delegation पैरामीटर सेट करने के लिए, इन चरणों का पालन करें: -1. Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. Connect the wallet you have as a signer. -4.
Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. [Graph Explorer](https://thegraph.com/explorer/) पर जाएं। +2. अपने वॉलेट को कनेक्ट करें। मल्टीसिग (जैसे Gnosis Safe) चुनें और फिर मुख्य नेटवर्क (mainnet) का चयन करें। ध्यान दें: आपको इस प्रक्रिया को Arbitrum One के लिए दोहराने की आवश्यकता होगी। +3. अपने वॉलेट को एक साइनर के रूप में कनेक्ट करें। +4. `सेटिंग्स` अनुभाग पर जाएं और `delegation पैरामीटर्स` का चयन करें। इन पैरामीटर्स को वांछित सीमा के भीतर प्रभावी कट प्राप्त करने के लिए कॉन्फ़िगर किया जाना चाहिए। प्रदान किए गए इनपुट फ़ील्ड में मान दर्ज करने पर, इंटरफ़ेस स्वचालित रूप से प्रभावी कट की गणना करेगा। वांछित प्रभावी कट प्रतिशत प्राप्त करने के लिए इन मानों को आवश्यकतानुसार समायोजित करें। +5. लेन-देन (transaction) को नेटवर्क पर जमा करें। -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> नोट: इस लेन-देन (transaction) की पुष्टि मल्टीसिग वॉलेट साइनर्स द्वारा की जानी होगी। -### The life of an allocation +### एक आवंटन का जीवन -After being created by an Indexer a healthy allocation goes through two states. +एक Indexer द्वारा बनाए जाने के बाद, एक स्वस्थ आवंटन दो अवस्थाओं से गुजरता है। -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **सक्रिय** - एक बार जब ऑनचेन पर आवंटन बनाया जाता है ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)), तो इसे **सक्रिय** माना जाता है। Indexer के स्वयं के और/या प्रत्यायोजित स्टेक का एक हिस्सा किसी सबग्राफ परिनियोजन की ओर आवंटित किया जाता है, जो उन्हें उस सबग्राफ परिनियोजन के लिए इंडेक्सिंग पुरस्कारों का दावा करने और क्वेरीज़ को सर्व करने की अनुमति देता है। Indexer एजेंट, Indexer नियमों के आधार पर आवंटन बनाने का प्रबंधन करता है। -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). +- **बंद** - एक Indexer एक आवंटन को बंद करने के लिए स्वतंत्र होता है जब 1 युग (epoch) बीत चुका हो ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) या उनका Indexer एजेंट **maxAllocationEpochs** (वर्तमान में 28 दिन) के बाद स्वचालित रूप से आवंटन बंद कर देगा। जब कोई आवंटन एक वैध प्रूफ ऑफ indexing (POI) के साथ बंद किया जाता है, तो उनके indexing पुरस्कार Indexer और उसके Delegators को वितरित किए जाते हैं ([अधिक जानें](/indexing/overview/#how-are-indexing-rewards-distributed))। -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers को अनुशंसा दी जाती है कि वे onchain पर allocation बनाने से पहले सबग्राफ deployments को chainhead तक sync करने के लिए offchain syncing सुविधा का उपयोग करें। यह सुविधा विशेष रूप से उन सबग्राफ के लिए उपयोगी है जिन्हें sync होने में 28 epochs से अधिक समय लग सकता है या जिनके अनिश्चित रूप से विफल होने की संभावना हो सकती है। diff --git a/website/src/pages/hi/indexing/supported-network-requirements.mdx b/website/src/pages/hi/indexing/supported-network-requirements.mdx index 647eda3e6651..fbc33db94a64 100644 --- a/website/src/pages/hi/indexing/supported-network-requirements.mdx +++ b/website/src/pages/hi/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_अंतिम बार अपडेट किया गया 22 जून 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/hi/indexing/tap.mdx b/website/src/pages/hi/indexing/tap.mdx index d2a42ac00ea5..9aff98fe1eae 100644 --- a/website/src/pages/hi/indexing/tap.mdx +++ b/website/src/pages/hi/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP माइग्रेशन गाइड +title: GraphTally Guide --- -The Graph के नए भुगतान प्रणाली, Timeline Aggregation Protocol, TAP के बारे में जानें। यह प्रणाली तेज, कुशल माइक्रोट्रांजेक्शन प्रदान करती है जिसमें विश्वास को न्यूनतम किया गया है। +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. -## अवलोकन +## Overview -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) मौजूदा Scalar भुगतान प्रणाली का एक ड्रॉप-इन प्रतिस्थापन है। यह निम्नलिखित प्रमुख सुविधाएँ प्रदान करता है: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - सूक्ष्म भुगतानों को कुशलता से संभालता है। - ऑनचेन लेनदेन और लागतों में समेकन की एक परत जोड़ता है। - प्राप्तियों और भुगतान पर Indexers को नियंत्रण की अनुमति देता है, प्रश्नों के लिए भुगतान की गारंटी देता है। - यह विकेन्द्रीकृत, विश्वास रहित गेटवे को सक्षम बनाता है और कई भेजने वालों के लिए indexer-service के प्रदर्शन में सुधार करता है। -## विशिष्टताएँ +### विशिष्टताएँ -TAP एक प्रेषक को एक प्राप्तकर्ता को कई भुगतान करने की अनुमति देता है, TAP Receipts, जो इन भुगतानों को एकल भुगतान में एकत्र करता है, जिसे Receipt Aggregate Voucher भी कहा जाता है, जिसे RAV के नाम से भी जाना जाता है। यह एकत्रित भुगतान फिर ब्लॉकचेन पर सत्यापित किया जा सकता है, लेनदेन की संख्या को कम करता है और भुगतान प्रक्रिया को सरल बनाता है। +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. प्रत्येक क्वेरी के लिए, गेटवे आपको एक साइन किए गए रिसिप्ट भेजेगा जिसे आपके डेटाबेस में संग्रहीत किया जाएगा। फिर, इन क्वेरियों को एक अनुरोध के माध्यम से एक टेप-एजेंट द्वारा समेकित किया जाएगा। इसके बाद, आपको एक RAV प्राप्त होगा। आप नए रिसिप्ट्स के साथ इसे भेजकर RAV को अपडेट कर सकते हैं और इससे एक नया RAV उत्पन्न होगा जिसमें बढ़ी हुई राशि होगी। @@ -59,14 +59,14 @@ TAP एक प्रेषक को एक प्राप्तकर्ता | हस्ताक्षरकर्ता | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | संकेन्द्रीयकर्ता | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### आवश्यक शर्तें -एक Indexer चलाने की सामान्य आवश्यकताओं के अलावा, आपको TAP अपडेट को क्वेरी करने के लिए एक tap-escrow-subgraph एंडपॉइंट की आवश्यकता होगी। आप TAP को क्वेरी करने के लिए The Graph Network का उपयोग कर सकते हैं या अपने graph-node पर स्वयं होस्ट कर सकते हैं। +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. 
-- [Graph TAP Arbitrum Sepolia subgraph (The Graph टेस्टनेट के लिए)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (The Graph टेस्टनेट के लिए)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (The Graph mainnet के लिए)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> नोट: `indexer-agent` वर्तमान में इस subgraph का indexing नेटवर्क subgraph डिप्लॉयमेंट की तरह नहीं करता है। इसके परिणामस्वरूप, आपको इसे मैन्युअल रूप से इंडेक्स करना होगा। +> `indexer-agent` वर्तमान में इस सबग्राफ की Indexing उसी तरह नहीं करता जैसे वह नेटवर्क सबग्राफ डिप्लॉयमेंट के लिए करता है। इसलिए, आपको इसे मैन्युअल रूप से इंडेक्स करना होगा। ## माइग्रेशन गाइड @@ -79,7 +79,7 @@ TAP एक प्रेषक को एक प्राप्तकर्ता 1. **Indexer एजेंट** - उसी प्रक्रिया का पालन करें'(https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - नया तर्क --tap-subgraph-endpoint दें ताकि नए TAP कोडपाथ्स को सक्रिय किया जा सके और TAP RAVs को रिडीम करने की अनुमति मिल सके। + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer सेवा** @@ -99,73 +99,72 @@ TAP एक प्रेषक को एक प्राप्तकर्ता "कम से कम कॉन्फ़िगरेशन के लिए, निम्नलिखित टेम्पलेट का उपयोग करें:" ```bash -#आपको नीचे दिए गए *सभी* मान अपनी सेटअप के अनुसार बदलने होंगे। -*नीचे दिए गए कुछ कॉन्फ़िग वैल्यू ग्लोबल ग्राफ नेटवर्क वैल्यू हैं, जिन्हें आप यहां पा सकते हैं: +#आपको नीचे दिए गए सभी मानों को अपने सेटअप के अनुसार बदलना होगा। # - +#कुछ कॉन्फ़िगरेशन नीचे वैश्विक ग्राफ नेटवर्क मान हैं, जिन्हें आप यहां देख सकते हैं: +# +#प्रो टिप: यदि आपको इस कॉन्फ़िगरेशन में कुछ मान वातावरण से लोड करने की आवश्यकता है, तो आप... 
+#पर्यावरणीय वेरिएबल्स के साथ अधिलेखित किया जा सकता है। उदाहरण के लिए, निम्नलिखित को +#[PREFIX]_DATABASE_POSTGRESURL से बदला जा सकता है, जहां PREFIX `INDEXER_SERVICE` या `TAP_AGENT` हो सकता है: # -#प्रो टिप: यदि आपको इस कॉन्फ़िग में कुछ मान environment से लोड करने की आवश्यकता है, तो आप environment वेरिएबल्स का उपयोग करके ओवरराइट कर सकते हैं। उदाहरण के लिए, निम्नलिखित को [PREFIX]_DATABASE_POSTGRESURL से बदला जा सकता है, जहां PREFIX `INDEXER_SERVICE` या `TAP_AGENT` हो सकता है: -[database] -#postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" +#[database] +#postgres_url="postgresql://indexer:${POSTGRES_PASSWORD} +@postgres:5432/indexer_components_0" + [indexer] indexer_address = "0x1111111111111111111111111111111111111111" operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" [database] - -Postgres डेटाबेस का URL जो indexer components के लिए उपयोग किया जाता है। वही डेटाबेस -जो indexer-agent द्वारा उपयोग किया जाता है। यह अपेक्षित है कि indexer-agent आवश्यक तालिकाएं बनाएगा। +# Indexer घटकों के लिए उपयोग किए जाने वाले Postgres डेटाबेस का URL। वही डेटाबेस +# जिसका उपयोग `indexer-agent` द्वारा किया जाता है। यह अपेक्षित है कि `indexer-agent` +#आवश्यक तालिकाएँ बनाएगा। postgres_url = "postgres://postgres@postgres:5432/postgres" [graph_node] -आपके graph-node के क्वेरी एंडपॉइंट का URL +# आपके graph-node के क्वेरी एंडपॉइंट का URL query_url = "" - -आपके graph-node के स्टेटस एंडपॉइंट का URL +# आपके graph-node के स्टेटस एंडपॉइंट का URL status_url = "" [subgraphs.network] -Graph Network subgraph के लिए क्वेरी URL। +# Graph Network सबग्राफ के लिए क्वेरी URL। query_url = "" - -वैकल्पिक, local graph-node में देखने के लिए deployment, यदि स्थानीय रूप से इंडेक्स किया गया है। -subgraph को स्थानीय रूप से इंडेक्स करना अनुशंसित है। -नोट: केवल query_url या deployment_id का उपयोग करें +# वैकल्पिक, स्थानीय `graph-node` में खोजने के लिए deployment, यदि स्थानीय रूप से इंडेक्स किया गया हो। +# सबग्राफ को स्थानीय रूप से 
इंडेक्स करना अनुशंसित है। +# नोट: केवल `query_url` या `deployment_id` का उपयोग करें deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -Escrow subgraph के लिए क्वेरी URL। +#Escrow Subgraph के लिए क्वेरी URL। query_url = "" - -वैकल्पिक, local graph-node में देखने के लिए deployment, यदि स्थानीय रूप से इंडेक्स किया गया है। -subgraph को स्थानीय रूप से इंडेक्स करना अनुशंसित है। -नोट: केवल query_url या deployment_id का उपयोग करें +#वैकल्पिक, स्थानीय `graph-node` में मौजूद deployment, यदि इसे स्थानीय रूप से index किया गया हो। +#स्थानीय रूप से सबग्राफ को index करना अनुशंसित है। +#नोट: केवल query_url या deployment_id में से किसी एक का उपयोग करें deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [blockchain] - -उस नेटवर्क का chain ID जिस पर graph network चल रहा है +#उस नेटवर्क का chain ID जिस पर The Graph नेटवर्क चल रहा है chain_id = 1337 - -TAP के receipt aggregate voucher (RAV) verifier का कॉन्ट्रैक्ट एड्रेस। +#TAP की receipt aggregate voucher (RAV) verifier का कॉन्ट्रैक्ट पता। receipts_verifier_address = "0x2222222222222222222222222222222222222222" ######################################## -#tap-agent के लिए विशिष्ट कॉन्फ़िगरेशन# +#tap-agent के लिए विशिष्ट कॉन्फ़िगरेशन ######################################## [tap] -#यह वह फीस की मात्रा है जिसे आप किसी भी समय जोखिम में डालने के लिए तैयार हैं। उदाहरण के लिए, -#यदि sender लंबे समय तक RAVs प्रदान करना बंद कर देता है और फीस इस -#राशि से अधिक हो जाती है, तो indexer-service sender से क्वेरी स्वीकार करना बंद कर देगा -#जब तक कि फीस को समेकित नहीं किया जाता। -#नोट: राउंडिंग त्रुटियों से बचने के लिए दशमलव मानों के लिए strings का उपयोग करें +#यह वह राशि है जिसे आप किसी भी समय जोखिम में डालने के लिए तैयार हैं। उदाहरण के लिए, +#यदि प्रेषक (sender) RAVs को लंबे समय तक प्रदान करना बंद कर देता है और शुल्क इस +#राशि से अधिक हो जाता है, तो indexer-service प्रेषक से क्वेरी स्वीकार करना बंद कर देगा +#जब तक कि शुल्क एकत्रित नहीं हो जाते। +#नोट: राउंडिंग त्रुटियों को रोकने के लिए दशमलव मूल्यों के 
लिए स्ट्रिंग्स का उपयोग करें #जैसे: #max_amount_willing_to_lose_grt = "0.1" max_amount_willing_to_lose_grt = 20 [tap.sender_aggregator_endpoints] -सभी senders और उनके aggregator endpoints के key-value -नीचे दिया गया यह उदाहरण E&N टेस्टनेट गेटवे के लिए है। +#सभी प्रेषकों और उनके aggregator endpoints की Key-Value +#नीचे दिया गया उदाहरण E&N testnet gateway के लिए है। 0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" ``` diff --git a/website/src/pages/hi/indexing/tooling/firehose.mdx b/website/src/pages/hi/indexing/tooling/firehose.mdx index d2a13417500b..59ee28be31eb 100644 --- a/website/src/pages/hi/indexing/tooling/firehose.mdx +++ b/website/src/pages/hi/indexing/tooling/firehose.mdx @@ -8,7 +8,7 @@ Firehose एक नई तकनीक है जिसे StreamingFast ने The Graph ने Go Ethereum/geth में विलय कर लिया है और [Live Tracer with v1.14.0 release](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0) को अपनाया है। -Firehose extracts, transforms and saves blockchain data in a highly performant file-based strategy. Blockchain developers can then access data extracted by Firehose through binary data streams. Firehose is intended to stand as a replacement for The Graph’s original blockchain data extraction layer. 
+Firehose अत्यधिक प्रदर्शन वाली file-based strategy में blockchain data को निकालता है, परिवर्तित करता है और सहेजता है। Blockchain developers binary data streams के माध्यम से Firehose द्वारा निकाले गए data तक पहुंच सकते हैं। Firehose का उद्देश्य Graph’s original blockchain data extraction layer के प्रतिस्थापन के रूप में खड़ा होना है। ## Firehose Documentation @@ -19,6 +19,6 @@ Firehose का दस्तावेज़ वर्तमान में Stre - Firehose का परिचय पढ़ें [Firehose introduction](https://firehose.streamingfast.io/introduction/firehose-overview) यह जानने के लिए कि यह क्या है और इसे क्यों बनाया गया। - [Prerequisites](https://firehose.streamingfast.io/introduction/prerequisites) के बारे में जानें ताकि Firehose को इंस्टॉल और डिप्लॉय किया जा सके। -### Expand Your Knowledge +### अपने ज्ञान का विस्तार करें - विभिन्न [Firehose components](https://firehose.streamingfast.io/architecture/components) के बारे में जानें। diff --git a/website/src/pages/hi/indexing/tooling/graph-node.mdx b/website/src/pages/hi/indexing/tooling/graph-node.mdx index 9acca5cf6557..fad29c77e35e 100644 --- a/website/src/pages/hi/indexing/tooling/graph-node.mdx +++ b/website/src/pages/hi/indexing/tooling/graph-node.mdx @@ -1,40 +1,40 @@ --- -title: ग्राफ-नोड +title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. 
+Graph Node वह घटक है जो सबग्राफ को अनुक्रमित करता है, और परिणामी डेटा को GraphQL API के माध्यम से क्वेरी करने के लिए उपलब्ध कराता है। इसलिए, यह Indexer स्टैक के लिए केंद्रीय है, और ग्राफ-नोड का सही संचालन एक सफल Indexer चलाने के लिए अत्यंत महत्वपूर्ण है। ग्राफ-नोड का संदर्भ और indexers के लिए उपलब्ध कुछ उन्नत विकल्पों का परिचय प्रदान करता है। विस्तृत दस्तावेज़ और निर्देश [Graph Node repository](https://github.com/graphprotocol/graph-node) में पाए जा सकते हैं। -## ग्राफ-नोड +## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) The Graph Network पर सबग्राफ को indexing करने के लिए रेफरेंस इंप्लीमेंटेशन है, जो ब्लॉकचेन क्लाइंट्स से जुड़ता है, सबग्राफ को indexing करता है और इंडेक्स किए गए डेटा को queries के लिए उपलब्ध कराता है। +[Graph Node](https://github.com/graphprotocol/graph-node) The Graph Network पर सबग्राफ को indexing करने के लिए संदर्भ कार्यान्वयन है, जो ब्लॉकचेन क्लाइंट्स से जुड़ता है, सबग्राफ को indexing करता है और अनुक्रमित डेटा को क्वेरी करने के लिए उपलब्ध कराता है। Graph Node (और पूरा indexer stack) को bare metal पर या एक cloud environment में चलाया जा सकता है। The Graph Protocol की मजबूती के लिए केंद्रीय indexing घटक की यह लचीलापन बहुत महत्वपूर्ण है। इसी तरह, ग्राफ-नोड को [साधन से बनाया जा सकता](https://github.com/graphprotocol/graph-node) है, या indexers [प्रदत्त Docker Images](https://hub.docker.com/r/graphprotocol/graph-node) में से एक का उपयोग कर सकते हैं। ### पोस्टग्रेएसक्यूएल डेटाबेस -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. 
+ग्राफ नोड The Graph नेटवर्क पर सबग्राफ को Indexing करने के लिए एक संदर्भ कार्यान्वयन है, जो ब्लॉकचेन क्लाइंट से जुड़ता है, सबग्राफ को Indexing करता है और इंडेक्स किए गए डेटा को क्वेरी के लिए उपलब्ध कराता है। ### नेटवर्क क्लाइंट किसी नेटवर्क को इंडेक्स करने के लिए, ग्राफ़ नोड को एथेरियम-संगत JSON-RPC के माध्यम से नेटवर्क क्लाइंट तक पहुंच की आवश्यकता होती है। यह आरपीसी एक एथेरियम क्लाइंट से जुड़ सकता है या यह एक अधिक जटिल सेटअप हो सकता है जो कई में संतुलन लोड करता है। -कुछ सबग्राफ को केवल एक पूर्ण नोड की आवश्यकता हो सकती है, लेकिन कुछ में indexing फीचर्स होते हैं, जिनके लिए अतिरिक्त RPC कार्यक्षमता की आवश्यकता होती है। विशेष रूप से, ऐसे सबग्राफ जो indexing के हिस्से के रूप में `eth_calls` करते हैं, उन्हें एक आर्काइव नोड की आवश्यकता होगी जो [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898) को सपोर्ट करता हो। साथ ही, ऐसे सबग्राफ जिनमें `callHandlers` या `blockHandlers` के साथ एक `call` फ़िल्टर हो, उन्हें `trace_filter` सपोर्ट की आवश्यकता होती है ([trace module documentation यहां देखें](https://openethereum.github.io/JSONRPC-trace-module))। +कुछ सबग्राफ को केवल एक पूर्ण नोड की आवश्यकता हो सकती है, जबकि कुछ में अतिरिक्त RPC कार्यक्षमता की आवश्यकता होती है जो indexing सुविधाओं के लिए आवश्यक होती है। विशेष रूप से, ऐसे सबग्राफ जो indexing के हिस्से के रूप में `eth_calls` करते हैं, उन्हें एक आर्काइव नोड की आवश्यकता होगी जो [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898) का समर्थन करता है, और ऐसे सबग्राफ जिनमें `callHandlers` या `blockHandlers` हैं जिनमें `call` फ़िल्टर है, उन्हें `trace_filter` समर्थन की आवश्यकता होती है ([यहाँ ट्रेस मॉड्यूल दस्तावेज़ देखें](https://openethereum.github.io/JSONRPC-trace-module)). 
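EIP-1898 समर्थन को स्पष्ट करने के लिए, नीचे एक उदाहरण `eth_call` JSON-RPC अनुरोध है जिसमें block parameter के रूप में block number की जगह एक `blockHash` object भेजा गया है (यहाँ address, calldata और hash केवल placeholder हैं, वास्तविक मान नहीं):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_call",
  "params": [
    { "to": "0x1234567890123456789012345678901234567890", "data": "0x18160ddd" },
    { "blockHash": "0x00000000000000000000000000000000000000000000000000000000deadbeef" }
  ]
}
```

यदि नोड EIP-1898 का समर्थन करता है, तो वह उस विशिष्ट block पर call का परिणाम लौटाएगा; अन्यथा अनुरोध error के साथ विफल हो जाएगा।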
**नेटवर्क फायरहोज़** - फायरहोज़ एक gRPC सेवा है जो ब्लॉक्स का क्रमबद्ध, फिर भी फोर्क-अवेयर स्ट्रीम प्रदान करती है। इसे The Graph के कोर डेवलपर्स द्वारा बड़े पैमाने पर प्रभावी indexing का समर्थन करने के लिए विकसित किया गया है। यह वर्तमान में Indexer के लिए अनिवार्य नहीं है, लेकिन Indexers को इस तकनीक से परिचित होने के लिए प्रोत्साहित किया जाता है ताकि वे नेटवर्क के पूर्ण समर्थन के लिए तैयार रहें। फायरहोज़ के बारे में अधिक जानें [यहां](https://firehose.streamingfast.io/)। ### आईपीएफएस नोड्स -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +सबग्राफ तैनाती मेटाडेटा IPFS नेटवर्क पर संग्रहीत होता है। ग्राफ नोड मुख्य रूप से सबग्राफ तैनाती के दौरान IPFS नोड तक पहुँचता है ताकि सबग्राफ मैनिफेस्ट और सभी लिंक की गई फ़ाइलों को प्राप्त कर सके। नेटवर्क Indexer को अपने स्वयं के IPFS नोड की मेज़बानी करने की आवश्यकता नहीं है। नेटवर्क के लिए एक IPFS नोड यहाँ होस्ट किया गया है: https://ipfs.network.thegraph.com. ### प्रोमेथियस मेट्रिक्स सर्वर -To enable monitoring and reporting, Graph Node can optionally log metrics to a Prometheus metrics server. +Monitoring और reporting को enable करने के लिए, Graph Node वैकल्पिक रूप से metrics को Prometheus metrics server पर log कर सकता है। -### Getting started from source +### सोर्स से शुरू करना -#### Install prerequisites +#### आवश्यक पूर्वापेक्षाएँ स्थापित करें - **Rust** @@ -42,15 +42,15 @@ To enable monitoring and reporting, Graph Node can optionally log metrics to a P - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed.
+- **उबंटू उपयोगकर्ताओं के लिए अतिरिक्त आवश्यकताएँ** - उबंटू पर एक ग्राफ-नोड चलाने के लिए कुछ अतिरिक्त पैकेजों की आवश्यकता हो सकती है। ```sh sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### सेटअप -1. Start a PostgreSQL database server +1. PostgreSQL डेटाबेस सर्वर शुरू करें ```sh initdb -D .postgres @@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. [ग्राफ-नोड](https://github.com/graphprotocol/graph-node) रिपोजिटरी को क्लोन करें और स्रोत को बनाने के लिए `cargo build` कमांड चलाएँ। -3. Now that all the dependencies are setup, start the Graph Node: +3. अब जब सभी डिपेंडेंसीज़ सेटअप हो गई हैं, तो ग्राफ नोड शुरू करें: ```sh cargo run -p graph-node --release -- \ @@ -69,27 +69,27 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -### Getting started with Kubernetes +### Kubernetes के साथ शुरुआत करना Kubernetes का एक पूर्ण उदाहरण कॉन्फ़िगरेशन [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s) में पाया जा सकता है। ### Ports -When it is running Graph Node exposes the following ports: +चलते समय Graph Node निम्नलिखित ports को expose करता है: -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| पोर्ट | उद्देश्य | रूट्स | CLI आर्गुमेंट्स | पर्यावरण वेरिएबल्स | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8040 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - | > **प्रमुख बात**: सार्वजनिक रूप से पोर्ट्स को एक्सपोज़ करने में सावधानी बरतें - \*\*प्रशासनिक पोर्ट्स को लॉक रखना चाहिए। इसमें ग्राफ नोड JSON-RPC एंडपॉइंट भी शामिल है। ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, ग्राफ-नोड को एकल Graph Node instance, एकल PostgreSQL database, एक IPFS node, और नेटवर्क क्लाइंट्स की आवश्यकता होती है, जैसा कि सबग्राफ द्वारा अनुक्रमण के लिए आवश्यक होता है। इस सेटअप को क्षैतिज रूप से स्केल किया जा सकता है, कई Graph नोड और उन Graph नोड को समर्थन देने के लिए कई डेटाबेस जोड़कर। उन्नत उपयोगकर्ता ग्राफ-नोड की कुछ क्षैतिज स्केलिंग क्षमताओं का लाभ उठाना चाह सकते हैं, साथ ही कुछ अधिक उन्नत कॉन्फ़िगरेशन विकल्पों का भी, `config.toml` फ़ाइल और ग्राफ-नोड के पर्यावरण वेरिएबल्स के माध्यम से। @@ -114,15 +114,15 @@ indexers = [ "<.. 
list of all indexing nodes ..>" ] #### Multiple Graph Nodes -ग्राफ-नोड indexing को क्षैतिज रूप से स्केल किया जा सकता है, कई ग्राफ-नोड instances चलाकर indexing और queries को विभिन्न नोड्स पर विभाजित किया जा सकता है। यह सरलता से किया जा सकता है, जब Graph नोड को एक अलग `node_id` के साथ शुरू किया जाता है (जैसे कि Docker Compose फ़ाइल में), जिसे फिर `config.toml` फ़ाइल में [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion) को निर्दिष्ट करने के लिए और [deployment rules](#deployment-rules) के साथ सबग्राफ को नोड्स के बीच विभाजित करने के लिए इस्तेमाल किया जा सकता है। +ग्राफ नोड indexing क्षैतिज रूप से स्केल कर सकता है, विभिन्न नोड्स पर indexing और क्वेरी को विभाजित करने के लिए ग्राफ नोड के कई उदाहरण चलाते हुए। यह सरलता से किया जा सकता है, जब ग्राफ नोड्स को स्टार्टअप पर विभिन्न `node_id` के साथ कॉन्फ़िगर किया जाता है (जैसे, डॉकर कंपोज़ फ़ाइल में), जिसे फिर `config.toml` फ़ाइल में [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), और [deployment rules](#deployment-rules) के साथ नोड्स के बीच सबग्राफ विभाजित करने के लिए उपयोग किया जा सकता है। > ध्यान दें कि एक ही डेटाबेस का उपयोग करने के लिए कई ग्राफ़ नोड्स को कॉन्फ़िगर किया जा सकता है, जिसे स्वयं शार्डिंग के माध्यम से क्षैतिज रूप से बढ़ाया जा सकता है। #### Deployment rules -यहां कई Graph नोड दिए गए हैं, इसलिए नए सबग्राफ की तैनाती का प्रबंधन करना आवश्यक है ताकि एक ही subgraph को दो विभिन्न नोड द्वारा इंडेक्स न किया जाए, क्योंकि इससे टकराव हो सकता है। यह deployment नियमों का उपयोग करके किया जा सकता है, जो यह भी निर्दिष्ट कर सकते हैं कि यदि डेटाबेस sharding का उपयोग किया जा रहा है, तो subgraph का डेटा किस `shard` में स्टोर किया जाना चाहिए। Deployment नियम subgraph के नाम और उस नेटवर्क पर मिलान कर सकते हैं जिसमें तैनाती indexing हो रही है, ताकि निर्णय लिया जा सके। +कई Graph नोड को देखते हुए, नए सबग्राफ की तैनाती का प्रबंधन करना आवश्यक है ताकि एक ही सबग्राफ दो अलग-अलग नोड्स द्वारा अनुक्रमित न किया जाए, जिससे टकराव हो सकता है। इसे तैनाती नियमों 
का उपयोग करके किया जा सकता है, जो यह भी निर्दिष्ट कर सकते हैं कि यदि डेटाबेस शार्डिंग का उपयोग किया जा रहा है, तो एक सबग्राफ के डेटा को किस `shard` में संग्रहीत किया जाना चाहिए। तैनाती नियम सबग्राफ के नाम और उस नेटवर्क पर मेल खा सकते हैं जिसमें तैनाती indexing हो रही है ताकि निर्णय लिया जा सके। -Example deployment rule configuration: +उदाहरण deployment rule configuration: ```toml [deployment] @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -154,30 +154,30 @@ indexers = [ #### Dedicated query nodes -Nodes can be configured to explicitly be query nodes by including the following in the configuration file: +Configuration file में निम्नलिखित को शामिल करके nodes को स्पष्ट रूप से query nodes के रूप में configure किया जा सकता है: ```toml [general] query = "" ``` -Any node whose --node-id matches the regular expression will be set up to only respond to queries. +कोई भी node जिसका --node-id regular expression से मेल खाता है, केवल प्रश्नों का जवाब देने के लिए set किया जाएगा। -#### Database scaling via sharding +#### Sharding के माध्यम से Database scaling -For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard.
+For most use cases, एक एकल Postgres database graph-node उदाहरण का support करने के लिए sufficient है। जब एक graph-node उदाहरण single Postgres database से आगे निकल जाता है, तो graph-node के data के भंडारण को कई Postgres databases में split करना possible है। सभी database मिलकर graph-node instance का store बनाते हैं। प्रत्येक personal database को shard कहा जाता है। -Shard का उपयोग subgraph deployments को कई डेटाबेस में विभाजित करने के लिए किया जा सकता है, और प्रतिकृति का उपयोग करके query लोड को डेटाबेस में फैलाने के लिए भी किया जा सकता है। इसमें यह कॉन्फ़िगर करना शामिल है कि प्रत्येक डेटाबेस के लिए प्रत्येक `ग्राफ-नोड` को अपने कनेक्शन पूल में कितने उपलब्ध डेटाबेस कनेक्शन रखने चाहिए। जैसे-जैसे अधिक सबग्राफ को index किया जा रहा है, यह अधिक महत्वपूर्ण होता जा रहा है। +Shards का उपयोग कई डेटाबेस में सबग्राफ डिप्लॉयमेंट को विभाजित करने के लिए किया जा सकता है, और साथ ही प्रतिकृतियों (replicas) का उपयोग करके क्वेरी लोड को डेटाबेस में वितरित करने के लिए भी किया जा सकता है। इसमें प्रत्येक `graph-node` के लिए प्रत्येक डेटाबेस में कनेक्शन पूल में रखे जाने वाले उपलब्ध डेटाबेस कनेक्शनों की संख्या को कॉन्फ़िगर करना शामिल है, जो कि जैसे-जैसे अधिक सबग्राफ इंडेक्स किए जा रहे हैं, उतना ही महत्वपूर्ण हो जाता है। शेयरिंग तब उपयोगी हो जाती है जब आपका मौजूदा डेटाबेस ग्राफ़ नोड द्वारा डाले गए भार के साथ नहीं रह सकता है, और जब डेटाबेस का आकार बढ़ाना संभव नहीं होता है। -> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. 
+> यह सामान्यतः बेहतर होता है कि किसी एक डेटाबेस को जितना संभव हो उतना बड़ा बनाया जाए, इससे पहले कि shard शुरू की जाए। एक अपवाद यह है जब क्वेरी ट्रैफिक विभिन्न सबग्राफ के बीच बहुत असमान रूप से विभाजित होता है; ऐसे मामलों में, यदि उच्च-वॉल्यूम सबग्राफ को एक shard में रखा जाए और बाकी सब कुछ दूसरे shard में, तो यह काफी मदद कर सकता है क्योंकि इस सेटअप से यह संभावना बढ़ जाती है कि उच्च-वॉल्यूम सबग्राफ के लिए आवश्यक डेटा डेटाबेस-आंतरिक कैश में बना रहे और कम-वॉल्यूम सबग्राफ के कम आवश्यक डेटा द्वारा प्रतिस्थापित न हो। -In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. 
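ऊपर वर्णित sharding और connection pool विन्यास `config.toml` में कुछ इस तरह दिख सकता है (shard नाम, connection strings और pool आकार केवल उदाहरण के तौर पर माने गए हैं; सटीक syntax के लिए graph-node का store configuration दस्तावेज़ देखें):

```toml
[store]
[store.primary]
connection = "postgresql://graph:password@primary.db.example/graph"
pool_size = 400

[store.vip]
connection = "postgresql://graph:password@vip.db.example/graph"
pool_size = 50
```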
+Connections configure करने के मामले में, postgresql.conf में max_connections को 400 (या शायद 200) पर set करें और store_connection_wait_time_ms और store_connection_checkout_count Prometheus metrics देखें। Noticeable wait times (5ms से ऊपर कुछ भी) एक संकेत है कि बहुत कम connections उपलब्ध हैं; high wait times database के बहुत busy होने (जैसे high CPU load) के कारण भी हो सकते हैं। हालाँकि यदि database otherwise stable लगता है, तो high wait times connections की संख्या बढ़ाने की आवश्यकता का संकेत देते हैं। configuration में, प्रत्येक graph-node instance कितने connections का उपयोग कर सकता है, यह एक upper limit है, और Graph Node connections को खुला नहीं रखेगा यदि उनकी आवश्यकता नहीं है। [यहाँ](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases) स्टोर कॉन्फ़िगरेशन के बारे में और पढ़ें। -#### Dedicated block ingestion +#### समर्पित ब्लॉक अंतर्ग्रहण यदि कई नोड्स कॉन्फ़िगर किए गए हैं, तो यह आवश्यक होगा कि एक नोड निर्दिष्ट किया जाए जो नए ब्लॉक्स के इनजेशन के लिए जिम्मेदार हो, ताकि सभी कॉन्फ़िगर किए गए इंडेक्स नोड्स chain हेड को बार-बार पूछताछ न करें। इसे `chains` नेमस्पेस के हिस्से के रूप में किया जाता है, जहां ब्लॉक इनजेशन के लिए उपयोग किए जाने वाले `node_id` को निर्दिष्ट किया जाता है: @@ -186,13 +186,13 @@ In terms of configuring connections, start with max_connections in postgresql.co ingestor = "block_ingestor_node" ``` -#### Supporting multiple networks +#### कई networks का समर्थन करना -The Graph Protocol उन नेटवर्क्स की संख्या बढ़ा रहा है जो indexing रिवार्ड्स के लिए सपोर्टेड हैं, और ऐसे कई सबग्राफ हैं जो अनसपोर्टेड नेटवर्क्स को indexing कर रहे हैं जिन्हें एक indexer प्रोसेस करना चाहेगा। `config.toml` फ़ाइल अभिव्यक्त और लचीली कॉन्फ़िगरेशन की अनुमति देती है: +The Graph Protocol उन नेटवर्क की संख्या बढ़ा रहा है जिन्हें Indexing पुरस्कारों के लिए समर्थित किया गया है, और कई ऐसे सबग्राफ मौजूद हैं जो असमर्थित नेटवर्क को indexing कर रहे हैं जिन्हें Indexers प्रोसेस करना चाहेंगे। `config.toml` फ़ाइल अभिव्यंजक और लचीले
विन्यास की अनुमति देती है: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). -- Additional provider details, such as features, authentication and the type of provider (for experimental Firehose support) +- Additional provider details, जैसे सुविधाएँ, authentication और provider का प्रकार (for experimental Firehose support) `[chains]` अनुभाग उन Ethereum प्रदाताओं को नियंत्रित करता है जिनसे ग्राफ-नोड कनेक्ट होता है और जहाँ प्रत्येक chain के लिए ब्लॉक और अन्य मेटाडेटा संग्रहीत होते हैं। निम्नलिखित उदाहरण दो chain, mainnet और kovan को कॉन्फ़िगर करता है, जहाँ mainnet के लिए ब्लॉक vip shard में संग्रहीत होते हैं और kovan के लिए ब्लॉक primary shard में संग्रहीत होते हैं। mainnet chain दो अलग-अलग प्रदाताओं का उपयोग कर सकती है, जबकि kovan के पास केवल एक प्रदाता है। @@ -225,11 +225,11 @@ provider = [ { label = "kovan", url = "http://..", features = [] } ] ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +एक चालू Graph Node (या कई Graph Nodes!) 
को चलाने के बाद, अगली चुनौती उन Graph Nodes पर तैनात किए गए सबग्राफ को प्रबंधित करने की होती है। ग्राफ-नोड विभिन्न टूल्स प्रदान करता है जो सबग्राफ के प्रबंधन में मदद करते हैं। #### लॉगिंग -ग्राफ-नोड के log डिबगिंग और ग्राफ-नोड और विशिष्ट सबग्राफ के ऑप्टिमाइजेशन के लिए उपयोगी जानकारी प्रदान कर सकते हैं। ग्राफ-नोड विभिन्न log स्तरों का समर्थन करता है via `GRAPH_LOG` पर्यावरण चर, जिनमें निम्नलिखित स्तर होते हैं: error, warn, info, debug या trace। +ग्राफ-नोड के लॉग्स ग्राफ-नोड और विशिष्ट सबग्राफ की डिबगिंग और ऑप्टिमाइज़ेशन के लिए उपयोगी जानकारी प्रदान कर सकते हैं। ग्राफ-नोड `GRAPH_LOG` एनवायरमेंट वेरिएबल के माध्यम से विभिन्न लॉग स्तरों का समर्थन करता है, जिनमें निम्नलिखित स्तर शामिल हैं: error, warn, info, debug या trace। GraphQL queries कैसे चल रही हैं, इस बारे में अधिक विवरण प्राप्त करने के लिए `GRAPH_LOG_QUERY_TIMING` को `gql` पर सेट करना उपयोगी हो सकता है (हालांकि इससे बड़ी मात्रा में लॉग उत्पन्न होंगे)। @@ -247,64 +247,64 @@ The graphman कमांड आधिकारिक कंटेनरों `graphman` कमांड्स का पूरा दस्तावेज़ ग्राफ नोड रिपॉजिटरी में उपलब्ध है। ग्राफ नोड `/docs` में [/docs/graphman.md](https://github.com/graphprotocol/ग्राफ-नोड/blob/master/docs/graphman.md) देखें। -### सबग्राफ के साथ काम करना +### Subgraph के साथ कार्य करना #### अनुक्रमण स्थिति एपीआई -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. 
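उदाहरण के लिए, indexing status API से सभी deployments की स्थिति इस तरह की GraphQL query से पूछी जा सकती है (field नाम schema के संस्करण पर निर्भर करते हैं; सटीक fields के लिए graph-node repository में status schema देखें):

```graphql
{
  indexingStatuses {
    subgraph
    synced
    health
    fatalError {
      message
    }
    chains {
      network
      latestBlock {
        number
      }
    }
  }
}
```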
+डिफ़ॉल्ट रूप से पोर्ट 8030/graphql पर उपलब्ध, indexing स्थिति API विभिन्न सबग्राफ के लिए indexing स्थिति की जाँच करने, proofs of indexing की जाँच करने, सबग्राफ सुविधाओं का निरीक्षण करने और अधिक के लिए कई तरीकों को उजागर करता है। पूर्ण स्कीमा [यहां](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql) उपलब्ध है। #### Indexing performance -There are three separate parts of the indexing process: +Indexing process के तीन अलग-अलग भाग हैं: -- Fetching events of interest from the provider +- Provider से रुचि के event लाए जा रहे हैं - उपयुक्त संचालकों के साथ घटनाओं को संसाधित करना (इसमें राज्य के लिए श्रृंखला को कॉल करना और स्टोर से डेटा प्राप्त करना शामिल हो सकता है) -- Writing the resulting data to the store +- Resulting data को store पर लिखना -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +ये चरण पाइपलाइन किए गए हैं (अर्थात वे समानांतर रूप से निष्पादित किए जा सकते हैं), लेकिन वे एक-दूसरे पर निर्भर हैं। जहाँ सबग्राफ को इंडेक्स करने में धीमापन होता है, वहाँ इसकी मूल वजह विशिष्ट सबग्राफ पर निर्भर करेगी। Common causes of indexing slowness: - Chain से प्रासंगिक आयोजन खोजने में लगने वाला समय (विशेष रूप से कॉल handler धीमे हो सकते हैं, क्योंकि ये `trace_filter` पर निर्भर करते हैं)। - Handler के हिस्से के रूप में बड़ी संख्या में `eth_calls` करना। -- A large amount of store interaction during execution -- A large amount of data to save to the store -- A large number of events to process -- Slow database connection time, for crowded nodes -- The provider itself falling behind the chain head -- Slowness in fetching new receipts at the chain head from the provider +- Execution के दौरान बड़ी मात्रा में store interaction +- Store में सहेजने के लिए बड़ी मात्रा में data +- Process करने के लिए बड़ी संख्या में events +- भीड़भाड़ वाले nodes के लिए Slow database connection समय +- Provider itself 
chain head के पीछे पड़ रहा है +- Provider से chain head पर नई receipt प्राप्त करने में Slowness -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +सबग्राफ Indexing मैट्रिक्स Indexing की धीमी गति के मूल कारण का निदान करने में मदद कर सकते हैं। कुछ मामलों में, समस्या स्वयं सबग्राफ में होती है, लेकिन अन्य मामलों में, बेहतर नेटवर्क प्रदाता, कम डेटाबेस प्रतिस्पर्धा और अन्य कॉन्फ़िगरेशन सुधार Indexing प्रदर्शन को उल्लेखनीय रूप से बेहतर बना सकते हैं। -#### विफल सबग्राफ +#### असफल Subgraph -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +Indexing के दौरान Subgraph असफल हो सकते हैं, यदि उन्हें अप्रत्याशित डेटा मिलता है, कोई घटक अपेक्षित रूप से कार्य नहीं कर रहा हो, या यदि event handlers या configuration में कोई बग हो। असफलता के दो सामान्य प्रकार हैं: -- Deterministic failures: these are failures which will not be resolved with retries +- Deterministic failures: ये ऐसी failures हैं जिन्हें retries से हल नहीं किया जा सकता है - गैर-नियतात्मक विफलताएँ: ये प्रदाता के साथ समस्याओं या कुछ अप्रत्याशित ग्राफ़ नोड त्रुटि के कारण हो सकती हैं। जब एक गैर-नियतात्मक विफलता होती है, तो ग्राफ़ नोड समय के साथ पीछे हटते हुए विफल हैंडलर को फिर से प्रयास करेगा। -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required.
+कुछ मामलों में, विफलता को Indexer द्वारा हल किया जा सकता है (उदाहरण के लिए, यदि त्रुटि सही प्रकार के provider की अनुपस्थिति के कारण है, तो आवश्यक provider जोड़ने से Indexing जारी रह सकती है)। हालाँकि, अन्य मामलों में, सबग्राफ कोड में परिवर्तन आवश्यक होता है। -> निश्चितात्मक विफलताएँ "अंतिम" मानी जाती हैं, जिनके लिए विफल ब्लॉक के लिए एक Proof of Indexing उत्पन्न किया जाता है, जबकि अनिर्णायक विफलताएँ नहीं होतीं, क्योंकि Subgraph "अविफल" हो सकता है और indexing जारी रख सकता है। कुछ मामलों में, अनिर्णायक लेबल गलत होता है, और Subgraph कभी भी त्रुटि को पार नहीं कर पाएगा; ऐसी विफलताओं को ग्राफ नोड रिपॉजिटरी पर मुद्दों के रूप में रिपोर्ट किया जाना चाहिए। +> निर्धारित विफलताओं को "अंतिम" माना जाता है, जिसमें असफल ब्लॉक के लिए एक Proof of Indexing उत्पन्न किया जाता है, जबकि अनिर्धारित विफलताओं को ऐसा नहीं माना जाता है, क्योंकि सबग्राफ संभवतः "असफल" होने से उबरकर पुनः Indexing जारी रख सकता है। कुछ मामलों में, अनिर्धारित लेबल गलत होता है, और सबग्राफ कभी भी इस त्रुटि को पार नहीं कर पाता; ऐसी विफलताओं को ग्राफ नोड रिपॉज़िटरी पर समस्याओं के रूप में रिपोर्ट किया जाना चाहिए। #### कैश को ब्लॉक और कॉल करें -ग्राफ-नोड कुछ डेटा को स्टोर में कैश करता है ताकि प्रोवाइडर से फिर से प्राप्त करने की आवश्यकता न हो। ब्लॉक्स को कैश किया जाता है, साथ ही `eth_calls` के परिणाम (जो कि एक विशिष्ट ब्लॉक से कैश किए जाते हैं)। यह कैशिंग "थोड़े बदले हुए subgraph" के दौरान indexing की गति को नाटकीय रूप से बढ़ा सकती है। +ग्राफ-नोड कुछ डेटा को स्टोर में कैश करता है ताकि प्रोवाइडर से पुनः प्राप्त करने से बचा जा सके। ब्लॉक्स को कैश किया जाता है, जैसे कि `eth_calls` के परिणाम (जिसे एक विशिष्ट ब्लॉक के रूप में कैश किया जाता है)। यह कैशिंग "resyncing" के दौरान थोड़ा बदले हुए सबग्राफ की indexing स्पीड को नाटकीय रूप से बढ़ा सकती है। -यदि कभी Ethereum नोड ने किसी समय अवधि के लिए गलत डेटा प्रदान किया है, तो वह कैश में जा सकता है, जिसके परिणामस्वरूप गलत डेटा या विफल सबग्राफ हो सकते हैं। इस स्थिति में, Indexer `graphman` का उपयोग करके ज़हरीले कैश को हटा सकते हैं, और फिर प्रभावित सबग्राफ को रीवाइंड कर सकते हैं, जो फिर 
(आशा है) स्वस्थ प्रदाता से ताज़ा डेटा प्राप्त करेंगे। +हालांकि, कुछ मामलों में, यदि कोई Ethereum नोड कुछ समय के लिए गलत डेटा प्रदान करता है, तो वह कैश में आ सकता है, जिससे गलत डेटा या असफल सबग्राफ हो सकते हैं। इस स्थिति में, Indexers `graphman` का उपयोग करके दूषित कैश को साफ कर सकते हैं और फिर प्रभावित सबग्राफको पुनः पीछे ले जा सकते हैं, जिससे वे (उम्मीद है) स्वस्थ प्रदाता से नया डेटा प्राप्त कर सकें। -If a block cache inconsistency is suspected, such as a tx receipt missing event: +यदि एक block cache inconsistency का संदेह है, जैसे कि tx receipt missing event: 1. `graphman chain list` का उपयोग करके chain का नाम पता करें। 2. `graphman chain check-blocks by-number ` यह जांच करेगा कि क्या कैश किया हुआ ब्लॉक प्रदाता से मेल खाता है, और यदि यह मेल नहीं खाता है तो ब्लॉक को कैश से हटा देगा। 1. यदि कोई अंतर है, तो पूरे कैश को `graphman chain truncate ` के साथ हटाना अधिक सुरक्षित हो सकता है। 2. यदि ब्लॉक प्रदाता से मेल खाता है, तो समस्या को सीधे प्रदाता के विरुद्ध डिबग किया जा सकता है। -#### Querying issues and errors +#### Issues और errors को query करना -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. 
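Query load को indexing से अलग करने के लिए replicas को `config.toml` में प्रति shard कॉन्फ़िगर किया जा सकता है; नीचे केवल एक अनुमानित स्केच है (connection strings placeholder हैं, और `weight` मुख्य store तथा replica के बीच query वितरण को नियंत्रित करता है; सटीक syntax के लिए graph-node का store configuration दस्तावेज़ देखें):

```toml
[store.primary]
connection = "postgresql://graph:password@primary.db.example/graph"
weight = 1
pool_size = 400

[store.primary.replicas.repl1]
connection = "postgresql://graph:password@replica1.db.example/graph"
weight = 1
pool_size = 400
```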
+एक बार जब सबग्राफ को इंडेक्स कर लिया जाता है, तो Indexers इससे जुड़े समर्पित क्वेरी एंडपॉइंट के माध्यम से क्वेरी प्रदान करने की उम्मीद कर सकते हैं। यदि Indexer महत्वपूर्ण मात्रा में क्वेरी सर्व करना चाहता है, तो एक समर्पित क्वेरी नोड की सिफारिश की जाती है, और बहुत अधिक क्वेरी वॉल्यूम के मामले में, Indexers को प्रतिकृति shard कॉन्फ़िगर करने पर विचार करना चाहिए ताकि क्वेरीज़ Indexing प्रक्रिया को प्रभावित न करें। हालाँकि, एक समर्पित क्वेरी नोड और प्रतिकृतियों के साथ भी, कुछ प्रश्नों को निष्पादित करने में लंबा समय लग सकता है, और कुछ मामलों में मेमोरी उपयोग में वृद्धि होती है और अन्य उपयोगकर्ताओं के लिए क्वेरी समय को नकारात्मक रूप से प्रभावित करती है। @@ -316,17 +316,17 @@ Once a subgraph has been indexed, indexers can expect to serve queries via the s ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +समस्याग्रस्त क्वेरीज़ अक्सर दो तरीकों से सामने आती हैं। कुछ मामलों में, उपयोगकर्ता स्वयं रिपोर्ट करते हैं कि कोई विशेष क्वेरी धीमी है। ऐसे में चुनौती यह होती है कि धीमेपन के कारण का निदान किया जाए - यह पता लगाया जाए कि यह कोई सामान्य समस्या है या किसी विशेष सबग्राफ या क्वेरी से संबंधित है। और फिर, यदि संभव हो, तो इसे हल किया जाए। अन्य मामलों में, क्वेरी नोड पर ट्रिगर उच्च मेमोरी उपयोग हो सकता है, इस मामले में सबसे पहले समस्या उत्पन्न करने वाली क्वेरी की पहचान करना चुनौती है। Indexers [qlog](https://github.com/graphprotocol/qlog/) का उपयोग करके ग्राफ-नोड के query logs को प्रोसेस और सारांशित कर सकते हैं। धीमे queries की पहचान और डिबग करने में मदद के लिए `GRAPH_LOG_QUERY_TIMING` को भी सक्षम किया जा सकता है। -Given a slow query, indexers have a few options. Of course they can alter their cost model, to significantly increase the cost of sending the problematic query. 
This may result in a reduction in the frequency of that query. However this often doesn't resolve the root cause of the issue. +एक धीमी query को देखते हुए, Indexer के पास कुछ विकल्प होते हैं। वे अपने cost model को बदलकर problematic query भेजने की लागत काफी बढ़ा सकते हैं। इसके परिणामस्वरूप उस query की frequency में कमी हो सकती है। हालाँकि यह अक्सर issue के मूल कारण को हल नहीं करता। ##### Account-like optimisation -Database tables that store entities seem to generally come in two varieties: 'transaction-like', where entities, once created, are never updated, i.e., they store something akin to a list of financial transactions, and 'account-like' where entities are updated very often, i.e., they store something like financial accounts that get modified every time a transaction is recorded. Account-like tables are characterized by the fact that they contain a large number of entity versions, but relatively few distinct entities. Often, in such tables the number of distinct entities is 1% of the total number of rows (entity versions) +जिन database tables में entities store होती हैं, वे आम तौर पर दो प्रकार की होती हैं: 'transaction-like', जहाँ entities, एक बार बनने के बाद, कभी updated नहीं होतीं, यानी वे financial transactions की सूची जैसा कुछ store करती हैं, और 'account-like', जहाँ entities बहुत बार updated होती हैं, यानी वे financial accounts जैसा कुछ store करती हैं जो हर बार transaction record होने पर बदल जाते हैं। Account-like tables की विशेषता यह है कि उनमें entity versions की संख्या बड़ी होती है, लेकिन अलग-अलग entities अपेक्षाकृत कम होती हैं।
बारंबार, ऐसी तालिकाओं में अलग-अलग entities की संख्या, पंक्तियों (entity versions) की कुल संख्या का 1% होती है। अकाउंट-जैसी तालिकाओं के लिए, `ग्राफ-नोड` ऐसे queries जनरेट कर सकता है जो इस बात का लाभ उठाते हैं कि Postgres इतनी तेज़ दर से बदलते डेटा को स्टोर करते समय उसे कैसे प्रबंधित करता है। खासतौर पर, हाल के ब्लॉक्स के सभी संस्करण ऐसी तालिका के कुल स्टोरेज के एक छोटे से हिस्से में होते हैं। @@ -336,10 +336,10 @@ Database tables that store entities seem to generally come in two varieties: 'tr एक बार जब यह तय कर लिया जाता है कि एक तालिका खाता जैसी है, तो `graphman stats account-like .
` चलाने से उस तालिका के खिलाफ queries के लिए खाता जैसी अनुकूलन सक्षम हो जाएगा। इस अनुकूलन को फिर से बंद किया जा सकता है `graphman stats account-like --clear .
` के साथ। queries नोड्स को यह नोटिस करने में 5 मिनट तक का समय लग सकता है कि अनुकूलन को चालू या बंद किया गया है। अनुकूलन को चालू करने के बाद, यह सत्यापित करना आवश्यक है कि बदलाव वास्तव में उस तालिका के लिए queries को धीमा नहीं कर रहा है। यदि आपने Grafana को Postgres की निगरानी के लिए कॉन्फ़िगर किया है, तो धीमी queries `pg_stat_activity` में बड़ी संख्या में दिखाई देंगी, जो कई सेकंड ले रही हैं। ऐसे में, अनुकूलन को फिर से बंद करने की आवश्यकता होती है। -Uniswap- जैसे सबग्राफ़ के लिए, `pair` और `token` तालिकाएँ इस अनुकूलन के प्रमुख उम्मीदवार हैं, और ये डेटाबेस लोड पर नाटकीय प्रभाव डाल सकते हैं। +Uniswap जैसी Subgraphs के लिए, `pair` और `token` टेबल इस ऑप्टिमाइज़ेशन के लिए प्रमुख उम्मीदवार हैं, और डेटाबेस लोड पर इसका नाटकीय प्रभाव पड़ सकता है। #### सबग्राफ हटाना -> This is new functionality, which will be available in Graph Node 0.29.x +> यह नई functionality है, जो Graph Node 0.29.x में उपलब्ध होगी -किसी बिंदु पर एक indexer एक दिए गए subgraph को हटाना चाहता है। इसे आसानी से `graphman drop` के माध्यम से किया जा सकता है, जो एक deployment और उसके सभी indexed डेटा को हटा देता है। डिप्लॉयमेंट को subgraph नाम, एक IPFS हैश `Qm..`, या डेटाबेस नामस्थान `sgdNNN` के रूप में निर्दिष्ट किया जा सकता है। आगे की दस्तावेज़ीकरण यहां उपलब्ध है [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop)। +किसी समय, एक Indexer किसी दिए गए Subgraph को हटाना चाह सकता है। यह आसानी से `graphman drop` के माध्यम से किया जा सकता है, जो एक deployment और उसके सभी indexed डेटा को हटा देता है। Deployment को या तो सबग्राफ नाम, एक IPFS हैश `Qm..`, या डेटाबेस namespace `sgdNNN` के रूप में निर्दिष्ट किया जा सकता है। आगे का दस्तावेज़ [यहाँ](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) उपलब्ध है। diff --git a/website/src/pages/hi/indexing/tooling/graphcast.mdx index 216fc0a502c5..f4978a7b800d 100644 --- a/website/src/pages/hi/indexing/tooling/graphcast.mdx +++
b/website/src/pages/hi/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ title: Graphcast ग्राफकास्ट एसडीके (सॉफ्टवेयर डेवलपमेंट किट) डेवलपर्स को रेडियो बनाने की अनुमति देता है, जो गपशप-संचालित अनुप्रयोग हैं जो इंडेक्सर्स किसी दिए गए उद्देश्य को पूरा करने के लिए चला सकते हैं। हम निम्नलिखित उपयोग के मामलों के लिए कुछ रेडियो बनाने का भी इरादा रखते हैं (या अन्य डेवलपर्स/टीमों को सहायता प्रदान करते हैं जो रेडियो बनाना चाहते हैं): -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- अन्य इंडेक्सर्स से ताना सिंकिंग सबग्राफ, सबस्ट्रीम और फायरहोज डेटा के लिए नीलामी और समन्वय आयोजित करना। -- सक्रिय क्वेरी एनालिटिक्स पर स्व-रिपोर्टिंग, जिसमें सबग्राफ अनुरोध मात्रा, शुल्क मात्रा आदि शामिल हैं। -- इंडेक्सिंग एनालिटिक्स पर सेल्फ-रिपोर्टिंग, जिसमें सबग्राफ इंडेक्सिंग टाइम, हैंडलर गैस कॉस्ट, इंडेक्सिंग एरर, आदि शामिल हैं। +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. 
- ग्राफ-नोड संस्करण, पोस्टग्रेज संस्करण, एथेरियम क्लाइंट संस्करण, आदि सहित स्टैक जानकारी पर स्व-रिपोर्टिंग। ### और अधिक जानें diff --git a/website/src/pages/hi/resources/_meta-titles.json b/website/src/pages/hi/resources/_meta-titles.json index f5971e95a8f6..dc887c723101 100644 --- a/website/src/pages/hi/resources/_meta-titles.json +++ b/website/src/pages/hi/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "अतिरिक्त भूमिकाएँ", + "migration-guides": "माइग्रेशन मार्गदर्शक" } diff --git a/website/src/pages/hi/resources/benefits.mdx b/website/src/pages/hi/resources/benefits.mdx index cb043820d821..3392b29fe908 100644 --- a/website/src/pages/hi/resources/benefits.mdx +++ b/website/src/pages/hi/resources/benefits.mdx @@ -14,57 +14,57 @@ socialImage: https://thegraph.com/docs/img/seo/benefits.jpg - Significantly lower monthly costs - $0 इंफ्रास्ट्रक्चर सेटअप लागत - सुपीरियर अपटाइम -- Access to hundreds of independent Indexers around the world +- विश्वभर में सैकड़ों स्वतंत्र Indexers तक Access - वैश्विक समुदाय द्वारा 24/7 तकनीकी सहायता ## लाभ समझाया ### कम और अधिक लचीला लागत संरचना -No contracts. No monthly fees. Only pay for the queries you use—with an average cost-per-query of $40 per million queries (~$0.00004 per query). Queries are priced in USD and paid in GRT or credit card. +कोई अनुबंध नहीं। कोई मासिक शुल्क नहीं। केवल उपयोग की गई **queries** के लिए भुगतान करें—औसत लागत $40 प्रति मिलियन **queries** (~$0.00004 प्रति **query**)। **Queries** की कीमत **USD** में होती है और भुगतान **GRT** या **क्रेडिट कार्ड** से किया जा सकता है। -Query costs may vary; the quoted cost is the average at time of publication (March 2024).
+Query लागत भिन्न हो सकती है; उद्धृत लागत प्रकाशन के समय (मार्च 2024) की औसत है। -## Low Volume User (less than 100,000 queries per month) +## कम वॉल्यूम उपयोगकर्ता (100,000 queries प्रति माह से कम) | लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | | :-: | :-: | :-: | | मासिक सर्वर लागत\* | $350 प्रति माह | $0 | -| पूछताछ लागत | $0+ | $0 per month | +| पूछताछ लागत | $0+ | $0 प्रति माह | | इंजीनियरिंग का समय | $ 400 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | -| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | 100,000 (Free Plan) | +| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | 100,000 (नि: शुल्क योजना) | | लागत प्रति क्वेरी | $0 | $0 | -| Infrastructure | केंद्रीकृत | विकेन्द्रीकृत | +| इंफ्रास्ट्रक्चर | केंद्रीकृत | विकेन्द्रीकृत | | भौगोलिक अतिरेक | $750+ प्रति अतिरिक्त नोड | शामिल | | अपटाइम | भिन्न | 99.9%+ | | कुल मासिक लागत | $750+ | $0 | -## Medium Volume User (~3M queries per month) +## मध्यम वॉल्यूम उपयोगकर्ता (~3 मिलियन queries प्रति माह) | लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | | :-: | :-: | :-: | | मासिक सर्वर लागत\* | $350 प्रति माह | $0 | -| पूछताछ लागत | $ 500 प्रति माह | $120 per month | +| पूछताछ लागत | $ 500 प्रति माह | $120 प्रति माह | | इंजीनियरिंग का समय | $800 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | | प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~3,000,000 | | लागत प्रति क्वेरी | $0 | $0.00004 | -| Infrastructure | केंद्रीकृत | विकेन्द्रीकृत | +| इंफ्रास्ट्रक्चर | केंद्रीकृत | विकेन्द्रीकृत | | इंजीनियरिंग खर्च | $ 200 प्रति घंटा | शामिल | | भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | | अपटाइम | भिन्न | 99.9%+ | | कुल मासिक लागत | $1,650+ | $120 | -## High Volume User (~30M queries per month) +## उच्च वॉल्यूम उपयोगकर्ता (~30 मिलियन queries प्रति माह) | लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | | :-: | :-: | :-: | | मासिक सर्वर लागत\* | $1100 प्रति माह, प्रति नोड | $0 | -| पूछताछ लागत | $4000 | 
$1,200 per month | +| पूछताछ लागत | $4000 | $1,200 प्रति माह | | आवश्यक नोड्स की संख्या | 10 | Not applicable | | इंजीनियरिंग का समय | $6,000 or more per month | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | | प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~30,000,000 | | लागत प्रति क्वेरी | $0 | $0.00004 | -| Infrastructure | केंद्रीकृत | विकेन्द्रीकृत | +| इंफ्रास्ट्रक्चर | केंद्रीकृत | विकेन्द्रीकृत | | भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | | अपटाइम | भिन्न | 99.9%+ | | कुल मासिक लागत | $11,000+ | $1,200 | @@ -73,11 +73,12 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar इंजीनियरिंग समय $200 प्रति घंटे की धारणा पर आधारित है -Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. + डेटा उपभोक्ता के लिए लागत को दर्शाता है। निःशुल्क योजना की queries के लिए query fees अभी भी indexers को +भुगतान की जाती है। -एस्टिमेटेड लागत केवल Ethereum Mainnet सबग्राफ़ के लिए है — अन्य नेटवर्कों पर `ग्राफ-नोड` को स्वयं होस्ट करने पर लागत और भी अधिक होती है। कुछ उपयोगकर्ताओं को अपने Subgraph को एक नई संस्करण में अपडेट करने की आवश्यकता हो सकती है। Ethereum गैस शुल्क के कारण, एक अपडेट की लागत लगभग ~$50 है जब लेख लिखा गया था। ध्यान दें कि [Arbitrum](/archived/arbitrum/arbitrum-faq/) पर गैस शुल्क Ethereum mainnet से काफी कम हैं। +Ethereum मेननेट सबग्राफ के लिए अनुमानित लागतें ही दी गई हैं — अन्य नेटवर्क पर `graph-node` को स्वयं होस्ट करने पर लागतें और भी अधिक होती हैं। कुछ उपयोगकर्ताओं को अपने सबग्राफ को नए संस्करण में अपडेट करने की आवश्यकता हो सकती है। Ethereum गैस शुल्क के कारण, एक अपडेट की लागत लेखन के समय लगभग $50 होती है। ध्यान दें कि [Arbitrum](/archived/arbitrum/arbitrum-faq/) पर गैस शुल्क Ethereum मेननेट की तुलना में काफी कम है। -एक सबग्राफ पर क्यूरेटिंग सिग्नल एक वैकल्पिक वन-टाइम, नेट-जीरो कॉस्ट है (उदाहरण के लिए, सिग्नल में $1k को सबग्राफ पर क्यूरेट किया जा सकता है, और बाद में वापस ले लिया जाता है - प्रक्रिया में रिटर्न अर्जित करने की क्षमता के साथ)। 
+किसी Subgraph पर सिग्नल क्यूरेट करना एक वैकल्पिक, एक-बार की, शुद्ध-शून्य लागत वाली प्रक्रिया है (उदाहरण के लिए, $1k का सिग्नल एक सबग्राफ पर क्यूरेट किया जा सकता है, और बाद में वापस लिया जा सकता है—जिसमें संभावित रूप से लाभ अर्जित करने का अवसर हो सकता है)। ## कोई सेटअप लागत नहीं और अधिक परिचालन दक्षता @@ -89,4 +90,4 @@ The Graph का विकेन्द्रीकृत नेटवर्क The Graph Network कम खर्चीला, उपयोग में आसान और बेहतर परिणाम प्रदान करता है, जब की graph-node को लोकल पर चलाने के मुकाबले। -आज ही The Graph Network का उपयोग शुरू करें, और सीखें कि कैसे [अपने subgraph को The Graph के विकेंद्रीकृत नेटवर्क पर प्रकाशित](/subgraphs/quick-start/) करें। +The Graph Network का उपयोग आज ही शुरू करें, और जानें कि अपने Subgraph को The Graph के विकेंद्रीकृत नेटवर्क पर कैसे [प्रकाशित करें](/subgraphs/quick-start/)। diff --git a/website/src/pages/hi/resources/glossary.mdx b/website/src/pages/hi/resources/glossary.mdx index d7c1fd85df2b..1e802e518173 100644 --- a/website/src/pages/hi/resources/glossary.mdx +++ b/website/src/pages/hi/resources/glossary.mdx @@ -2,82 +2,87 @@ title: शब्दकोष --- -- **The Graph**: A decentralized protocol for indexing and querying data. +- **The Graph**: डेटा को अनुक्रमण और क्वेरी करने के लिए एक विकेंद्रीकृत प्रोटोकॉल। -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: डेटा के लिए अनुरोध। The Graph के संदर्भ में, query एक Subgraph से डेटा अनुरोधित करने की प्रक्रिया है, जिसका उत्तर एक Indexer द्वारा दिया जाता है। -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: API के लिए एक query language और मौजूदा डेटा से उन queries को पूरा करने के लिए एक runtime। The Graph, Subgraphs से query करने के लिए GraphQL का उपयोग करता है। -- **Endpoint**: A URL that can be used to query a subgraph.
The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: एक URL जिसका उपयोग किसी Subgraph से query करने के लिए किया जाता है। Subgraph Studio के परीक्षण endpoint का प्रारूप है: `https://api.studio.thegraph.com/query///`। Graph Explorer का endpoint है: `https://gateway.thegraph.com/api//subgraphs/id/`। Graph Explorer endpoint का उपयोग The Graph के विकेंद्रीकृत नेटवर्क पर Subgraphs से query करने के लिए किया जाता है। -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: एक ओपन API जो ब्लॉकचेन से डेटा निकालता है, उसे प्रोसेस करके संग्रहीत करता है ताकि उसे आसानी से GraphQL के माध्यम से query किया जा सके। डेवलपर्स The Graph Network पर Subgraphs बना सकते हैं, डिप्लॉय कर सकते हैं और प्रकाशित कर सकते हैं। एक बार indexing पूरी होने के बाद, कोई भी इस Subgraph को query कर सकता है। -- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: नेटवर्क प्रतिभागी जो ब्लॉकचेन से डेटा को अनुक्रमित करने के लिए अनुक्रमण नोड्स चलाते हैं और GraphQL क्वेरीज़ को सर्व करते हैं। -- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. +- **Indexer राजस्व स्रोत**: Indexer को GRT में दो घटकों के साथ पुरस्कृत किया जाता है: क्वेरी शुल्क रिबेट्स और indexing रिवार्ड्स। - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: नेटवर्क पर queries को संसाधित करने के लिए Subgraph उपभोक्ताओं द्वारा किया गया भुगतान। - 2.
**Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: Subgraphs को index करने के बदले Indexers को मिलने वाले इनाम। Indexing rewards हर साल 3% GRT के नए जारीकरण से उत्पन्न होते हैं। -- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. +- **Indexer's Self-Stake**: GRT की वह राशि जो Indexers विकेन्द्रीकृत नेटवर्क में भाग लेने के लिए स्टेक करते हैं। न्यूनतम 100,000 GRT है, और इसकी कोई ऊपरी सीमा नहीं है। -- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. +- **Delegation Capacity**: GRT की वह अधिकतम मात्रा जो एक Indexer, Delegators से स्वीकार कर सकता है। Indexers केवल अपनी Indexer Self-Stake की 16 गुना तक ही स्वीकार कर सकते हैं, और अतिरिक्त delegation से पुरस्कारों में कमी आती है। उदाहरण के लिए, यदि किसी Indexer की Self-Stake 1M GRT है, तो उनकी Delegation Capacity 16M होगी। हालांकि, Indexers अपनी Self-Stake बढ़ाकर अपनी Delegation Capacity बढ़ा सकते हैं। -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: एक ऐसा Indexer जो उन Subgraph queries के लिए बैकअप के रूप में कार्य करता है जिन्हें नेटवर्क पर अन्य Indexers द्वारा संसाधित नहीं किया जाता। Upgrade Indexer अन्य Indexers के साथ प्रतिस्पर्धा नहीं करता। -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers.
This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: वे नेटवर्क प्रतिभागी जो GRT रखते हैं और इसे Indexers को delegate करते हैं। इससे Indexers को नेटवर्क पर Subgraphs में अपना stake बढ़ाने में मदद मिलती है। बदले में, Delegators को उन Indexing Rewards का एक हिस्सा मिलता है, जो Indexers को Subgraphs प्रोसेस करने के लिए प्राप्त होते हैं। -- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. +- **Delegation Tax**: जब Delegators अपने GRT को Indexers को डेलीगेट करते हैं, तो उन्हें 0.5% शुल्क देना पड़ता है। इस शुल्क के भुगतान के लिए उपयोग किया गया GRT नष्ट (burn) कर दिया जाता है। -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: वे नेटवर्क प्रतिभागी जो उच्च-गुणवत्ता वाले Subgraphs की पहचान करते हैं और उन पर GRT **signal** करके curation shares प्राप्त करते हैं। जब Indexers किसी Subgraph पर query fees का दावा करते हैं, तो उसका 10% उस Subgraph के Curators को वितरित किया जाता है। GRT **signal** की गई राशि और किसी Subgraph को index करने वाले Indexers की संख्या के बीच एक सकारात्मक संबंध होता है। -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: Curators द्वारा Subgraphs पर GRT **signal** करने पर दिया जाने वाला 1% शुल्क। इस शुल्क के रूप में उपयोग किया गया GRT **burn** कर दिया जाता है। -- **Data Consumer**: Any application or user that queries a subgraph.
+- **Data Consumer**: कोई भी एप्लिकेशन या उपयोगकर्ता जो किसी Subgraph से query करता है। -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: वह डेवलपर जो The Graph के विकेंद्रीकृत नेटवर्क पर एक Subgraph बनाता और डिप्लॉय करता है। -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: एक YAML फ़ाइल जो Subgraph की GraphQL **schema**, **data sources**, और अन्य **metadata** को वर्णित करती है। [यहां](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) एक उदाहरण दिया गया है। -- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. +- **Epoch**: नेटवर्क के भीतर समय की एक इकाई। वर्तमान में, एक epoch 6,646 ब्लॉक्स या लगभग 1 दिन के बराबर होता है। -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: एक Indexer अपने कुल GRT **stake** (जिसमें Delegators का stake भी शामिल है) को उन Subgraphs की ओर आवंटित कर सकता है, जो The Graph के विकेंद्रीकृत नेटवर्क पर प्रकाशित किए गए हैं। Allocations के विभिन्न **status** हो सकते हैं: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1.
**Active**: जब कोई allocation ऑनचेन बनाई जाती है, तो उसे **active** माना जाता है। इसे **allocation खोलना** कहा जाता है और यह नेटवर्क को संकेत देता है कि Indexer किसी विशेष Subgraph को सक्रिय रूप से **index** कर रहा है और **queries** को संसाधित कर रहा है। Active allocations, Subgraph पर दिए गए **signal** और आवंटित किए गए **GRT** की मात्रा के अनुपात में **indexing rewards** अर्जित करते हैं। - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: कोई Indexer किसी दिए गए Subgraph पर अर्जित **indexing rewards** का दावा करने के लिए हालिया और मान्य **Proof of Indexing (POI)** जमा कर सकता है। इसे **allocation बंद करना** कहा जाता है। - किसी allocation को बंद करने से पहले, इसे कम से कम **एक epoch** तक खुला रहना आवश्यक है। + - अधिकतम allocation अवधि **28 epochs** होती है। + - यदि कोई Indexer 28 epochs से अधिक समय तक allocation को खुला रखता है, तो इसे **stale allocation** कहा जाता है। + - **Closed** स्थिति में भी, कोई **Fisherman** विवाद खोल सकता है और झूठे डेटा परोसने के लिए Indexer को चुनौती दे सकता है। -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: Subgraphs को बनाने, डिप्लॉय करने और प्रकाशित करने के लिए एक शक्तिशाली dapp। -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. 
If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: The Graph Network में एक भूमिका होती है जिसे वे प्रतिभागी निभाते हैं जो Indexers द्वारा प्रदान किए गए डेटा की सटीकता और अखंडता की निगरानी करते हैं। जब कोई मछुआरा किसी क्वेरी प्रतिक्रिया या POI को गलत मानता है, तो वह Indexer के खिलाफ विवाद शुरू कर सकता है। यदि विवाद मछुआरे के पक्ष में जाता है, तो Indexer को 2.5% उनके स्वयं के स्टेक से काट लिया जाता है। इस राशि का 50% मछुआरे को उनके सतर्कता पुरस्कार के रूप में दिया जाता है, और शेष 50% को नष्ट (बर्न) कर दिया जाता है। यह तंत्र मछुआरों को नेटवर्क की विश्वसनीयता बनाए रखने में मदद करने के लिए प्रोत्साहित करने हेतु डिज़ाइन किया गया है, जिससे यह सुनिश्चित किया जा सके कि Indexers द्वारा प्रदान किया गया डेटा जवाबदेह हो। -- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. +- **Arbitrators**: Arbitrators नेटवर्क प्रतिभागी होते हैं जिन्हें एक गवर्नेंस प्रक्रिया के माध्यम से नियुक्त किया जाता है। Arbitrator की भूमिका indexing और query विवादों के परिणाम का निर्णय लेना होती है। उनका लक्ष्य The Graph Network की उपयोगिता और विश्वसनीयता को अधिकतम करना होता है। -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.
+- **Slashing**: Indexers अपने self-staked GRT का slashing झेल सकते हैं यदि वे गलत POI प्रदान करते हैं या गलत डेटा सर्व करते हैं। Slashing प्रतिशत एक protocol parameter है, जो वर्तमान में एक Indexer के self-stake का 2.5% निर्धारित है। Slashed किए गए GRT का 50% उस Fisherman को जाता है जिसने गलत डेटा या गलत POI को विवादित किया था। बाकी 50% को जला दिया जाता है। -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: Subgraphs को **index** करने के बदले Indexers को मिलने वाले इनाम, जो **GRT** में वितरित किए जाते हैं। -- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. +- **Delegation Rewards**: वे रिवॉर्ड्स जो Delegators को GRT को Indexers को डेलीगेट करने के लिए मिलते हैं। Delegation रिवॉर्ड्स GRT में वितरित किए जाते हैं। -- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. +- **GRT**: The Graph का कार्य उपयोगिता टोकन। GRT नेटवर्क में योगदान देने वाले सहभागियों के लिए आर्थिक प्रोत्साहन प्रदान करता है। -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: जब कोई Indexer अपनी **allocation बंद** करता है और किसी विशेष Subgraph पर अर्जित **indexing rewards** का दावा करना चाहता है, तो उसे एक **वैध और हालिया POI** प्रदान करना आवश्यक होता है। - **Fishermen** Indexer द्वारा प्रस्तुत POI पर विवाद कर सकते हैं।
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. + - यदि विवाद Fisherman के पक्ष में हल होता है, तो संबंधित Indexer का **slashing** किया जाता है। -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Graph Node**: वह **component** जो Subgraphs को **index** करता है और उत्पन्न डेटा को **GraphQL API** के माध्यम से query करने के लिए उपलब्ध कराता है। यह Indexer **stack** का एक केंद्रीय भाग है, और Graph Node का सही संचालन एक सफल Indexer चलाने के लिए आवश्यक है। -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. +- **Indexer Agent**: Indexer **stack** का एक हिस्सा, जो ऑनचेन इंटरैक्शन को सुविधाजनक बनाता है। इसमें नेटवर्क पर **पंजीकरण**, अपने **Graph Node(s)** पर Subgraph **deployments** का प्रबंधन, और **allocations** को संभालना शामिल है। -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **The Graph Client**: एक लाइब्रेरी जो विकेंद्रीकृत तरीके से GraphQL-आधारित dapps बनाने के लिए उपयोग होती है। -- **Graph CLI**: A command line interface tool for building and deploying to The Graph. +- **Graph Explorer**: नेटवर्क प्रतिभागियों के लिए एक **dapp**, जो उन्हें Subgraphs को एक्सप्लोर करने और प्रोटोकॉल के साथ इंटरैक्ट करने की सुविधा देता है। -- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. +- **Graph CLI**: The Graph पर निर्माण और परिनियोजन के लिए एक कमांड लाइन इंटरफ़ेस टूल। -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake.
+- **कूलडाउन अवधि**: वह समय जो बचा है जब तक एक Indexer जिसने अपनी delegation पैरामीटर बदले हैं, उन्हें दोबारा बदल नहीं सकता। -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **L2 Transfer Tools**: स्मार्ट कॉन्ट्रैक्ट और UI, जो नेटवर्क प्रतिभागियों को Ethereum **mainnet** से **Arbitrum One** पर नेटवर्क-संबंधित संपत्तियों को ट्रांसफर करने में सक्षम बनाते हैं। प्रतिभागी **delegated GRT**, **Subgraphs**, **curation shares**, और Indexer का **self-stake** ट्रांसफर कर सकते हैं। -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Updating a Subgraph**: किसी Subgraph के **manifest**, **schema**, या **mappings** में अपडेट करके उसका नया संस्करण जारी करने की प्रक्रिया। + +- **Migrating**: किसी Subgraph के पुराने संस्करण से नए संस्करण में **curation shares** को स्थानांतरित करने की प्रक्रिया (जैसे v0.0.1 से v0.0.2 में अपडेट होने पर)। diff --git a/website/src/pages/hi/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/hi/resources/migration-guides/assemblyscript-migration-guide.mdx index e7e4b62b509e..d3410d7eaad7 100644 --- a/website/src/pages/hi/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/hi/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: असेंबलीस्क्रिप्ट माइग्रेशन गाइड --- -अब तक, सबग्राफ [AssemblyScript के शुरुआती संस्करणों](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) में से एक का उपयोग कर रहे थे (v0.6)। अंततः हमने सबसे [नए उपलब्ध संस्करण](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) के लिए समर्थन जोड़ दिया है! 
🎉 +अब तक, सबग्राफ ने [AssemblyScript के शुरुआती संस्करणों](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) में से एक (v0.6) का उपयोग किया है। आखिरकार, हमने [नवीनतम उपलब्ध संस्करण](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) के लिए समर्थन जोड़ दिया है! 🎉 -यह सबग्राफ डेवलपर्स को एएस भाषा और मानक पुस्तकालय की नई सुविधाओं का उपयोग करने में सक्षम करेगा। +यह सबग्राफ डेवलपर्स को AS भाषा और स्टैंडर्ड लाइब्रेरी की नई विशेषताओं का उपयोग करने में सक्षम बनाएगा। यह मार्गदर्शक उन सभी लोगों के लिए लागू है जो `graph-cli`/`graph-ts` का संस्करण `0.22.0` से कम उपयोग कर रहे हैं। यदि आप पहले से ही इस संस्करण (या उससे उच्च) पर हैं, तो आप पहले से ही AssemblyScript के संस्करण `0.19.10` का उपयोग कर रहे हैं 🙂 -> `0.24.0` संस्करण से, `graph-node` दोनों संस्करणों का समर्थन कर सकता है, यह इस पर निर्भर करता है कि subgraph manifest में कौन सा `apiVersion` निर्दिष्ट किया गया है। +> ध्यान दें: `0.24.0` संस्करण से, `graph-node` दोनों संस्करणों का समर्थन कर सकता है, `apiVersion` के आधार पर, जो सबग्राफ मैनिफेस्ट में निर्दिष्ट होता है। ## विशेषताएँ @@ -44,7 +44,7 @@ title: असेंबलीस्क्रिप्ट माइग्रेश ## कैसे करें अपग्रेड? -1. अपने मानचित्रण `सबग्राफ.yaml` में `apiVersion` को `0.0.6` में बदलें: +1. अपनी `subgraph.yaml` फ़ाइल में `apiVersion` को `0.0.9` में बदलें: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()!
// breaks in runtime if value is null maybeValue.aMethod() ``` -यदि आप अनिश्चित हैं कि किसे चुनना है, तो हम हमेशा सुरक्षित संस्करण का उपयोग करने की सलाह देते हैं। यदि मान मौजूद नहीं है, तो आप अपने सबग्राफ हैंडलर में वापसी के साथ एक शुरुआती if स्टेटमेंट करना चाहते हैं। +अगर आपको यकीन नहीं है कि किसे चुनना है, तो हम हमेशा सुरक्षित संस्करण का उपयोग करने की सलाह देते हैं। यदि मान मौजूद नहीं है, तो आप अपने सबग्राफ के handler में एक प्रारंभिक if स्टेटमेंट के साथ return का उपयोग कर सकते हैं। ### Variable Shadowing @@ -132,7 +132,7 @@ in assembly/index.ts(4,3) ### Null Comparisons -अपने सबग्राफ पर अपग्रेड करने से, कभी-कभी आपको इस तरह की त्रुटियाँ मिल सकती हैं: +अपने सबग्राफ को अपग्रेड करने पर, कभी-कभी आपको ऐसी त्रुटियाँ मिल सकती हैं: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -329,7 +329,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -हमने इसके लिए असेंबलीस्क्रिप्ट कंपाइलर पर एक मुद्दा खोला है, लेकिन अभी के लिए यदि आप अपने सबग्राफ मैपिंग में इस तरह के ऑपरेशन करते हैं, तो आपको इससे पहले एक अशक्त जांच करने के लिए उन्हें बदलना चाहिए। +हमने इस मुद्दे को AssemblyScript compiler पर खोला है, लेकिन अभी के लिए, यदि आप अपनी सबग्राफ mappings में इस प्रकार के संचालन कर रहे हैं, तो आपको इसके पहले एक null जांच करनी चाहिए। ```typescript let wrapper = new Wrapper(y) @@ -351,7 +351,7 @@ value.x = 10 value.y = 'content' ``` -यह संकलित होगा लेकिन रनटाइम पर टूट जाएगा, ऐसा इसलिए होता है क्योंकि मान प्रारंभ नहीं किया गया है, इसलिए सुनिश्चित करें कि आपके सबग्राफ ने उनके मानों को प्रारंभ किया है, जैसे: +यह संकलित हो जाएगा लेकिन रनटाइम पर टूट जाएगा, क्योंकि मान प्रारंभ नहीं किया गया है। इसलिए सुनिश्चित करें कि आपके सबग्राफ ने अपने मानों को प्रारंभ किया है, इस प्रकार: ```typescript var value = new Type() // initialized diff --git
a/website/src/pages/hi/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/hi/resources/migration-guides/graphql-validations-migration-guide.mdx index 71a47e6e2ac3..2285c96d1497 100644 --- a/website/src/pages/hi/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/hi/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: ग्राफक्यूएल सत्यापन माइग्रेशन गाइड +title: GraphQL Validations Migration Guide --- जल्द ही `ग्राफ़-नोड` [ग्राफ़क्यूएल सत्यापन विनिर्देश] (https://spec.graphql.org/June2018/#sec-Validation) के 100% कवरेज का समर्थन करेगा। @@ -20,7 +20,7 @@ title: ग्राफक्यूएल सत्यापन माइग् आप अपने ग्राफक्यूएल संचालन में किसी भी समस्या का पता लगाने और उन्हें ठीक करने के लिए सीएलआई माइग्रेशन टूल का उपयोग कर सकते हैं। वैकल्पिक रूप से आप `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` समापन बिंदु का उपयोग करने के लिए अपने ग्राफ़िकल क्लाइंट के समापन बिंदु को अपडेट कर सकते हैं। इस समापन बिंदु के विरुद्ध अपने प्रश्नों का परीक्षण करने से आपको अपने प्रश्नों में समस्याओं का पता लगाने में मदद मिलेगी। -> अगर आप [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) या [GraphQL Code Generator](https://the-guild.dev) का इस्तेमाल कर रहे हैं, तो सभी सबग्राफ को माइग्रेट करने की ज़रूरत नहीं है /graphql/codegen), वे पहले से ही सुनिश्चित करते हैं कि आपके प्रश्न मान्य हैं। +> Not all Subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. 
## माइग्रेशन सीएलआई टूल diff --git a/website/src/pages/hi/resources/roles/curating.mdx b/website/src/pages/hi/resources/roles/curating.mdx index 3d50ad907083..8c47325caf7f 100644 --- a/website/src/pages/hi/resources/roles/curating.mdx +++ b/website/src/pages/hi/resources/roles/curating.mdx @@ -1,88 +1,88 @@ --- -title: क्यूरेटिंग +title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators, The Graph की विकेंद्रीकृत अर्थव्यवस्था में महत्वपूर्ण भूमिका निभाते हैं। वे वेब3 इकोसिस्टम के अपने ज्ञान का उपयोग यह मूल्यांकन करने और संकेत देने के लिए करते हैं कि किन सबग्राफ को The Graph Network द्वारा अनुक्रमित किया जाना चाहिए। Graph Explorer के माध्यम से, Curators नेटवर्क डेटा को देखकर संकेत देने के निर्णय लेते हैं। बदले में, The Graph Network उन Curators को पुरस्कृत करता है जो उच्च गुणवत्ता वाले सबग्राफ पर संकेत देते हैं, उन्हें उन सबग्राफ द्वारा उत्पन्न क्वेरी शुल्क का एक हिस्सा प्राप्त होता है। Indexers के लिए यह तय करने में कि किन सबग्राफ को अनुक्रमित किया जाए, GRT संकेतित की गई राशि एक प्रमुख विचार है। -## What Does Signaling Mean for The Graph Network? +## The Graph Network के लिए signal देने का क्या अर्थ है? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. 
+इससे पहले कि उपभोक्ता किसी सबग्राफ पर क्वेरी कर सकें, उसे इंडेक्स किया जाना आवश्यक है। यही वह जगह है जहाँ क्यूरेशन काम आता है। ताकि Indexers उच्च गुणवत्ता वाले सबग्राफ पर पर्याप्त क्वेरी शुल्क कमा सकें, उन्हें यह जानने की जरूरत होती है कि किन सबग्राफ को इंडेक्स करना चाहिए। जब Curators किसी सबग्राफ पर संकेत देते हैं, तो यह Indexers को सूचित करता है कि कोई सबग्राफ मांग में है और इतनी उच्च गुणवत्ता का है कि उसे इंडेक्स किया जाना चाहिए। -Curators The Graph network को कुशल बनाते हैं और [संकेत देना](#how-to-signal) वह प्रक्रिया है जिसका उपयोग Curators यह बताने के लिए करते हैं कि कौन सा subgraph Indexer के लिए अच्छा है। Indexers Curator से आने वाले संकेत पर भरोसा कर सकते हैं क्योंकि संकेत देना के दौरान, Curators subgraph के लिए एक curation share मिंट करते हैं, जो उन्हें उस subgraph द्वारा उत्पन्न भविष्य के पूछताछ शुल्क के एक हिस्से का हकदार बनाता है। +Curators The Graph नेटवर्क को कुशल बनाते हैं और [signaling](#how-to-signal) वह प्रक्रिया है जिसका उपयोग Curators यह संकेत देने के लिए करते हैं कि किसी सबग्राफ को Indexers द्वारा इंडेक्स किया जाना चाहिए। Indexers Curators के संकेत पर भरोसा कर सकते हैं क्योंकि signaling करते समय, Curators सबग्राफ के लिए एक curation शेयर मिंट करते हैं, जिससे उन्हें उस सबग्राफ द्वारा उत्पन्न भविष्य की क्वेरी फीस का एक हिस्सा प्राप्त करने का अधिकार मिलता है। -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. 
+Curator संकेतों को ERC20 टोकन के रूप में प्रस्तुत किया जाता है, जिन्हें Graph Curation Shares (GCS) कहा जाता है। जो अधिक query शुल्क अर्जित करना चाहते हैं, उन्हें अपने GRT को उन सबग्राफ पर संकेतित करना चाहिए, जिनके बारे में वे भविष्यवाणी करते हैं कि वे नेटवर्क में शुल्क के प्रवाह को मजबूत बनाएंगे। Curators को गलत व्यवहार के लिए दंडित नहीं किया जा सकता, लेकिन नेटवर्क की अखंडता को नुकसान पहुंचाने वाले गलत निर्णय लेने से हतोत्साहित करने के लिए उन पर एक जमा कर (deposit tax) लगाया जाता है। यदि वे कम-गुणवत्ता वाले सबग्राफ पर curation करते हैं, तो वे कम query शुल्क अर्जित करेंगे क्योंकि या तो कम queries को प्रोसेस किया जाएगा या फिर उन्हें प्रोसेस करने के लिए कम Indexers उपलब्ध होंगे। -[Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) यह सुनिश्चित करता है कि सभी सबग्राफ को index किया जाए। किसी विशेष subgraph पर GRT को संकेत करने से अधिक indexers उस पर आकर्षित होते हैं। curation के माध्यम से अतिरिक्त Indexers को प्रोत्साहित करना queries की सेवा की गुणवत्ता को बढ़ाने के लिए है, जिससे latency कम हो और नेटवर्क उपलब्धता में सुधार हो। +[Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) यह सुनिश्चित करता है कि सभी सबग्राफ को Indexing मिले, जिससे किसी विशेष सबग्राफ पर GRT को संकेत देने से अधिक Indexers आकर्षित होंगे। इस क्यूरेशन के माध्यम से अतिरिक्त Indexers को प्रोत्साहित करना क्वेरीज़ की सेवा की गुणवत्ता को बढ़ाने का लक्ष्य रखता है, जिससे विलंबता (latency) कम हो और नेटवर्क उपलब्धता (availability) बेहतर हो। -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. 
-यदि आपको सेवा की गुणवत्ता बढ़ाने के लिए curation में सहायता की आवश्यकता हो, तो कृपया एज और नोड टीम को support@thegraph.zendesk.com पर अनुरोध भेजें और उन सबग्राफ को निर्दिष्ट करें जिनमें आपको सहायता चाहिए। +यदि आपको सेवा की गुणवत्ता बढ़ाने के लिए क्यूरेशन में सहायता की आवश्यकता हो, तो कृपया Edge & Node टीम को support@thegraph.zendesk.com पर एक अनुरोध भेजें और निर्दिष्ट करें कि किन सबग्राफ के लिए आपको सहायता चाहिए। -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers ग्राफ एक्सप्लोरर में उन्हें दिखाई देने वाले क्यूरेशन सिग्नल के आधार पर सबग्राफ को इंडेक्स करने के लिए खोज सकते हैं (स्क्रीनशॉट नीचे दिया गया है)। -![Explorer सबग्राफ](/img/explorer-subgraphs.png) +Subgraph Studio आपको अपने सबग्राफ़ में सिग्नल जोड़ने की सुविधा देता है, जिसमें आप अपने सबग्राफ़ के क्यूरेशन पूल में उसी लेन-देन के साथ GRT जोड़ सकते हैं, जब इसे प्रकाशित किया जाता है। ## सिग्नल कैसे करें -Graph Explorer के Curator टैब में, curators नेटवर्क स्टैट्स के आधार पर कुछ सबग्राफ पर signal और unsignal कर सकेंगे। Graph Explorer में यह कैसे करना है, इसका चरण-दर-चरण अवलोकन पाने के लिए [यहाँ क्लिक करें](/subgraphs/explorer/)। +Graph Explorer में Curator टैब के भीतर, क्यूरेटर कुछ नेटवर्क आंकड़ों के आधार पर कुछ सबग्राफ पर सिग्नल और अनसिग्नल कर सकेंगे। Graph Explorer में इसे चरण-दर-चरण कैसे किया जाए, इसके लिए [यहाँ](/subgraphs/explorer/) क्लिक करें।
-एक क्यूरेटर एक विशिष्ट सबग्राफ संस्करण पर संकेत देना चुन सकता है, या वे अपने सिग्नल को स्वचालित रूप से उस सबग्राफ के नवीनतम उत्पादन निर्माण में माइग्रेट करना चुन सकते हैं। दोनों मान्य रणनीतियाँ हैं और अपने स्वयं के पेशेवरों और विपक्षों के साथ आती हैं। +एक Curator किसी विशिष्ट सबग्राफ संस्करण पर संकेत देने का चयन कर सकता है, या वे अपने संकेत को स्वचालित रूप से उस सबग्राफ के नवीनतम उत्पादन निर्माण में माइग्रेट करने के लिए चुन सकते हैं। दोनों वैध रणनीतियाँ हैं और इनके अपने फायदे और नुकसान हैं। -विशेष संस्करण पर संकेत देना विशेष रूप से उपयोगी होता है जब एक subgraph को कई dapp द्वारा उपयोग किया जाता है। एक dapp को नियमित रूप से subgraph को नई विशेषता के साथ अपडेट करने की आवश्यकता हो सकती है। दूसरी dapp एक पुराना, अच्छी तरह से परीक्षण किया हुआ उपग्राफ subgraph संस्करण उपयोग करना पसंद कर सकती है। प्रारंभिक क्यूरेशन curation पर, 1% मानक कर tax लिया जाता है। +किसी विशिष्ट संस्करण पर signaling विशेष रूप से उपयोगी होती है जब एक सबग्राफ को कई dapps द्वारा उपयोग किया जाता है। एक dapp को नियमित रूप से नए फीचर्स के साथ सबग्राफ को अपडेट करने की आवश्यकता हो सकती है। वहीं, दूसरा dapp एक पुराने, अच्छी तरह से परीक्षण किए गए सबग्राफ संस्करण का उपयोग करना पसंद कर सकता है। प्रारंभिक curation के दौरान, 1% का मानक कर लिया जाता है। अपने सिग्नल को स्वचालित रूप से नवीनतम उत्पादन बिल्ड में माइग्रेट करना यह सुनिश्चित करने के लिए मूल्यवान हो सकता है कि आप क्वेरी शुल्क अर्जित करते रहें। हर बार जब आप क्यूरेट करते हैं, तो 1% क्यूरेशन टैक्स लगता है। आप हर माइग्रेशन पर 0.5% क्यूरेशन टैक्स भी देंगे। सबग्राफ डेवलपर्स को बार-बार नए संस्करण प्रकाशित करने से हतोत्साहित किया जाता है - उन्हें सभी ऑटो-माइग्रेटेड क्यूरेशन शेयरों पर 0.5% क्यूरेशन टैक्स देना पड़ता है। -> **नोट**पहला पता जो किसी विशेष subgraph को सिग्नल करता है, उसे पहला curator माना जाएगा और उसे बाकी आने वाले curators की तुलना में अधिक गैस-इंटेंसिव कार्य करना होगा क्योंकि पहला curator curation share टोकन को इनिशियलाइज़ करता है और टोकन को The Graph प्रॉक्सी में ट्रांसफर करता है। +> **नोट**: पहले किसी विशेष सबग्राफ को संकेत देने वाला पता पहले
क्यूरेटर के रूप में माना जाता है और उसे बाकी क्यूरेटरों की तुलना में अधिक गैस-गहन कार्य करना होगा क्योंकि पहला क्यूरेटर क्यूरेशन शेयर टोकनों को प्रारंभ करता है और साथ ही The Graph प्रॉक्सी में टोकन स्थानांतरित करता है। ## Withdrawing your GRT -Curators have the option to withdraw their signaled GRT at any time. +Curators के पास किसी भी समय अपना signaled GRT वापस लेने का विकल्प होता है। -Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). +Delegating की प्रक्रिया के विपरीत, यदि आप अपना signaled GRT वापस लेने का निर्णय लेते हैं तो आपको cooldown period की प्रतीक्षा नहीं करनी होगी और पूरी राशि (1% curation tax घटाकर) प्राप्त होगी। -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +जब कोई curator अपना signal वापस ले लेता है, तो Indexers सबग्राफ को इंडेक्स करते रहना चुन सकते हैं, भले ही वर्तमान में कोई सक्रिय GRT signal न हो। -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +हालाँकि, यह सिफारिश की जाती है कि Curators अपने संकेतित GRT को उसी स्थान पर छोड़ दें, न केवल क्वेरी शुल्क का एक हिस्सा प्राप्त करने के लिए, बल्कि सबग्राफ की विश्वसनीयता और अपटाइम सुनिश्चित करने के लिए भी। ## जोखिम 1. क्वेरी बाजार द ग्राफ में स्वाभाविक रूप से युवा है और इसमें जोखिम है कि नवजात बाजार की गतिशीलता के कारण आपका %APY आपकी अपेक्षा से कम हो सकता है।
This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. बग के कारण सबग्राफ विफल हो सकता है। एक विफल सबग्राफ क्वेरी शुल्क अर्जित नहीं करता है। नतीजतन, आपको तब तक इंतजार करना होगा जब तक कि डेवलपर बग को ठीक नहीं करता है और एक नया संस्करण तैनात करता है। - - यदि आपने सबग्राफ के नवीनतम संस्करण की सदस्यता ली है, तो आपके शेयर उस नए संस्करण में स्वत: माइग्रेट हो जाएंगे। इस पर 0.5% क्यूरेशन टैक्स लगेगा। - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - जब कोई curator किसी सबग्राफ पर GRT को signal करता है, तो उसे 1% curation tax देना होता है। यह शुल्क जला दिया जाता है। +3. (Ethereum only) जब क्यूरेटर अपने शेयरों को जलाकर GRT निकालते हैं, तो बचे हुए शेयरों का GRT मूल्यांकन कम हो जाएगा। ध्यान दें कि कुछ मामलों में, क्यूरेटर अपने शेयरों को एक ही बार में जलाने का निर्णय ले सकते हैं। यह स्थिति आम हो सकती है यदि कोई dapp डेवलपर अपने सबग्राफ का संस्करण अपडेट करना/सुधारना और क्वेरी करना बंद कर देता है या यदि कोई सबग्राफ विफल हो जाता है। परिणामस्वरूप, शेष क्यूरेटर केवल अपने प्रारंभिक GRT का एक अंश ही निकालने में सक्षम हो सकते हैं। कम जोखिम प्रोफ़ाइल वाली नेटवर्क भूमिका के लिए, देखें [Delegators](/resources/roles/delegating/delegating/)। +4.
एक सबग्राफ किसी बग के कारण फेल हो सकता है। एक फेल हुआ सबग्राफ क्वेरी शुल्क प्राप्त नहीं करता है। इसके परिणामस्वरूप, आपको तब तक इंतजार करना होगा जब तक डेवलपर बग को ठीक नहीं करता और एक नया संस्करण डिप्लॉय नहीं करता। + - यदि आप किसी सबग्राफ के नवीनतम संस्करण की सदस्यता लिए हुए हैं, तो आपके शेयर स्वचालित रूप से उस नए संस्करण में स्थानांतरित हो जाएंगे। इसके लिए 0.5% क्यूरेशन टैक्स लिया जाएगा। + - यदि आपने किसी विशिष्ट सबग्राफ संस्करण पर संकेत दिया है और वह विफल हो जाता है, तो आपको मैन्युअल रूप से अपने क्यूरेशन शेयर जलाने होंगे। इसके बाद, आप नए सबग्राफ संस्करण पर संकेत दे सकते हैं, जिससे आपको 1% क्यूरेशन कर देना होगा। ## अवधि पूछे जाने वाले प्रश्न ### 1. क्यूरेटर क्वेरी फीस का कितना % कमाते हैं? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +सबग्राफ पर संकेत देने से, आप उन सभी क्वेरी शुल्कों में से एक हिस्सा अर्जित करेंगे जो सबग्राफ उत्पन्न करता है। सभी क्वेरी शुल्कों का 10% Curators को उनके curation shares के अनुसार प्रो-राटा आधार पर जाता है। यह 10% शासन के अधीन है। -### 2. मैं यह कैसे तय करूं कि कौन से सबग्राफ सिग्नल देने के लिए उच्च गुणवत्ता वाले हैं? +### 2. मुझे यह कैसे तय करना चाहिए कि कौन से सबग्राफ उच्च गुणवत्ता वाले हैं जिन पर संकेत देना है? 
-उच्च-गुणवत्ता वाले सबग्राफ खोजना एक जटिल कार्य है, लेकिन इसे कई अलग-अलग तरीकों से किया जा सकता है। एक Curator के रूप में, आपको उन भरोसेमंद सबग्राफ को देखना चाहिए जो query volume को बढ़ा रहे हैं। एक भरोसेमंद subgraph मूल्यवान हो सकता है यदि वह पूर्ण, सटीक हो और किसी dapp की डेटा आवश्यकताओं को पूरा करता हो। एक खराब डिज़ाइन किया गया subgraph संशोधित या पुनः प्रकाशित करने की आवश्यकता हो सकती है और अंततः असफल भी हो सकता है। यह Curators के लिए अत्यंत महत्वपूर्ण है कि वे किसी subgraph की संरचना या कोड की समीक्षा करें ताकि यह आकलन कर सकें कि subgraph मूल्यवान है या नहीं। +उच्च-गुणवत्ता वाले सबग्राफ खोजना एक जटिल कार्य है, लेकिन इसे कई अलग-अलग तरीकों से अपनाया जा सकता है। एक Curator के रूप में, आपको उन भरोसेमंद सबग्राफ की तलाश करनी चाहिए जो query volume को बढ़ा रहे हैं। एक भरोसेमंद सबग्राफ मूल्यवान हो सकता है यदि यह पूर्ण, सटीक हो और किसी dapp की डेटा आवश्यकताओं का समर्थन करता हो। एक खराब तरीके से डिज़ाइन किए गए सबग्राफ को संशोधित या पुनः प्रकाशित करने की आवश्यकता हो सकती है, और यह विफल भी हो सकता है। यह महत्वपूर्ण है कि Curators किसी Subgraph की संरचना या कोड की समीक्षा करें ताकि यह आकलन किया जा सके कि कोई सबग्राफ मूल्यवान है या नहीं। -- क्यूरेटर नेटवर्क की अपनी समझ का उपयोग करके यह अनुमान लगाने की कोशिश कर सकते हैं कि भविष्य में कोई विशेष सबग्राफ़ अधिक या कम क्वेरी वॉल्यूम कैसे उत्पन्न कर सकता है। -- क्यूरेटर को Graph Explorer के माध्यम से उपलब्ध मेट्रिक्स को भी समझना चाहिए। जैसे कि पिछले क्वेरी वॉल्यूम और सबग्राफ़ डेवलपर कौन है, ये मेट्रिक्स यह तय करने में मदद कर सकते हैं कि किसी सबग्राफ़ पर सिग्नलिंग करना उचित है या नहीं। +- Curators अपने नेटवर्क की समझ का उपयोग करके यह भविष्यवाणी करने का प्रयास कर सकते हैं कि भविष्य में किसी व्यक्तिगत सबग्राफ में क्वेरी वॉल्यूम अधिक या कम कैसे हो सकता है। +- Curators को यह भी समझना चाहिए कि Graph Explorer के माध्यम से उपलब्ध मीट्रिक्स क्या हैं। पिछले क्वेरी वॉल्यूम और सबग्राफ डेवलपर कौन है जैसे मीट्रिक्स यह निर्धारित करने में मदद कर सकते हैं कि किसी सबग्राफ पर संकेत देना उचित है या नहीं। -### 3.
What’s the cost of updating a subgraph? +### 3. किसी सबग्राफ को अपडेट करने की लागत क्या है? -नए subgraph संस्करण में अपनी curation shares को माइग्रेट करने पर 1% curation टैक्स लगता है। Curators नए subgraph संस्करण को सब्सक्राइब करने का विकल्प चुन सकते हैं। जब curator shares अपने आप नए संस्करण में माइग्रेट होती हैं, तो Curators को आधा curation टैक्स, यानी 0.5%, देना पड़ता है क्योंकि सबग्राफ को अपग्रेड करना एक ऑनचेन क्रिया है जो गैस खर्च करती है। +अपने curation shares को नए सबग्राफ संस्करण में माइग्रेट करने पर 1% का curation tax लगता है। Curators नए संस्करण की सदस्यता लेने का विकल्प चुन सकते हैं। जब curator shares स्वतः नए संस्करण में माइग्रेट होते हैं, तो Curators को आधा curation tax, यानी 0.5% देना होगा, क्योंकि सबग्राफ को अपग्रेड करना एक ऑनचेन प्रक्रिया है, जिसमें गैस शुल्क लगता है। -### 4. How often can I update my subgraph? +### 4. मैं अपना सबग्राफ कितनी बार अपडेट कर सकता हूँ? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +ऐसा सुझाव दिया जाता है कि आप अपने सबग्राफ को बहुत बार अपडेट न करें। अधिक जानकारी के लिए ऊपर दिए गए प्रश्न को देखें। ### 5. क्या मैं अपने क्यूरेशन शेयर बेच सकता हूँ? क्यूरेशन शेयरों को अन्य ERC20 टोकनों की तरह "खरीदा" या "बेचा" नहीं जा सकता, जिन्हें आप जानते होंगे। इन्हें केवल मिंट (निर्मित) या बर्न (नष्ट) किया जा सकता है। -As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +Arbitrum पर Curator के रूप में, आपको शुरू में जमा किया गया GRT (tax घटाकर) वापस मिलने की guarantee है। ### 6. Am I eligible for a curation grant? -Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. +Curation grants का निर्धारण case-by-case आधार पर अलग-अलग किया जाता है। यदि आपको curation में सहायता की आवश्यकता है, तो कृपया support@thegraph.zendesk.com पर एक request भेजें। अभी भी उलझन में?
नीचे हमारे क्यूरेशन वीडियो गाइड देखें: diff --git a/website/src/pages/hi/resources/roles/delegating/delegating.mdx b/website/src/pages/hi/resources/roles/delegating/delegating.mdx index 398581e518b8..0e5d8027b822 100644 --- a/website/src/pages/hi/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/hi/resources/roles/delegating/delegating.mdx @@ -2,54 +2,54 @@ title: Delegating --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +तुरंत डेलीगेट करना शुरू करने के लिए, यहाँ देखें [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one)। -## अवलोकन +## Overview -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +Delegator, Indexer को GRT डेलीगेट करके GRT अर्जित करते हैं, जिससे नेटवर्क की सुरक्षा और कार्यक्षमता में मदद मिलती है। -## Benefits of Delegating +## Delegation के लाभ -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers. +- Indexers का समर्थन करके नेटवर्क की सुरक्षा और विस्तार क्षमता को मजबूत करें। +- Indexer द्वारा उत्पन्न इनामों के एक हिस्से को कमाएं। -## How Does Delegation Work? +## Delegation कैसे काम करता है? -Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. +Delegators उन Indexer(s) से GRT पुरस्कार अर्जित करते हैं जिनको वे अपना GRT डेलिगेट करने के लिए चुनते हैं। -An Indexer's ability to process queries and earn rewards depends on three key factors: +किसी Indexer की क्वेरी प्रोसेस करने और पुरस्कार अर्जित करने की क्षमता तीन मुख्य कारकों पर निर्भर करती है: -1. The Indexer's Self-Stake (GRT staked by the Indexer). -2. The total GRT delegated to them by Delegators. -3. The price the Indexer sets for queries. +1. Indexer की स्वयं की स्टेक (Indexer द्वारा स्टेक किया गया GRT)। +2. Delegators द्वारा उन्हें डेलीगेट किया गया कुल GRT। +3.
क्वेरी के लिए Indexer द्वारा निर्धारित कीमत। -The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer. +जितना अधिक GRT किसी Indexer को स्टेक और डेलीगेट किया जाता है, उतनी ही अधिक क्वेरीज़ वे सर्व कर सकते हैं, जिससे Delegator और Indexer दोनों के लिए अधिक संभावित रिवॉर्ड मिल सकते हैं। -### What is Delegation Capacity? +### Delegation Capacity क्या है? -Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. +delegation क्षमता उस अधिकतम GRT को दर्शाती है जिसे एक Indexer अपने Self-Stake के आधार पर Delegators से स्वीकार कर सकता है। -The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. +The Graph Network में 16 का एक delegation अनुपात शामिल है, जिसका अर्थ है कि एक Indexer अपनी स्वयं की स्टेक का 16 गुना तक डेलीगेट किए गए GRT को स्वीकार कर सकता है। -For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +उदाहरण के लिए, यदि किसी Indexer के पास 1M GRT का Self-Stake है, तो उनकी Delegation Capacity 16M होगी। -### Why Does Delegation Capacity Matter? +### delegation क्षमता क्यों मायने रखती है? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +यदि कोई Indexer अपनी Delegation Capacity से अधिक हो जाता है, तो सभी Delegators के लिए इनाम कम हो जाता है क्योंकि अतिरिक्त सौंपे गए GRT को प्रोटोकॉल के भीतर प्रभावी रूप से उपयोग नहीं किया जा सकता। -This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. 
+इसलिए, Delegators के लिए किसी Indexer का चयन करने से पहले उसकी वर्तमान Delegation Capacity का मूल्यांकन करना अत्यंत महत्वपूर्ण है। -Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +Indexers अपनी Self-Stake बढ़ाकर अपनी Delegation Capacity बढ़ा सकते हैं, जिससे delegated tokens की सीमा बढ़ जाती है। -## Delegation on The Graph +## The Graph पर Delegation -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> यह मार्गदर्शिका उन चरणों को शामिल नहीं करती है जैसे कि MetaMask सेट करना। Ethereum समुदाय वॉलेट्स के बारे में एक [व्यापक संसाधन](https://ethereum.org/en/wallets/) प्रदान करता है। -There are two sections in this guide: +इस गाइड में दो अनुभाग हैं: - ग्राफ़ नेटवर्क में टोकन सौंपने का जोखिम - प्रतिनिधि के रूप में अपेक्षित रिटर्न की गणना कैसे करें @@ -58,7 +58,7 @@ There are two sections in this guide: प्रोटोकॉल में प्रतिनिधि होने के मुख्य जोखिमों की सूची नीचे दी गई है। -### The Delegation Tax +### Delegation कर प्रतिनिधियों को खराब व्यवहार के लिए कम नहीं किया जा सकता है, लेकिन खराब निर्णय लेने को हतोत्साहित करने के लिए प्रतिनिधियों पर एक कर है जो नेटवर्क की अखंडता को नुकसान पहुंचा सकता है। @@ -68,21 +68,21 @@ There are two sections in this guide: - सुरक्षित रहने के लिए, आपको Indexer को डेलीगेट करते समय अपने संभावित रिटर्न की गणना करनी चाहिए। उदाहरण के लिए, आप यह गणना कर सकते हैं कि आपके डेलीगेशन पर 0.5% कर वापस कमाने में कितने दिन लगेंगे। -### The Undelegation Period +### अनडेलीगेशन अवधि -When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. +जब एक Delegator अनडेलीगेट करने का चयन करता है, तो उनके टोकन 28-दिन की अनडेलीगेशन अवधि के अधीन होते हैं। -This means they cannot transfer their tokens or earn any rewards for 28 days.
+इसका मतलब है कि वे 28 दिनों तक अपने टोकन ट्रांसफर नहीं कर सकते या कोई इनाम अर्जित नहीं कर सकते। -After the undelegation period, GRT will return to your crypto wallet. +अनडेलीगेशन अवधि के बाद, GRT आपके क्रिप्टो वॉलेट में वापस आ जाएगा। ### यह क्यों महत्वपूर्ण है? -If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. +यदि आप किसी ऐसे Indexer को चुनते हैं जो भरोसेमंद नहीं है या अच्छा काम नहीं कर रहा है, तो आप उसे अनडेलीगेट करना चाहेंगे। इसका अर्थ है कि आप इनाम अर्जित करने के अवसरों को खो देंगे। -As a result, it’s recommended that you choose an Indexer wisely. +परिणामस्वरूप, यह अनुशंसा की जाती है कि आप Indexer को समझदारी से चुनें। -![Delegation unbonding. Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) +![delegation अनबॉन्डिंग। delegation UI में 0.5% शुल्क को नोट करें, साथ ही 28 दिन की अनबॉन्डिंग अवधि।](/img/Delegation-Unbonding.png) #### डेलीगेशन पैरामीटर @@ -92,29 +92,29 @@ As a result, it’s recommended that you choose an Indexer wisely. - यदि किसी Indexer का पुरस्कार कट 100% पर सेट है, तो एक Delegator के रूप में, आपको 0 इंडेक्सिंग पुरस्कार मिलेंगे। - यदि इसे 80% पर सेट किया गया है, तो एक Delegator के रूप में, आप 20% प्राप्त करेंगे। -![Indexing Reward Cut. The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. The bottom one is giving Delegators ~83%.](/img/Indexing-Reward-Cut.png) +![Indexing Reward Cut.
शीर्ष Indexer Delegators को 90% इनाम दे रहा है। मध्य वाला Delegators को 20% दे रहा है। निचला वाला Delegators को ~83% दे रहा है।](/img/Indexing-Reward-Cut.png) - \*\*पूछताछ शुल्क कटौती - यह बिल्कुल Indexing Reward Cut की तरह है, लेकिन यह उन पूछताछ शुल्क पर लागू होता है जो Indexer एकत्र करता है। -- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. +- यह अत्यधिक अनुशंसा की जाती है कि आप [The Graph Discord](https://discord.gg/graphprotocol) का अन्वेषण करें ताकि यह निर्धारित किया जा सके कि किन Indexers की सामाजिक और तकनीकी प्रतिष्ठा सर्वश्रेष्ठ है। -- Many Indexers are active in Discord and will be happy to answer your questions. +- कई Indexers Discord में सक्रिय हैं और आपके प्रश्नों का उत्तर देने में खुश होंगे। ## यहाँ पर 'Delegators' की अपेक्षित लाभ की गणना की जा रही है। -> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +> अपने delegation पर ROI की गणना [यहां](https://thegraph.com/explorer/delegate?chain=arbitrum-one) करें। -A Delegator must consider a variety of factors to determine a return: +एक Delegator को प्रतिफल निर्धारित करने के लिए विभिन्न कारकों पर विचार करना चाहिए: - -An Indexer's ability to use the delegated GRT available to them impacts their rewards. +किसी Indexer की उन्हें उपलब्ध प्रत्यायोजित GRT का उपयोग करने की क्षमता उनके पुरस्कारों को प्रभावित करती है। -If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. +यदि कोई Indexer अपने निपटान में उपलब्ध सभी GRT को आवंटित नहीं करता है, तो वे स्वयं और उनके Delegators दोनों के लिए संभावित आय को अधिकतम करने का अवसर खो सकते हैं। -Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window.
However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. +Indexers किसी आवंटन को बंद कर सकते हैं और 1 से 28 दिन की विंडो के भीतर किसी भी समय इनाम एकत्र कर सकते हैं। हालाँकि, यदि इनाम तुरंत एकत्र नहीं किए जाते हैं, तो कुल इनाम कम दिखाई दे सकते हैं, भले ही इनाम का कुछ प्रतिशत अप्राप्त बना रहे। ### प्रश्न शुल्क में कटौती और अनुक्रमण शुल्क में कटौती को ध्यान में रखते हुए -You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. +आपको एक Indexer चुनना चाहिए जो अपने Query Fee और Indexing Fee Cuts को निर्धारित करने में पारदर्शी हो। सूत्र है: diff --git a/website/src/pages/hi/resources/subgraph-studio-faq.mdx b/website/src/pages/hi/resources/subgraph-studio-faq.mdx index 9901cc26d73f..d04f10b22b7c 100644 --- a/website/src/pages/hi/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/hi/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: सबग्राफ स्टूडियो अक्सर पूछ ## 1. सबग्राफ स्टूडियो क्या है? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) एक **dapp** है, जो Subgraphs और **API keys** बनाने, प्रबंधित करने और प्रकाशित करने के लिए उपयोग किया जाता है। ## 2. मैं एक एपीआई कुंजी कैसे बना सकता हूँ? @@ -12,20 +12,22 @@ To create an API, navigate to Subgraph Studio and connect your wallet. You will ## 3. क्या मैं कई एपीआई कुंजियां बना सकता हूं? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +हाँ! आप विभिन्न परियोजनाओं में उपयोग करने के लिए कई एपीआई keys बना सकते हैं। [यहां](https://thegraph.com/studio/apikeys/) लिंक देखें। ## 4. मैं एपीआई कुंजी के लिए डोमेन को कैसे प्रतिबंधित करूं? एपीआई key बनाने के बाद, सुरक्षा अनुभाग में, आप उन डोमेन को परिभाषित कर सकते हैं जो किसी विशिष्ट एपीआई key को क्वेरी कर सकते हैं। -## 5.
क्या मैं अपना सबग्राफ किसी अन्य स्वामी को स्थानांतरित कर सकता हूं? +## 5. क्या मैं अपना Subgraph किसी अन्य मालिक को ट्रांसफर कर सकता हूँ? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +हाँ, जो Subgraphs **Arbitrum One** पर प्रकाशित किए गए हैं, उन्हें किसी नए **wallet** या **Multisig** में ट्रांसफर किया जा सकता है। इसके लिए, Subgraph की **details page** पर 'Publish' बटन के पास तीन बिंदुओं (•••) पर क्लिक करें और **'Transfer ownership'** विकल्प चुनें। -ध्यान दें कि एक बार स्थानांतरित हो जाने के बाद आप स्टूडियो में सबग्राफ को देख या संपादित नहीं कर पाएंगे। +ध्यान दें कि ट्रांसफर करने के बाद, आप Studio में उस Subgraph को देख या संपादित नहीं कर पाएंगे। -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. अगर मैं जिस Subgraph को उपयोग करना चाहता हूँ, उसका **developer** नहीं हूँ, तो मैं उसके **query URLs** कैसे खोज सकता हूँ? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. 
+आप **Graph Explorer** के **Subgraph Details** सेक्शन में प्रत्येक Subgraph का **query URL** देख सकते हैं। - **"Query"** बटन पर क्लिक करने पर, एक पैन खुल जाएगा, जहाँ आपको इच्छित Subgraph का **query URL** मिलेगा। -याद रखें कि आप एक एपीआई key बना सकते हैं और नेटवर्क पर प्रकाशित किसी सबग्राफ को क्वेरी कर सकते हैं, भले ही आप स्वयं एक सबग्राफ बनाते हों। नई एपीआई key के माध्यम से ये प्रश्न, नेटवर्क पर किसी अन्य के रूप में भुगतान किए गए प्रश्न हैं। +- इसके बाद, **``** प्लेसहोल्डर को अपने **Subgraph Studio** के API key से बदलकर उपयोग कर सकते हैं। + +याद रखें कि आप एक **API key** बना सकते हैं और नेटवर्क पर प्रकाशित किसी भी Subgraph को query कर सकते हैं, चाहे आपने वह Subgraph खुद बनाया हो या नहीं। नई **API key** के माध्यम से किए गए ये queries, नेटवर्क पर अन्य queries की तरह **paid queries** होंगे। diff --git a/website/src/pages/hi/resources/tokenomics.mdx b/website/src/pages/hi/resources/tokenomics.mdx index e3437e3a0fff..ac420f323ab7 100644 --- a/website/src/pages/hi/resources/tokenomics.mdx +++ b/website/src/pages/hi/resources/tokenomics.mdx @@ -1,12 +1,12 @@ --- -title: ग्राफ नेटवर्क के टोकनोमिक्स -sidebarTitle: Tokenomics +title: Tokenomics of The Graph Network +sidebarTitle: टोकनोमिक्स description: The Graph Network को शक्तिशाली टोकनोमिक्स द्वारा प्रोत्साहित किया जाता है। यहां बताया गया है कि GRT, The Graph का मूल कार्य उपयोगिता टोकन, कैसे काम करता है। --- -## अवलोकन +## Overview -The Graph एक विकेन्द्रीकृत प्रोटोकॉल है जो ब्लॉकचेन डेटा तक आसान पहुंच सक्षम करता है। यह ब्लॉकचेन डेटा को उसी तरह से अनुक्रमित करता है जैसे Google वेब को अनुक्रमित करता है। यदि आपने किसी dapp का उपयोग किया है जो किसी Subgraph से डेटा पुनर्प्राप्त करता है, तो संभवतः आपने The Graph के साथ इंटरैक्ट किया है। आज, वेब3 इकोसिस्टम में हजारों [popular dapps](https://thegraph.com/explorer) The Graph का उपयोग कर रहे हैं। +The Graph एक **decentralized protocol** है, जो **blockchain data** तक आसान पहुँच प्रदान करता है। यह **blockchain data** को उसी तरह **index** करता है, जैसे **Google** वेब को **index** 
करता है। अगर आपने किसी ऐसे **dapp** का उपयोग किया है जो किसी **Subgraph** से डेटा प्राप्त करता है, तो आपने संभवतः **The Graph** के साथ इंटरैक्ट किया है। आज, **Web3 ecosystem** में हजारों [लोकप्रिय dapps](https://thegraph.com/explorer) **The Graph** का उपयोग कर रहे हैं। ## विशिष्टताएँ @@ -14,90 +14,93 @@ The Graph का मॉडल एक B2B2C मॉडल के समान ह The Graph ब्लॉकचेन डेटा को अधिक सुलभ बनाने में महत्वपूर्ण भूमिका निभाता है और इसके आदान-प्रदान के लिए एक मार्केटप्लेस का समर्थन करता है। The Graph के पे-फॉर-व्हाट-यू-नीड मॉडल के बारे में अधिक जानने के लिए, इसके [free and growth plans](/subgraphs/billing/) देखें। -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +- GRT टोकन पता मुख्य नेटवर्क पर: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +- GRT टोकन पता Arbitrum One पर: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## The Roles of Network Participants +## नेटवर्क प्रतिभागियों की भूमिकाएँ -There are four primary network participants: +चार प्रमुख नेटवर्क प्रतिभागी हैं: -1. Delegators - Delegate GRT to Indexers & secure the network +1. Delegators - Indexers को GRT सौंपें और नेटवर्क को सुरक्षित करें -2. Curators - Find the best subgraphs for Indexers +2. **Curators** - Indexers के लिए सबसे अच्छे **Subgraphs** खोजें। -3. Developers - Build & query subgraphs +3. **Developers** - **Subgraphs** बनाएं और उन्हें **query** करें। 4. इंडेक्सर्स - ब्लॉकचेन डेटा की रीढ़ -Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. 
For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). +मछुआरे और मध्यस्थ भी अन्य योगदानों के माध्यम से नेटवर्क की सफलता में महत्वपूर्ण भूमिका निभाते हैं, अन्य प्राथमिक प्रतिभागी भूमिकाओं के कार्यों का समर्थन करते हैं। नेटवर्क भूमिकाओं के बारे में अधिक जानकारी के लिए, [यह लेख पढ़ें](https://thegraph.com/blog/the-graph-grt-token-economics/)। -![Tokenomics diagram](/img/updated-tokenomics-image.png) +![Tokenomics आरेख](/img/updated-tokenomics-image.png) -## Delegators (Passively earn GRT) +## Delegator(निष्क्रिय रूप से GRT कमाएं) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +**Indexers** को **Delegators** द्वारा **GRT** डेलिगेट किया जाता है, जिससे नेटवर्क पर Subgraphs में Indexer की **stake** बढ़ती है। इसके बदले में, **Delegators** को Indexer से मिलने वाले कुल **query fees** और **indexing rewards** का एक निश्चित प्रतिशत मिलता है। हर **Indexer** स्वतंत्र रूप से तय करता है कि वह **Delegators** को कितना रिवार्ड देगा, जिससे **Indexers** के बीच **Delegators** को आकर्षित करने की प्रतिस्पर्धा बनी रहती है। अधिकांश **Indexers** सालाना **9-12%** रिटर्न ऑफर करते हैं। -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. +यदि कोई Delegator 15k GRT को किसी ऐसे Indexer को डेलिगेट करता है जो 10% की पेशकश कर रहा है, तो Delegator को वार्षिक रूप से ~1,500 GRT का इनाम प्राप्त होगा। -There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. 
Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. +नेटवर्क पर किसी Delegator द्वारा GRT डेलीगेट करने पर 0.5% डेलीगेशन टैक्स जल जाता है। यदि कोई Delegator अपने डेलीगेट किए गए GRT को वापस लेने का निर्णय लेता है, तो उसे 28-एपॉक अनबॉन्डिंग अवधि की प्रतीक्षा करनी होगी। प्रत्येक एपॉक 6,646 ब्लॉक्स का होता है, जिसका अर्थ है कि 28 एपॉक लगभग 26 दिनों के बराबर होते हैं। -If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. +अगर आप इसे पढ़ रहे हैं, तो आप अभी Delegator बन सकते हैं — बस [network participants page](https://thegraph.com/explorer/participants/indexers) पर जाएं और अपनी पसंद के किसी Indexer को GRT डेलीगेट करें। -## Curators (Earn GRT) +## Curators (GRT कमाएं) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +**Curators** उच्च-गुणवत्ता वाले **Subgraphs** की पहचान करते हैं और उन्हें **"curate"** करते हैं (अर्थात, उन पर **GRT signal** करते हैं) ताकि **curation shares** कमा सकें। ये **curation shares** उस **Subgraph** द्वारा उत्पन्न सभी भविष्य की **query fees** का एक निश्चित प्रतिशत सुनिश्चित करते हैं। हालाँकि कोई भी स्वतंत्र नेटवर्क प्रतिभागी **Curator** बन सकता है, आमतौर पर **Subgraph developers** अपने स्वयं के **Subgraphs** के पहले **Curators** होते हैं, क्योंकि वे सुनिश्चित करना चाहते हैं कि उनका **Subgraph indexed** हो। -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
+**Subgraph developers** को सलाह दी जाती है कि वे अपने **Subgraph** को कम से कम **3,000 GRT** के साथ **curate** करें। हालांकि, यह संख्या **network activity** और **community participation** के अनुसार बदल सकती है। -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +**Curators** को किसी नए **Subgraph** को **curate** करते समय **1% curation tax** देना पड़ता है। यह **curation tax** **burn** हो जाता है, जिससे **GRT** की कुल आपूर्ति कम होती है। -## Developers +## डेवलपर्स -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +**Developers** **Subgraphs** बनाते हैं और उन्हें **query** करके **blockchain data** प्राप्त करते हैं। चूंकि **Subgraphs** **open source** होते हैं, **developers** मौजूदा **Subgraphs** को **query** करके अपने **dapps** में **blockchain data** लोड कर सकते हैं। **Developers** द्वारा किए गए **queries** के लिए **GRT** में भुगतान किया जाता है, जो नेटवर्क प्रतिभागियों के बीच वितरित किया जाता है। -### सबग्राफ बनाना +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +**Developers** **[Subgraph create](/developing/creating-a-subgraph/)** करके **blockchain** पर डेटा **index** कर सकते हैं। **Subgraphs** यह निर्देश देते हैं कि **Indexers** को कौन सा डेटा **consumers** को उपलब्ध कराना चाहिए। -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
+जब **developers** अपना **Subgraph** बना और टेस्ट कर लेते हैं, तो वे इसे **The Graph** के **decentralized network** पर **[publish](/subgraphs/developing/publishing/publishing-a-subgraph/)** कर सकते हैं। -### किसी मौजूदा सबग्राफ को क्वेरी करना +### मौजूदा **Subgraph** को **query** करना -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +एक बार **Subgraph** **[published](/subgraphs/developing/publishing/publishing-a-subgraph/)** हो जाने के बाद, कोई भी **API key** बना सकता है, अपनी **billing balance** में **GRT** जोड़ सकता है और **Subgraph** को **query** कर सकता है। सबग्राफ़ को [GraphQL](/subgraphs/querying/introduction/) का उपयोग करके क्वेरी किया जाता है, और क्वेरी शुल्क का भुगतान [Subgraph Studio](https://thegraph.com/studio/) में GRT के साथ किया जाता है। क्वेरी शुल्क को नेटवर्क प्रतिभागियों में उनके प्रोटोकॉल में योगदान के आधार पर वितरित किया जाता है। -1% of the query fees paid to the network are burned. +नेटवर्क को दिए गए क्वेरी शुल्क का 1% नष्ट (burn) कर दिया जाता है। -## Indexers (Earn GRT) +## Indexers (GRT कमाएँ) -Indexers The Graph की रीढ़ हैं। वे स्वतंत्र हार्डवेयर और सॉफ़्टवेयर संचालित करते हैं जो The Graph के विकेन्द्रीकृत नेटवर्क को शक्ति प्रदान करता है। Indexers, सबग्राफ से निर्देशों के आधार पर उपभोक्ताओं को डेटा प्रदान करते हैं। +**Indexers** **The Graph** की रीढ़ हैं। वे **The Graph** के **decentralized network** को चलाने के लिए स्वतंत्र **hardware** और **software** ऑपरेट करते हैं। **Indexers**, **Subgraphs** से मिले निर्देशों के आधार पर **data consumers** को डेटा प्रदान करते हैं। Indexers दो तरीकों से GRT रिवार्ड्स कमा सकते हैं: -1.
**क्वेरी शुल्क:** डेवलपर्स या उपयोगकर्ताओं द्वारा Subgraph डेटा क्वेरी के लिए भुगतान किया गया GRT। क्वेरी शुल्क सीधे Indexers को एक्सपोनेंशियल रिबेट फ़ंक्शन के अनुसार वितरित किया जाता है (देखें GIP [यहाँ](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162))। +1. **Query fees**: **Developers** या **users** द्वारा **Subgraph data queries** के लिए भुगतान किए गए **GRT**। ये शुल्क सीधे **Indexers** को **exponential rebate function** के अनुसार वितरित किए जाते हैं। [यहां देखें](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)। -2. **Indexing रिवार्ड्स**: 3% की वार्षिक जारी राशि Indexers को उनके द्वारा indexed किए गए सबग्राफकी संख्या के आधार पर वितरित की जाती है। ये पुरस्कार Indexers को सबग्राफको index करने के लिए प्रेरित करते हैं, कभी-कभी query fees शुरू होने से पहले भी, ताकि वे Proofs of Indexing (POIs) को एकत्रित और प्रस्तुत कर सकें, यह सत्यापित करने के लिए कि उन्होंने डेटा को सटीक रूप से index किया है। +2. **Indexing rewards**: **3% वार्षिक जारी किए गए GRT** को **Indexers** के बीच वितरित किया जाता है, इस आधार पर कि वे कितने **Subgraphs** को **index** कर रहे हैं। ये **rewards** Indexers को **Subgraphs** को **index** करने के लिए प्रेरित करते हैं, कभी-कभी **query fees** शुरू होने से पहले ही, ताकि वे **Proofs of Indexing (POIs)** जमा कर सकें और सत्यापित कर सकें कि उन्होंने डेटा को सही तरीके से **index** किया है। -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +प्रत्येक **Subgraph** को कुल नेटवर्क **token issuance** का एक हिस्सा आवंटित किया जाता है, जो उस **Subgraph** के **curation signal** की मात्रा पर आधारित होता है। यह राशि फिर उस **Subgraph** पर **Indexers** के आवंटित **stake** के अनुसार उन्हें **reward** के रूप में दी जाती है। -In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. 
Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. +एक Indexing Node चलाने के लिए, Indexers को नेटवर्क के साथ 100,000 GRT या उससे अधिक की स्वयं-स्टेकिंग करनी होगी। Indexers को उनके द्वारा सर्व की जाने वाली क्वेरी की मात्रा के अनुपात में GRT स्वयं-स्टेक करने के लिए प्रोत्साहित किया जाता है। -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +**Indexers** अपने **Subgraph** पर **GRT allocation** बढ़ाने के लिए **Delegators** से **GRT delegation** स्वीकार कर सकते हैं, और वे अपने प्रारंभिक **self-stake** का अधिकतम **16 गुना** स्वीकार कर सकते हैं। यदि कोई **Indexer** "over-delegated" हो जाता है (अर्थात् उसका **delegated GRT** उसके प्रारंभिक **self-stake** के 16 गुना से अधिक हो जाता है), तो वह नेटवर्क में अपना **self-stake** बढ़ाने तक अतिरिक्त **GRT** का उपयोग नहीं कर पाएगा। -The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. +एक Indexer को मिलने वाले पुरस्कारों की मात्रा विभिन्न कारकों पर निर्भर कर सकती है, जैसे कि Indexer की स्वयं की हिस्सेदारी, स्वीकृत डेलिगेशन, सेवा की गुणवत्ता, और कई अन्य कारक। -## Token Supply: Burning & Issuance +## टोकन आपूर्ति: जलाना और जारी करना -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+**प्रारंभिक टोकन आपूर्ति** 10 बिलियन **GRT** है, और **Indexers** को **Subgraphs** पर **stake allocate** करने के लिए प्रति वर्ष **3%** नई **GRT issuance** का लक्ष्य रखा गया है। इसका मतलब है कि हर साल **Indexers** के योगदान के लिए नए टोकन जारी किए जाएंगे, जिससे कुल **GRT आपूर्ति** 3% बढ़ेगी। -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph में नए टोकन **issuance** को संतुलित करने के लिए कई **burning mechanisms** शामिल किए गए हैं। सालाना लगभग **1% GRT supply** विभिन्न नेटवर्क गतिविधियों के माध्यम से **burn** हो जाती है, और यह संख्या नेटवर्क की वृद्धि के साथ बढ़ रही है। ये **burning mechanisms** शामिल हैं: - **0.5% Delegation Tax**: जब कोई **Delegator** किसी **Indexer** को **GRT** डेलीगेट करता है। -![Total burned GRT](/img/total-burned-grt.jpeg) +- **1% Curation Tax**: जब **Curators** किसी **Subgraph** पर **GRT signal** करते हैं। +- **1% Query Fees Burn**: जब **ब्लॉकचेन डेटा** के लिए **queries** की जाती हैं। -In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. 
+![कुल जले हुए GRT](/img/total-burned-grt.jpeg) -## Improving the Protocol +इन नियमित रूप से होने वाली टोकन बर्निंग गतिविधियों के अलावा, GRT टोकन में एक slashing mechanism भी शामिल है, जो Indexer द्वारा किए गए दुर्भावनापूर्ण या गैर-जिम्मेदाराना व्यवहार को दंडित करने के लिए लागू किया जाता है। यदि किसी Indexer को slashed किया जाता है, तो उस epoch के लिए उसके indexing rewards का 50% burn कर दिया जाता है (जबकि बाकी आधा हिस्सा fisherman को जाता है), और उसकी self-stake का 2.5% slashed कर दिया जाता है, जिसमें से आधा हिस्सा burn कर दिया जाता है। यह सुनिश्चित करने में मदद करता है कि Indexer नेटवर्क के सर्वोत्तम हितों में कार्य करें और इसकी security और stability में योगदान दें। -The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). +## प्रोटोकॉल में सुधार करना + +The Graph Network निरंतर विकसित हो रहा है और प्रोटोकॉल की आर्थिक संरचना में सुधार किए जा रहे हैं ताकि सभी नेटवर्क प्रतिभागियों को सर्वोत्तम अनुभव मिल सके। The Graph Council प्रोटोकॉल परिवर्तनों की निगरानी करता है और समुदाय के सदस्यों को भाग लेने के लिए प्रोत्साहित किया जाता है। प्रोटोकॉल सुधारों में शामिल होने के लिए [The Graph Forum](https://forum.thegraph.com/) पर जाएं। diff --git a/website/src/pages/hi/sps/introduction.mdx b/website/src/pages/hi/sps/introduction.mdx index 30d84b5cfb7f..56ee02d1d54a 100644 --- a/website/src/pages/hi/sps/introduction.mdx +++ b/website/src/pages/hi/sps/introduction.mdx @@ -3,28 +3,29 @@ title: सबस्ट्रीम-पावर्ड सबग्राफ क sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+अपने सबग्राफ की कार्यक्षमता और स्केलेबिलिटी को बढ़ाएं [सबस्ट्रीम](/substreams/introduction/) का उपयोग करके, जो प्री-इंडेक्स्ड ब्लॉकचेन डेटा को स्ट्रीम करता है। -## अवलोकन +## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +सबस्ट्रीम पैकेज (.spkg) को डेटा स्रोत के रूप में उपयोग करें ताकि आपका सबग्राफ पहले से इंडेक्स किए गए ब्लॉकचेन डेटा की स्ट्रीम तक पहुंच प्राप्त कर सके। यह बड़े या जटिल ब्लॉकचेन नेटवर्क के साथ अधिक कुशल और स्केलेबल डेटा हैंडलिंग को सक्षम बनाता है। ### विशिष्टताएँ इस तकनीक को सक्षम करने के दो तरीके हैं: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **सबस्ट्रीम [triggers](/sps/triggers/) का उपयोग करना**: किसी भी सबस्ट्रीम मॉड्यूल से उपभोग करने के लिए, Protobuf मॉडल को एक सबग्राफ हैंडलर के माध्यम से आयात करें और अपनी पूरी लॉजिक को एक सबग्राफ में स्थानांतरित करें। इस विधि से Subgraph में सीधे सबग्राफ entities बनाई जाती हैं। -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **[Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out) का उपयोग करके**: अधिक लॉजिक को सबस्ट्रीम में लिखकर, आप सीधे मॉड्यूल के आउटपुट को [`ग्राफ-नोड`](/indexing/tooling/graph-node/) में कंज्यूम कर सकते हैं। graph-node में, आप सबस्ट्रीम डेटा का उपयोग करके अपनी सबग्राफ entities बना सकते हैं। -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +आप अपना लॉजिक सबग्राफ या सबस्ट्रीम में कहीं भी रख सकते हैं। हालाँकि, अपने डेटा की आवश्यकताओं के अनुसार निर्णय लें, क्योंकि सबस्ट्रीम एक समानांतर मॉडल का उपयोग करता है, और ट्रिगर `graph node` में रैखिक रूप से उपभोग किए जाते हैं। ### Additional Resources -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +इन लिंक पर जाएं ताकि आप कोड-जनरेशन टूलिंग का उपयोग करके अपना पहला एंड-टू-एंड सबस्ट्रीम प्रोजेक्ट तेजी से बना सकें: - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/hi/sps/sps-faq.mdx b/website/src/pages/hi/sps/sps-faq.mdx index 53a8f393a5bc..3c77c89cebb0 100644 --- a/website/src/pages/hi/sps/sps-faq.mdx +++ b/website/src/pages/hi/sps/sps-faq.mdx @@ -3,39 +3,39 @@ title: सबस्ट्रीम्स-पावर्ड सबग्रा sidebarTitle: FAQ --- -## What are Substreams? +## सबस्ट्रीम क्या होते हैं? -Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications. 
+सबस्ट्रीम एक अत्यधिक शक्तिशाली प्रोसेसिंग इंजन है जो ब्लॉकचेन डेटा की समृद्ध स्ट्रीम्स को उपभोग करने में सक्षम है। यह आपको ब्लॉकचेन डेटा को परिष्कृत और आकार देने की अनुमति देता है ताकि एंड-यूजर applications द्वारा इसे तेजी और सहजता से पचाया जा सके। -Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere. +यह एक ब्लॉकचेन-अज्ञेयवादी, समानांतरित, और स्ट्रीमिंग-प्रथम इंजन है, जो ब्लॉकचेन डेटा ट्रांसफॉर्मेशन लेयर के रूप में कार्य करता है। यह [Firehose](https://firehose.streamingfast.io/) द्वारा संचालित है और डेवलपर्स को Rust मॉड्यूल लिखने, कम्युनिटी मॉड्यूल्स पर निर्माण करने, बेहद उच्च-प्रदर्शन इंडेक्सिंग प्रदान करने, और अपना डेटा कहीं भी [sink](/substreams/developing/sinks/) करने में सक्षम बनाता है। -Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. +सबस्ट्रीम को [StreamingFast](https://www.streamingfast.io/) द्वारा विकसित किया गया है। सबस्ट्रीम के बारे में अधिक जानने के लिए [सबस्ट्रीम Documentation](/substreams/introduction/) पर जाएं। -## What are Substreams-powered subgraphs? +## सबस्ट्रीम-संचालित सबग्राफ क्या हैं? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. 
+[सबस्ट्रीम-powered सबग्राफ](/sps/introduction/) सबस्ट्रीम की शक्ति को सबग्राफ की queryability के साथ जोड़ते हैं। जब किसी सबस्ट्रीम-powered सबग्राफ को प्रकाशित किया जाता है, तो सबस्ट्रीम परिवर्तनों द्वारा निर्मित डेटा [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) उत्पन्न कर सकता है, जो सबग्राफ entities के साथ संगत होते हैं। -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +यदि आप पहले से ही सबग्राफ विकास से परिचित हैं, तो ध्यान दें कि सबस्ट्रीम-संचालित सबग्राफ को उसी तरह से क्वेरी किया जा सकता है जैसे कि इसे AssemblyScript ट्रांसफॉर्मेशन लेयर द्वारा उत्पन्न किया गया हो। यह सबग्राफ के सभी लाभ प्रदान करता है, जिसमें एक डायनेमिक और लचीला GraphQL API शामिल है। -## How are Substreams-powered subgraphs different from subgraphs? +## सबस्ट्रीम-powered सबग्राफ सामान्य सबग्राफ से कैसे भिन्न हैं?
सबग्राफ डेटा सोर्सेस से बने होते हैं, जो ऑनचेन आयोजन को निर्धारित करते हैं और उन आयोजन को Assemblyscript में लिखे handler के माध्यम से कैसे ट्रांसफॉर्म करना चाहिए। ये आयोजन क्रमवार तरीके से प्रोसेस किए जाते हैं, जिस क्रम में ये आयोजन ऑनचेन होते हैं। -सबस्ट्रीम-सक्षम सबग्राफ के पास एक ही datasource होता है जो सबस्ट्रीम पैकेज को संदर्भित करता है, जिसे ग्राफ नोड द्वारा प्रोसेस किया जाता है। सबस्ट्रीम को पारंपरिक सबग्राफ की तुलना में अतिरिक्त सटीक ऑनचेन डेटा तक पहुंच होती है, और यह बड़े पैमाने पर समानांतर प्रोसेसिंग का लाभ भी ले सकते हैं, जिससे प्रोसेसिंग समय काफी तेज हो सकता है। +इसके विपरीत, सबस्ट्रीम-powered सबग्राफ के पास एक ही datasource होता है जो एक सबस्ट्रीम package को संदर्भित करता है, जिसे ग्राफ नोड द्वारा प्रोसेस किया जाता है। सबस्ट्रीम को पारंपरिक सबग्राफ की तुलना में अतिरिक्त विस्तृत ऑनचेन डेटा तक पहुंच प्राप्त होती है, और यह बड़े पैमाने पर समानांतर प्रोसेसिंग से भी लाभ उठा सकते हैं, जिससे प्रोसेसिंग समय काफी तेज़ हो सकता है। -## What are the benefits of using Substreams-powered subgraphs?
+सबस्ट्रीम-powered सबग्राफ सभी लाभों को एक साथ लाते हैं जो सबस्ट्रीम और सबग्राफ प्रदान करते हैं। वे अधिक संयोजनशीलता और उच्च-प्रदर्शन इंडेक्सिंग को The Graph में लाते हैं। वे नए डेटा उपयोग के मामलों को भी सक्षम बनाते हैं; उदाहरण के लिए, एक बार जब आपने अपना सबस्ट्रीम-powered सबग्राफ बना लिया, तो आप अपने [सबस्ट्रीम modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) को पुन: उपयोग कर सकते हैं ताकि विभिन्न [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) जैसे कि PostgreSQL, MongoDB, और Kafka में आउटपुट किया जा सके। -## What are the benefits of Substreams? +## Substream के क्या benefit हैं? -There are many benefits to using Substreams, including: +Substream का उपयोग करने के कई benefit हैं, जिनमें: -- Composable: You can stack Substreams modules like LEGO blocks, and build upon community modules, further refining public data. +- Composable: आप Substreams modules को LEGO blocks की तरह stack कर सकते हैं, और community module पर निर्माण करके public data को अधिक refine कर सकते हैं। -- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). +- High-performance indexing: बड़े पैमाने पर parallel operation के विशाल संगठनों के माध्यम से कई गुना तेज़ सूचीकरण (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: अपना डेटा कहीं भी सिंक करें: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -49,48 +49,48 @@ Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a b Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose. -## What are the benefits of the Firehose? +## Firehose के क्या benefits हैं?
-There are many benefits to using Firehose, including: +Firehose का उपयोग करने के कई benefits हैं, जिनमें: -- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first. +- सबसे कम latency और कोई polling नहीं: streaming-first fashion में, Firehose nodes को पहले block data को push करने की दौड़ के लिए designed किया गया है। -- Prevents downtimes: Designed from the ground up for High Availability. +- Prevents downtimes: उच्च उपलब्धता के लिए मौलिक रूप से design किया गया है। -- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition. +- Never miss a beat: Firehose stream cursor को forks को handle करने और किसी भी स्थिति में जहां आप छोड़े थे वहां से जारी रखने के लिए design किया गया है। -- Richest data model:  Best data model that includes the balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more. +- Richest data model: Best data model जिसमें balance changes, the full call tree, आंतरिक लेनदेन, logs, storage changes, gas costs और बहुत कुछ शामिल है। -- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. +- Leverages flat files: blockchain data को flat files में निकाला जाता है, जो सबसे सस्ते और सबसे अधिक अनुकूल गणना संसाधन होता है। -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## डेवलपर्स सबस्ट्रीम-powered सबग्राफ और सबस्ट्रीम के बारे में अधिक जानकारी कहाँ प्राप्त कर सकते हैं? [सबस्ट्रीम documentation](/substreams/introduction/) आपको सबस्ट्रीम modules बनाने का तरीका सिखाएगी। -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+The [सबस्ट्रीम-powered सबग्राफ documentation](/sps/introduction/) आपको यह दिखाएगी कि उन्हें The Graph पर परिनियोजन के लिए कैसे पैकेज किया जाए। [नवीनतम Substreams Codegen टूल](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) आपको बिना किसी कोड के एक Substreams प्रोजेक्ट शुरू करने की अनुमति देगा। -## What is the role of Rust modules in Substreams? +## Substreams में Rust modules की क्या भूमिका है? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust मॉड्यूल्स सबग्राफ में AssemblyScript मैपर्स के समकक्ष होते हैं। इन्हें समान तरीके से WASM में संकलित किया जाता है, लेकिन प्रोग्रामिंग मॉडल समानांतर निष्पादन की अनुमति देता है। ये उस प्रकार के रूपांतरण और समुच्चयन को परिभाषित करते हैं, जिन्हें आप कच्चे ब्लॉकचेन डेटा पर लागू करना चाहते हैं। See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. -## What makes Substreams composable? +## Substreams को composable क्या बनाता है? When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers.
+उदाहरण के लिए, एलिस एक DEX प्राइस मॉड्यूल बना सकती है, बॉब इसका उपयोग करके अपने इच्छित कुछ टोकनों के लिए एक वॉल्यूम एग्रीगेटर बना सकता है, और लिसा चार अलग-अलग DEX प्राइस मॉड्यूल को जोड़कर एक प्राइस ओरैकल बना सकती है। एक ही सबस्ट्रीम अनुरोध इन सभी व्यक्तिगत मॉड्यूल्स को एक साथ पैकेज करेगा, उन्हें आपस में लिंक करेगा, और एक अधिक परिष्कृत डेटा स्ट्रीम प्रदान करेगा। उस स्ट्रीम का उपयोग फिर एक सबग्राफ को पॉप्युलेट करने के लिए किया जा सकता है और उपभोक्ताओं द्वारा क्वेरी किया जा सकता है। -## How can you build and deploy a Substreams-powered Subgraph? +## आप कैसे एक Substreams-powered Subgraph बना सकते हैं और deploy कर सकते हैं? -After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +सबस्ट्रीम-समर्थित सबग्राफ को [परिभाषित](/sps/introduction/) करने के बाद, आप इसे Graph CLI का उपयोग करके [सबग्राफ Studio](https://thegraph.com/studio/) में डिप्लॉय कर सकते हैं। -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## आप सबस्ट्रीम और सबस्ट्रीम-powered सबग्राफ के उदाहरण कहाँ पा सकते हैं? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +आप [इस Github रिपॉज़िटरी](https://github.com/pinax-network/awesome-substreams) पर जाकर सबस्ट्रीम और सबस्ट्रीम-powered सबग्राफ के उदाहरण देख सकते हैं। -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## सबस्ट्रीम और सबस्ट्रीम-powered सबग्राफ का The Graph Network के लिए क्या अर्थ है? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.
diff --git a/website/src/pages/hi/sps/triggers.mdx b/website/src/pages/hi/sps/triggers.mdx index 258b6e532745..196694448b05 100644 --- a/website/src/pages/hi/sps/triggers.mdx +++ b/website/src/pages/hi/sps/triggers.mdx @@ -2,17 +2,17 @@ title: सबस्ट्रीम्स ट्रिगर्स --- -Use Custom Triggers and enable the full use GraphQL. +कस्टम ट्रिगर्स का उपयोग करें और पूर्ण रूप से GraphQL को सक्षम करें। -## अवलोकन +## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +कस्टम ट्रिगर्स आपको डेटा सीधे आपके सबग्राफ मैपिंग फ़ाइल और entities में भेजने की अनुमति देते हैं, जो तालिकाओं और फ़ील्ड्स के समान होते हैं। इससे आप पूरी तरह से GraphQL लेयर का उपयोग कर सकते हैं। -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. 
+आपके सबस्ट्रीम मॉड्यूल द्वारा उत्पन्न Protobuf परिभाषाओं को आयात करके, आप इस डेटा को अपने सबग्राफ के handler में प्राप्त और प्रोसेस कर सकते हैं। यह सबग्राफ ढांचे के भीतर कुशल और सुव्यवस्थित डेटा प्रबंधन सुनिश्चित करता है। -### Defining `handleTransactions` +### `handleTransactions` को परिभाषित करना -निम्नलिखित कोड यह दर्शाता है कि कैसे एक handleTransactions फ़ंक्शन को एक subgraph हैंडलर में परिभाषित किया जा सकता है। यह फ़ंक्शन कच्चे Substreams बाइट्स को एक पैरामीटर के रूप में प्राप्त करता है और उन्हें एक Transactions ऑब्जेक्ट में डिकोड करता है। प्रत्येक लेनदेन के लिए, एक नई subgraph एंटिटी बनाई जाती है। +यह कोड एक सबग्राफ handler में `handleTransactions` फ़ंक्शन को परिभाषित करने का तरीका दर्शाता है। यह फ़ंक्शन कच्चे सबस्ट्रीम बाइट्स को पैरामीटर के रूप में प्राप्त करता है और उन्हें `Transactions` ऑब्जेक्ट में डिकोड करता है। प्रत्येक लेन-देन के लिए, एक नया सबग्राफ entity बनाया जाता है। ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -34,14 +34,14 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +यहाँ आप `mappings.ts` फ़ाइल में जो देख रहे हैं: 1. Substreams डेटा को जनरेट किए गए Transactions ऑब्जेक्ट में डिकोड किया जाता है, यह ऑब्जेक्ट किसी अन्य AssemblyScript ऑब्जेक्ट की तरह उपयोग किया जाता है। 2. लेनदेन पर लूप करना -3. प्रत्येक लेनदेन के लिए एक नया subgraph entity बनाएं +3. प्रत्येक लेन-देन के लिए एक नया सबग्राफ entity बनाया जाता है -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +एक ट्रिगर-आधारित सबग्राफ का विस्तृत उदाहरण देखने के लिए, [इस ट्यूटोरियल को देखें](/sps/tutorial/)। ### Additional Resources -To scaffold your first project in the Development Container, check out one of the [How-To Guide](/substreams/developing/dev-container/).
+अपने पहले प्रोजेक्ट को डेवलपमेंट कंटेनर में स्कैफोल्ड करने के लिए, इनमें से किसी एक [How-To Guide](/substreams/developing/dev-container/) को देखें। diff --git a/website/src/pages/hi/sps/tutorial.mdx b/website/src/pages/hi/sps/tutorial.mdx index 86326b903aad..e7dab45640d3 100644 --- a/website/src/pages/hi/sps/tutorial.mdx +++ b/website/src/pages/hi/sps/tutorial.mdx @@ -3,13 +3,13 @@ title: 'ट्यूटोरियल: Solana पर एक Substreams-शक sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## शुरू करिये For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) -### Prerequisites +### आवश्यक शर्तें 'शुरू करने से पहले, सुनिश्चित करें कि:' @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### चरण 2: Subgraph Manifest उत्पन्न करें -एक बार जब प्रोजेक्ट इनिशियलाइज़ हो जाए, Dev Container में निम्नलिखित कमांड चलाकर subgraph मैनिफेस्ट जेनरेट करें: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash 'सबस्ट्रीम्स कोडजेन' subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### चरण 3: schema.graphql में संस्थाएँ परिभाषित करें -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ type MyTransfer @entity { With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract to Subgraph entities the non-derived transfers associated to the Orca account id: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ AssemblyScript में Protobuf ऑब्जेक्ट बनाने क npm run protogen ``` -यह कमांड Protobuf परिभाषाओं को AssemblyScript में परिवर्तित करता है, जिससे आप उन्हें हैंडलर में उपयोग कर सकते हैं। +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### निष्कर्ष -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial diff --git a/website/src/pages/hi/subgraphs/_meta-titles.json b/website/src/pages/hi/subgraphs/_meta-titles.json index 3fd405eed29a..87cd473806ba 100644 --- a/website/src/pages/hi/subgraphs/_meta-titles.json +++ b/website/src/pages/hi/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", + "querying": "queries", + "developing": "विकसित करना", "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/hi/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/hi/subgraphs/best-practices/avoid-eth-calls.mdx index cd5921ae8354..e08107689925 100644 --- a/website/src/pages/hi/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: सबग्राफ सर्वोत्तम प्रथा 4 - eth_calls से बचकर अनुक्रमण गति में सुधार करें -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: eth_calls से बचाव --- ## TLDR -eth_calls वे कॉल हैं जो एक Subgraph से Ethereum नोड पर किए जा सकते हैं। ये कॉल डेटा लौटाने में महत्वपूर्ण समय लेते हैं, जिससे indexing धीमी हो जाती है। यदि संभव हो, तो स्मार्ट कॉन्ट्रैक्ट्स को इस तरह से डिजाइन करें कि वे सभी आवश्यक डेटा उत्पन्न करें ताकि आपको eth_calls का उपयोग न करना पड़े। +eth_calls वे कॉल हैं जो एक Subgraph से Ethereum नोड पर किए जा सकते हैं। ये कॉल डेटा लौटाने में महत्वपूर्ण समय लेते हैं, जिससे indexing धीमी हो जाती है। यदि संभव हो, तो smart contract को इस तरह से डिजाइन करें कि वे सभी आवश्यक डेटा उत्पन्न करें ताकि आपको eth_calls का उपयोग न करना पड़े। ## Eth_calls से बचना एक सर्वोत्तम अभ्यास क्यों है -Subgraphs को स्मार्ट कॉन्ट्रैक्ट्स से निकले हुए इवेंट डेटा को इंडेक्स करने के लिए ऑप्टिमाइज़ किया गया है। एक subgraph ‘eth_call’ से आने वाले डेटा को भी इंडेक्स कर सकता है, लेकिन इससे subgraph इंडेक्सिंग काफी धीमी हो सकती है क्योंकि ‘eth_calls’ के लिए स्मार्ट कॉन्ट्रैक्ट्स को एक्सटर्नल कॉल्स करने की आवश्यकता होती है। इन कॉल्स की प्रतिक्रिया subgraph
पर निर्भर नहीं करती, बल्कि उस Ethereum नोड की कनेक्टिविटी और प्रतिक्रिया पर निर्भर करती है, जिसे क्वेरी किया जा रहा है। हमारे subgraphs में ‘eth_calls’ को कम करके या पूरी तरह से समाप्त करके, हम अपने इंडेक्सिंग स्पीड में उल्लेखनीय सुधार कर सकते हैं। +सबग्राफ स्मार्ट contract द्वारा उत्सर्जित इवेंट डेटा को इंडेक्स करने के लिए ऑप्टिमाइज़ किए गए हैं। एक सबग्राफ `eth_call` से आने वाले डेटा को भी इंडेक्स कर सकता है, लेकिन यह सबग्राफ indexing को काफी धीमा कर सकता है क्योंकि eth_calls के लिए स्मार्ट कॉन्ट्रैक्ट्स को एक्सटर्नल कॉल करने की आवश्यकता होती है। इन कॉल्स की प्रतिक्रियाशीलता सबग्राफ पर नहीं, बल्कि उस Ethereum नोड की कनेक्टिविटी और प्रतिक्रियाशीलता पर निर्भर करती है, जिससे क्वेरी की जा रही है। यदि हम अपने सबग्राफ में `eth_calls` को कम या समाप्त कर देते हैं, तो हम अपनी indexing स्पीड को काफी हद तक सुधार सकते हैं। ### एक eth_call कैसा दिखता है? -eth_calls अक्सर तब आवश्यक होते हैं जब subgraph के लिए आवश्यक डेटा इमिटेड इवेंट्स के माध्यम से उपलब्ध नहीं होता है। उदाहरण के लिए, एक ऐसा परिदृश्य मानें जहां एक subgraph को यह पहचानने की आवश्यकता है कि क्या ERC20 टोकन एक विशेष पूल का हिस्सा हैं, लेकिन कॉन्ट्रैक्ट केवल एक बुनियादी Transfer इवेंट इमिट करता है और वह इवेंट इमिट नहीं करता जिसमें हमें आवश्यक डेटा हो: +`eth_calls` अक्सर आवश्यक होते हैं जब किसी सबग्राफ के लिए आवश्यक डेटा उत्सर्जित घटनाओं के माध्यम से उपलब्ध नहीं होता है। उदाहरण के लिए, एक स्थिति पर विचार करें जहां एक सबग्राफ को यह पहचानने की आवश्यकता होती है कि कोई ERC20 टोकन किसी विशेष पूल का हिस्सा है या नहीं, लेकिन अनुबंध केवल एक बुनियादी `Transfer` आयोजन उत्सर्जित करता है और वह घटना उत्सर्जित नहीं करता है जिसमें हमारे लिए आवश्यक डेटा हो: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -यह कार्यात्मक है, हालांकि यह आदर्श नहीं है क्योंकि यह हमारे subgraph की indexing को धीमा कर देता है। +यह कार्यशील है, हालांकि आदर्श नहीं, क्योंकि यह हमारे सबग्राफ की indexing को धीमा कर देता है। ## Eth_calls को कैसे समाप्त करें @@ -54,7
+54,7 @@ export function handleTransfer(event: Transfer): void { event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -इस अपडेट के साथ, subgraph आवश्यक डेटा को बिना बाहरी कॉल के सीधे अनुक्रमित कर सकता है: +इस अपडेट के साथ, सबग्राफ बाहरी कॉल किए बिना सीधे आवश्यक डेटा को इंडेक्स कर सकता है। ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,22 +96,22 @@ export function handleTransferWithPool(event: TransferWithPool): void { Handler स्वयं इस eth_call के परिणाम तक ठीक उसी तरह पहुंचता है जैसे पिछले अनुभाग में, अनुबंध से बाइंडिंग करके और कॉल करके। graph-node घोषित eth_calls के परिणामों को मेमोरी में कैश करता है और हैंडलर से कॉल इस मेमोरी कैश से परिणाम प्राप्त करेगा, बजाय इसके कि एक वास्तविक RPC कॉल की जाए। -नोट: घोषित eth_calls केवल उन subgraphs में किए जा सकते हैं जिनका specVersion >= 1.2.0 है। +घोषित eth_calls केवल उन सबग्राफ में किए जा सकते हैं जिनका specVersion >= 1.2.0 है। ## निष्कर्ष -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +आप अपने सबग्राफ में `eth_calls` को कम या समाप्त करके Indexing प्रदर्शन को काफी हद तक सुधार सकते हैं। -## Subgraph Best Practices 1-6 +## सबग्राफ सर्वोत्तम प्रथाएँ 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/hi/subgraphs/best-practices/derivedfrom.mdx index 6711d2943209..b09a98137eba 100644 --- a/website/src/pages/hi/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph सर्वोत्तम प्रथा 2 - @derivedFrom का उपयोग करके अनुक्रमण और क्वेरी की प्रतिक्रियाशीलता में सुधार करें। -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -आपके स्कीमा में ऐरे हजारों प्रविष्टियों से बढ़ने पर एक सबग्राफ के प्रदर्शन को वास्तव में धीमा कर सकते हैं। यदि संभव हो, तो @derivedFrom निर्देशिका का उपयोग करना चाहिए जब आप ऐरे का उपयोग कर रहे हों, क्योंकि यह बड़े ऐरे के निर्माण को रोकता है, हैंडलरों को सरल बनाता है और व्यक्तिगत संस्थाओं के आकार को कम करता है, जिससे अनुक्रमण गति और प्रश्न प्रदर्शन में महत्वपूर्ण सुधार होता है। +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. 
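In schema terms, the directive sits on the virtual "one" side of the relationship, while the stored foreign key lives on the "many" side. A minimal sketch, using the same hypothetical Post/Comment model that this guide's use-case section builds on:

```graphql
type Post @entity {
  id: Bytes!
  # Virtual field: never stored on Post rows, resolved at query time
  comments: [Comment!]! @derivedFrom(field: "post")
}

type Comment @entity {
  id: Bytes!
  # The stored side of the relationship; @derivedFrom points at this field
  post: Post!
}
```

Because `Post` never stores the array, mappings only ever write `Comment.post`, and no entity row can grow without bound as comments accumulate.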
## @derivedFrom निर्देशिका का उपयोग कैसे करें @@ -15,7 +15,7 @@ sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' comments: [Comment!]! @derivedFrom(field: "post") ``` -@derivedFrom कुशल एक से कई संबंध बनाता है, जिससे एक इकाई को संबंधित इकाई में एक फ़ील्ड के आधार पर कई संबंधित इकाइयों के साथ गतिशील रूप से संबंध बनाने की अनुमति मिलती है। यह दृष्टिकोण रिश्ते के दोनों पक्षों को डुप्लिकेट डेटा संग्रहीत करने की आवश्यकता को समाप्त करता है, जिससे subgraph अधिक कुशल बन जाता है। +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### @derivedFrom के लिए उदाहरण उपयोग मामला @@ -60,30 +60,30 @@ type Comment @entity { बस @derivedFrom निर्देश जोड़ने से, यह स्कीमा केवल संबंध के “Comments” पक्ष पर “Comments” को संग्रहीत करेगा और संबंध के “Post” पक्ष पर नहीं। ऐरे व्यक्तिगत पंक्तियों में संग्रहीत होते हैं, जिससे उन्हें काफी विस्तार करने की अनुमति मिलती है। यदि उनका विकास अनियंत्रित है, तो इससे विशेष रूप से बड़े आकार हो सकते हैं। -यह न केवल हमारे subgraph को अधिक प्रभावी बनाएगा, बल्कि यह तीन विशेषताओं को भी अनलॉक करेगा: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. हम Post को क्वेरी कर सकते हैं और इसके सभी कमेंट्स देख सकते हैं। 2. हम एक रिवर्स लुकअप कर सकते हैं और किसी भी Comment को क्वेरी कर सकते हैं और देख सकते हैं कि यह किस पोस्ट से आया है। -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3.
We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## निष्कर्ष -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). -## Subgraph Best Practices 1-6 +## सबग्राफ सर्वोत्तम प्रथाएँ 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/hi/subgraphs/best-practices/grafting-hotfix.mdx index cc3c759ebdea..bfab7b096073 100644 --- a/website/src/pages/hi/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. -### अवलोकन +### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 
+ - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **डेटा प्रिजर्वेशन** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. 
+ - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 
+Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,31 +157,31 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## निष्कर्ष -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. -## Subgraph Best Practices 1-6 +## सबग्राफ सर्वोत्तम प्रथाएँ 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. 
[समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/hi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 5cccca23acb2..25502ea74c0e 100644 --- a/website/src/pages/hi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: सबग्राफ सर्वश्रेष्ठ प्रथा 3 - अपरिवर्तनीय संस्थाओं और बाइट्स को आईडी के रूप में उपयोग करके अनुक्रमण और क्वेरी प्रदर्शन में सुधार करें। -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ type Transfer @entity(immutable: true) { ### IDs के रूप में Bytes का उपयोग न करने के कारण 1. यदि एंटिटी IDs मानव-पठनीय होने चाहिए, जैसे कि ऑटो-इंक्रीमेंटेड न्यूमेरिकल IDs या पठनीय स्ट्रिंग्स, तो IDs के लिए Bytes का उपयोग नहीं किया जाना चाहिए। -2. यदि किसी subgraph के डेटा को दूसरे डेटा मॉडल के साथ एकीकृत किया जा रहा है जो IDs के रूप में Bytes का उपयोग नहीं करता है, तो Bytes के रूप में IDs का उपयोग नहीं किया जाना चाहिए। +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. 
Indexing और क्वेरी प्रदर्शन में सुधार की आवश्यकता नहीं है। ### Bytes के रूप में IDs के साथ जोड़ना -बहुत से subgraphs में एक ID में दो प्रॉपर्टीज को जोड़ने के लिए स्ट्रिंग संयोजन का उपयोग करना एक सामान्य प्रथा है, जैसे कि event.transaction.hash.toHex() + "-" + event.logIndex.toString() का उपयोग करना। हालांकि, चूंकि यह एक स्ट्रिंग लौटाता है, यह subgraph इंडेक्सिंग और क्वेरी प्रदर्शन में महत्वपूर्ण रूप से बाधा डालता है। +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. इसके बजाय, हमें event properties को जोड़ने के लिए concatI32() method का उपयोग करना चाहिए। यह रणनीति एक Bytes ID उत्पन्न करती है जो बहुत अधिक performant होती है। @@ -172,20 +172,20 @@ Query: ## निष्कर्ष -Immutable Entities और Bytes को IDs के रूप में उपयोग करने से subgraph की दक्षता में उल्लेखनीय सुधार हुआ है। विशेष रूप से, परीक्षणों ने क्वेरी प्रदर्शन में 28% तक की वृद्धि और indexing स्पीड में 48% तक की तेजी को उजागर किया है। +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. इस ब्लॉग पोस्ट में, Edge & Node के सॉफ़्टवेयर इंजीनियर डेविड लुटरकोर्ट द्वारा Immutable Entities और Bytes को IDs के रूप में उपयोग करने के बारे में और अधिक पढ़ें: [दो सरल Subgraph प्रदर्शन सुधार।](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). -## Subgraph Best Practices 1-6 +## सबग्राफ सर्वोत्तम प्रथाएँ 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/) -2.
[Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. 
[त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/pruning.mdx b/website/src/pages/hi/subgraphs/best-practices/pruning.mdx index e566e35d240e..ffbb611c56d0 100644 --- a/website/src/pages/hi/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: सबग्राफ बेस्ट प्रैक्टिस 1 - सबग्राफ प्रूनिंग के साथ क्वेरी की गति में सुधार करें -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -Pruning(/developing/creating-a-subgraph/#prune), subgraph के डेटाबेस से दिए गए ब्लॉक तक की archival entities को हटाता है, और unused entities को subgraph के डेटाबेस से हटाने से subgraph की query performance में सुधार होगा, अक्सर काफी हद तक। indexerHints का उपयोग करना subgraph को prune करने का एक आसान तरीका है। +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## IndexerHints के साथ subgraph को prune करने का तरीका @@ -13,14 +13,14 @@ Manifest में एक section को indexerHints के नाम से indexerHints में तीन prune विकल्प होते हैं: -- prune: auto: आवश्यक न्यूनतम इतिहास को बनाए रखता है जैसा कि Indexer द्वारा निर्धारित किया गया है, जो क्वेरी प्रदर्शन को अनुकूलित करता है। यह सामान्यतः अनुशंसित सेटिंग है और यह सभी subgraphs के लिए डिफ़ॉल्ट है जो graph-cli >= 0.66.0 द्वारा बनाए गए हैं। +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. 
- `prune: `: ऐतिहासिक ब्लॉकों को बनाए रखने की संख्या पर एक कस्टम सीमा निर्धारित करता है। -- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. +- `prune: never`: ऐतिहासिक डेटा को कभी भी नहीं हटाया जाता; यह संपूर्ण इतिहास को बनाए रखता है और यदि `indexerHints` अनुभाग नहीं है तो यह डिफ़ॉल्ट होता है। यदि [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) आवश्यक हैं तो `prune: never` का चयन किया जाना चाहिए। -हम अपने 'subgraph' में indexerHints जोड़ सकते हैं हमारे subgraph.yaml को अपडेट करके: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -33,24 +33,24 @@ dataSources: ## महत्वपूर्ण विचार -- If [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries. Instead, prune using `indexerHints: prune: ` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data. 
+- यदि [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) की आवश्यकता हो और साथ ही pruning भी करनी हो, तो Time Travel Query की कार्यक्षमता बनाए रखने के लिए pruning को सटीक रूप से करना आवश्यक है। इसी कारण, आमतौर पर Time Travel Queries के साथ `indexerHints: prune: auto` का उपयोग करने की अनुशंसा नहीं की जाती है। इसके बजाय, `indexerHints: prune: ` का उपयोग करें ताकि उस ब्लॉक ऊँचाई तक सटीक रूप से pruning हो सके, जो Time Travel Queries के लिए आवश्यक ऐतिहासिक डेटा को सुरक्षित रखे, या फिर `prune: never` का उपयोग करें ताकि सभी डेटा सुरक्षित रहे। -- It is not possible to [graft](/subgraphs/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: ` that will accurately retain a set number of blocks (e.g., enough for six months). +- यह संभव नहीं है कि किसी ब्लॉक ऊंचाई पर [graft](/subgraphs/cookbook/grafting/) किया जाए जो कि हटा दिया गया हो। यदि grafting नियमित रूप से की जाती है और हटाने की आवश्यकता होती है, तो यह अनुशंसित है कि `indexerHints: prune: ` का उपयोग करें, जो सटीक रूप से एक निर्धारित संख्या में ब्लॉक बनाए रखेगा (उदाहरण के लिए, छह महीनों के लिए पर्याप्त)। ## निष्कर्ष -Pruning का उपयोग indexerHints से करना एक सर्वोत्तम प्रथा है subgraph विकास के लिए, जो महत्वपूर्ण क्वेरी प्रदर्शन सुधार प्रदान करता है। +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. -## Subgraph Best Practices 1-6 +## सबग्राफ सर्वोत्तम प्रथाएँ 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/) -3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/timeseries.mdx b/website/src/pages/hi/subgraphs/best-practices/timeseries.mdx index a0c4f65157ad..882adfe43ca1 100644 --- a/website/src/pages/hi/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/timeseries.mdx @@ -1,13 +1,13 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries और Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. -## अवलोकन +## Overview Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### आवश्यक शर्तें + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: @@ -51,7 +55,7 @@ A timeseries entity represents raw data points collected over time. It is define type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ An aggregation entity computes aggregated values from a timeseries source. It is type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -141,7 +145,7 @@ Supported aggregation functions: - sum - count - min - max - first - last @@ -172,24 +176,24 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### निष्कर्ष -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. -## Subgraph Best Practices 1-6 +## सबग्राफ सर्वोत्तम प्रथाएँ 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. 
[समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/billing.mdx b/website/src/pages/hi/subgraphs/billing.mdx index db7598ed5faf..66e6e015c57c 100644 --- a/website/src/pages/hi/subgraphs/billing.mdx +++ b/website/src/pages/hi/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: बिलिंग ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). 
+ ## Query Payments with credit card diff --git a/website/src/pages/hi/subgraphs/developing/_meta-titles.json b/website/src/pages/hi/subgraphs/developing/_meta-titles.json index 01a91b09ed77..bb1dffb7294d 100644 --- a/website/src/pages/hi/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/hi/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { - "creating": "Creating", - "deploying": "Deploying", - "publishing": "Publishing", - "managing": "Managing" + "creating": "बनाने", + "deploying": "परिनियोजित", + "publishing": "प्रकाशित करना", + "managing": "प्रबंध" } diff --git a/website/src/pages/hi/subgraphs/developing/creating/_meta-titles.json b/website/src/pages/hi/subgraphs/developing/creating/_meta-titles.json index 6106ac328dc1..553273beaf56 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/_meta-titles.json +++ b/website/src/pages/hi/subgraphs/developing/creating/_meta-titles.json @@ -1,3 +1,3 @@ { - "graph-ts": "AssemblyScript API" + "graph-ts": "असेंबलीस्क्रिप्ट एपीआई" } diff --git a/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx b/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx index ac869ec36e5b..22a80fd744e2 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx @@ -1,23 +1,23 @@ --- -title: Advanced Subgraph Features +title: उन्नत Subgraph विशेषताएँ --- -## अवलोकन +## Overview -Add and implement advanced subgraph features to enhanced your subgraph's built. 
+अपने Subgraph के निर्माण को उन्नत करने के लिए उन्नत सबग्राफ सुविधाएँ जोड़ें और लागू करें। -Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +`specVersion` `0.0.4` से शुरू होकर, सबग्राफ सुविधाओं को स्पष्ट रूप से `features` अनुभाग में शीर्ष स्तर पर घोषित किया जाना चाहिए, जो उनके `camelCase` नाम का उपयोग करके किया जाता है, जैसा कि नीचे दी गई तालिका में सूचीबद्ध है: -| Feature | Name | -| ---------------------------------------------------- | ---------------- | -| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | -| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | -| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | +| विशेषता | नाम | +| ------------------------------------------------- | ---------------- | +| [गैर-घातक त्रुटियाँ](#non-fatal-errors) | `nonFatalErrors` | +| [पूर्ण-पाठ खोज](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +उदाहरण के लिए, यदि कोई सबग्राफ **Full-Text Search** और **Non-fatal Errors** सुविधाओं का उपयोग करता है, तो मैनिफेस्ट में `features` फ़ील्ड इस प्रकार होनी चाहिए: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,17 +25,17 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+> कोई फ़ीचर घोषित किए बिना उसका उपयोग करने से **मान्यकरण त्रुटि** होगी जब Subgraph डिप्लॉय किया जाएगा, लेकिन यदि कोई फ़ीचर घोषित किया जाता है लेकिन उपयोग नहीं किया जाता, तो कोई त्रुटि नहीं होगी। ## Timeseries और Aggregations -Prerequisites: +पूर्व आवश्यकताएँ: -- Subgraph specVersion must be ≥1.1.0. +- सबग्राफ का specVersion ≥1.1.0 होना चाहिए। -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries और aggregations आपके Subgraph को दैनिक औसत मूल्य, प्रति घंटे कुल ट्रांसफर और अन्य आँकड़े ट्रैक करने में सक्षम बनाते हैं। -This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +यह सुविधा दो नए प्रकार की सबग्राफ entity पेश करती है। Timeseries entities समय मुहर (timestamps) के साथ डेटा पॉइंट्स रिकॉर्ड करती हैं। Aggregation entities पहले से घोषित गणनाएँ करती हैं, जो Timeseries डेटा पॉइंट्स पर प्रति घंटे या दैनिक आधार पर की जाती हैं, फिर परिणामों को आसान पहुंच के लिए GraphQL के माध्यम से संग्रहीत किया जाता है। ### उदाहरण स्कीमा @@ -53,33 +53,33 @@ type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { } ``` -### How to Define Timeseries and Aggregations +### टाइमसीरीज़ और एग्रीगेशन को कैसे परिभाषित करें -Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must: +टाइमसीरीज़ entities GraphQL स्कीमा में `@entity(timeseries: true)` के साथ परिभाषित की जाती हैं। हर टाइमसीरीज़ entities को अवश्य: -- have a unique ID of the int8 type -- have a timestamp of the Timestamp type -- include data that will be used for calculation by aggregation entities. 
+- एक अद्वितीय आईडी हो जो int8 प्रकार की हो। +- टाइमस्टैम्प प्रकार का एक टाइमस्टैम्प रखें। +- गणना के लिए अभिग्रहण entities द्वारा उपयोग किए जाने वाले डेटा को शामिल करें। -These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities. +इन टाइमसीरीज़ entities को नियमित ट्रिगर handler में सेव किया जा सकता है और ये एग्रीगेशन entities के लिए "कच्चे डेटा" के रूप में कार्य करती हैं। -Aggregation entities are defined with `@aggregation` in the GraphQL schema. Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). +एग्रीगेशन entities को GraphQL schema में `@aggregation` के साथ परिभाषित किया जाता है। प्रत्येक aggregation entity उस साधन को परिभाषित करती है जिससे वह डेटा एकत्र करेगी (जो कि एक timeseries entity होनी चाहिए), अंतराल सेट करती है (जैसे, घंटे, दिन), और उस aggregation function को निर्दिष्ट करती है जिसका वह उपयोग करेगी (जैसे, sum, count, min, max, first, last)। -Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. +एग्रीगेशन entities निर्दिष्ट साधन के आधार पर आवश्यक अंतराल के अंत में स्वचालित रूप से गणना की जाती हैं। #### उपलब्ध Aggregation अंतराल -- `hour`: sets the timeseries period every hour, on the hour. -- `day`: sets the timeseries period every day, starting and ending at 00:00. +- `hour`: हर घंटे, ठीक घंटे पर, टाइमसीरीज़ अवधि सेट करता है। +- `day`: टाइमसीरीज़ अवधि को हर दिन सेट करता है, जो 00:00 पर शुरू और समाप्त होती है। #### उपलब्ध Aggregation फ़ंक्शन -- `sum`: Total of all values. -- `count`: Number of values. -- `min`: Minimum value. -- `max`: Maximum value. -- `first`: First value in the period. -- `last`: Last value in the period. 
+- `sum`: सभी मानों का कुल योग। +- `count`: मानों की संख्या। +- `min`: न्यूनतम मान। +- `max`: अधिकतम मान। +- `first`: अवधि में पहला मान। +- `last`: अवधि में अंतिम मान। #### उदाहरण Aggregations queries @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## गैर-घातक त्रुटियाँ -पहले से सिंक किए गए सबग्राफ पर इंडेक्सिंग त्रुटियां, डिफ़ॉल्ट रूप से, सबग्राफ को विफल कर देंगी और सिंक करना बंद कर देंगी। सबग्राफ को वैकल्पिक रूप से त्रुटियों की उपस्थिति में समन्वयन जारी रखने के लिए कॉन्फ़िगर किया जा सकता है, हैंडलर द्वारा किए गए परिवर्तनों को अनदेखा करके त्रुटि उत्पन्न हुई। यह सबग्राफ लेखकों को अपने सबग्राफ को ठीक करने का समय देता है, जबकि नवीनतम ब्लॉक के विरुद्ध प्रश्नों को जारी रखा जाता है, हालांकि त्रुटि के कारण बग के कारण परिणाम असंगत हो सकते हैं। ध्यान दें कि कुछ त्रुटियाँ अभी भी हमेशा घातक होती हैं। गैर-घातक होने के लिए, त्रुटि नियतात्मक होने के लिए जानी जानी चाहिए। +indexing त्रुटियाँ, जो पहले से सिंक हो चुके सबग्राफ पर होती हैं, डिफ़ॉल्ट रूप से सबग्राफ को विफल कर देंगी और सिंकिंग रोक देंगी। वैकल्पिक रूप से, सबग्राफ को इस तरह कॉन्फ़िगर किया जा सकता है कि वे त्रुटियों की उपस्थिति में भी सिंकिंग जारी रखें, उन परिवर्तनों को अनदेखा करके जो उस handler द्वारा किए गए थे जिसने त्रुटि उत्पन्न की। यह सबग्राफ लेखकों को अपने सबग्राफ को सही करने का समय देता है, जबकि नवीनतम ब्लॉक के विरुद्ध क्वेरीज़ दी जाती रहती हैं, हालांकि परिणाम उस बग के कारण असंगत हो सकते हैं जिसने त्रुटि उत्पन्न की थी। ध्यान दें कि कुछ त्रुटियाँ फिर भी हमेशा घातक होती हैं। गैर-घातक होने के लिए, त्रुटि को निर्धारक (deterministic) रूप से ज्ञात होना चाहिए। -> **ध्यान दें:** The Graph Network अभी तक गैर-घातक त्रुटियों non-fatal errors का समर्थन नहीं करता है, और डेवलपर्स को Studio के माध्यम से उस कार्यक्षमता का उपयोग करके सबग्राफ को नेटवर्क पर परिनियोजित (deploy) नहीं करना चाहिए। +> **नोट:**ग्राफ नेटवर्क अभी तक गैर-घातक त्रुटियों का समर्थन नहीं करता है, और डेवलपर्स को स्टूडियो के माध्यम से उस कार्यक्षमता का उपयोग करके सबग्राफ को नेटवर्क पर परिनियोजित नहीं करना चाहिए। 
-गैर-घातक त्रुटियों को सक्षम करने के लिए सबग्राफ मेनिफ़ेस्ट पर निम्न फ़ीचर फ़्लैग सेट करने की आवश्यकता होती है: +सबग्राफ मैनिफेस्ट पर निम्नलिखित फीचर फ्लैग सेट करके नॉन-फैटल एरर सक्षम किया जा सकता है ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -Queries को संभावित असंगतियों वाले डेटा को queries करने के लिए `subgraphError` आर्ग्यूमेंट के माध्यम से ऑप्ट-इन करना होगा। यह भी अनुशंसा की जाती है कि `_meta` को queries करें यह जांचने के लिए कि subgraph ने त्रुटियों को स्किप किया है या नहीं, जैसे इस उदाहरण में: +क्वेरी को `subgraphError` आर्ग्यूमेंट के माध्यम से संभावित असंगतियों वाले डेटा को क्वेरी करने के लिए भी ऑप्ट-इन करना आवश्यक है। साथ ही, यह अनुशंसित है कि `_meta` को क्वेरी किया जाए ताकि यह जांचा जा सके कि सबग्राफ ने किसी त्रुटि को छोड़ दिया है या नहीं, जैसा कि निम्न उदाहरण में दिखाया गया है: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -यदि subgraph में कोई त्रुटि आती है, तो वह queries डेटा और एक graphql त्रुटि के साथ `"indexing_error"` संदेश लौटाएगी, जैसा कि इस उदाहरण उत्तर में दिखाया गया है: +यदि सबग्राफ को कोई त्रुटि मिलती है, तो वह क्वेरी डेटा के साथ एक GraphQL त्रुटि वापस करेगी, जिसमें संदेश "indexing_error" होगा, जैसा कि इस उदाहरण प्रतिक्रिया में है: ```graphql "data": { @@ -145,11 +145,11 @@ _meta { ## IPFS/Arweave फ़ाइल डेटा स्रोत -फाइल डेटा स्रोत एक नई subgraph कार्यक्षमता है जो indexing के दौरान ऑफ-चेन डेटा तक एक मजबूत, विस्तारित तरीके से पहुँच प्रदान करती है। फाइल डेटा स्रोत IPFS और Arweave से फ़ाइलें फ़ेच करने का समर्थन करते हैं। +फाइल डेटा स्रोत एक नया सबग्राफ कार्यक्षमता है जो इंडेक्सिंग के दौरान ऑफ-चेन डेटा तक पहुँचने के लिए एक मजबूत और विस्तारित तरीका प्रदान करता है। फाइल डेटा स्रोत IPFS और Arweave से फ़ाइलें प्राप्त करने का समर्थन करता है। > यह ऑफ-चेन डेटा के नियतात्मक अनुक्रमण के साथ-साथ स्वैच्छिक HTTP-स्रोत डेटा के संभावित परिचय के लिए आधार भी देता है। -### अवलोकन +### Overview "लाइन" में हैंडलर कार्यान्वयन के दौरान फ़ाइलों को लाने के 
बजाय, यह टेम्पलेट्स को पेश करता है जिन्हें एक दिए गए फ़ाइल पहचानकर्ता के लिए नए डेटा स्रोतों के रूप में उत्पन्न किया जा सकता है। ये नए डेटा स्रोत फ़ाइलों को लाते हैं, यदि वे असफल होते हैं तो पुनः प्रयास करते हैं, और जब फ़ाइल मिलती है तो एक समर्पित हैंडलर चलाते हैं। @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -278,11 +278,11 @@ export function handleMetadata(content: Bytes): void { अब आप चेन-आधारित हैंडलर के निष्पादन के दौरान फ़ाइल डेटा स्रोत बना सकते हैं: - ऑटो-जनरेटेड `templates` से टेम्पलेट आयात करें। -- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave +- मानचित्रण के भीतर `TemplateName.create(cid: string)` को कॉल करें, जहाँ cid एक वैध कंटेंट पहचानकर्ता है IPFS या Arweave के लिए। -For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifiers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). +IPFS के लिए, ग्राफ-नोड [v0 और v1 कंटेंट आइडेंटिफायर्स](https://docs.ipfs.tech/concepts/content-addressing/) का समर्थन करता है, और डायरेक्ट्रीज़ के साथ कंटेंट आइडेंटिफायर्स (जैसे `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`)। -For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing).
+Arweave के लिए, संस्करण 0.33.0 के अनुसार, ग्राफ-नोड Arweave गेटवे से उनके [लेन-देन (transaction) ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) के आधार पर संग्रहित फ़ाइलों को प्राप्त कर सकता है ([उदाहरण फ़ाइल](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave उन लेन-देन (transaction) का समर्थन करता है जो Irys (पूर्व में Bundlr) के माध्यम से अपलोड की गई हैं, और ग्राफ-नोड [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing) के आधार पर भी फ़ाइलों को प्राप्त कर सकता है। उदाहरण: @@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString())
+यह उदाहरण पेरेंट `Token` entities और परिणामी `TokenMetadata` entities के बीच लुकअप के रूप में CID का उपयोग कर रहा है। -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> पहले, इस बिंदु पर, एक सबग्राफ डेवलपर `ipfs.cat(CID)` को कॉल करता था ताकि फ़ाइल को प्राप्त किया जा सके। बधाई हो, आप फ़ाइल डेटा स्रोतों का उपयोग कर रहे हैं! -#### अपने उप-अनुच्छेदों को तैनात करना +#### अपने सबग्राफ को परिनियोजित करना -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +आप अब अपने सबग्राफ को किसी भी ग्राफ-नोड >=v0.30.0-rc.0 पर `build` और `deploy` कर सकते हैं। #### परिसीमन -फ़ाइल डेटा स्रोत हैंडलर और संस्थाएँ अन्य सबग्राफ संस्थाओं से अलग हैं, यह सुनिश्चित करते हुए कि वे निष्पादित होने पर नियतात्मक हैं, और श्रृंखला-आधारित डेटा स्रोतों का कोई संदूषण सुनिश्चित नहीं करते हैं। विस्तार से: +फ़ाइल डेटा स्रोत handlers और entities अन्य सबग्राफ entities से अलग होते हैं, जिससे यह सुनिश्चित होता है कि वे निष्पादन के समय निर्धारक (deterministic) बने रहें और चेन-आधारित डेटा स्रोतों में कोई मिलावट न हो। विशेष रूप से: - फ़ाइल डेटा स्रोतों द्वारा बनाई गई इकाइयाँ अपरिवर्तनीय हैं, और इन्हें अद्यतन नहीं किया जा सकता है - फ़ाइल डेटा स्रोत हैंडलर अन्य फ़ाइल डेटा स्रोतों से संस्थाओं तक नहीं पहुँच सकते - फ़ाइल डेटा स्रोतों से जुड़ी संस्थाओं को चेन-आधारित हैंडलर द्वारा एक्सेस नहीं किया जा सकता है -> हालांकि यह बाधा अधिकांश उपयोग-मामलों के लिए समस्याग्रस्त नहीं होनी चाहिए, यह कुछ के लिए जटिलता का परिचय दे सकती है। यदि आपको अपने फ़ाइल-आधारित डेटा को सबग्राफ में मॉडलिंग करने में समस्या आ रही है, तो कृपया डिस्कॉर्ड के माध्यम से संपर्क करें! +> यह बाधा अधिकांश उपयोग के मामलों में समस्या उत्पन्न नहीं करेगी, लेकिन कुछ के लिए जटिलता बढ़ा सकती है। यदि आपको अपने फ़ाइल-आधारित डेटा को सबग्राफ में मॉडल करने में समस्या हो रही है, तो कृपया Discord के माध्यम से संपर्क करें!
इसके अतिरिक्त, फ़ाइल डेटा स्रोत से डेटा स्रोत बनाना संभव नहीं है, चाहे वह ऑनचेन डेटा स्रोत हो या अन्य फ़ाइल डेटा स्रोत। भविष्य में यह प्रतिबंध हटाया जा सकता है। @@ -341,41 +341,41 @@ You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. यदि आप NFT मेटाडेटा को संबंधित टोकन से लिंक कर रहे हैं, तो टोकन इकाई से मेटाडेटा इकाई को संदर्भित करने के लिए मेटाडेटा के IPFS हैश का उपयोग करें। एक आईडी के रूप में IPFS हैश का उपयोग करके मेटाडेटा इकाई को सहेजें। -You can use [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. +आप [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) का उपयोग कर सकते हैं जब आप File Data साधन बना रहे हों ताकि अतिरिक्त जानकारी पास की जा सके जो File Data साधन handler में उपलब्ध होगी। -If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity. +यदि आपके पास ऐसी entities हैं जो कई बार रिफ्रेश होती हैं, तो IPFS हैश और entity ID का उपयोग करके unique file-based entities बनाएं, और उन्हें chain-based entity में एक derived field का उपयोग करके संदर्भित करें। > हम ऊपर दिए गए सुझाव को बेहतर बनाने के लिए काम कर रहे हैं, इसलिए क्वेरी केवल "नवीनतम" संस्करण लौटाती हैं #### ज्ञात समस्याएँ -File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI.
+फ़ाइल डेटा साधन को वर्तमान में ABIs की आवश्यकता होती है, हालांकि ABIs का उपयोग नहीं किया जाता है ([issue](https://github.com/graphprotocol/graph-cli/issues/961))। इसका समाधान यह है कि कोई भी ABI जोड़ें। -Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file. +फ़ाइल डेटा साधन के लिए handler उन फ़ाइलों में नहीं हो सकते जो `eth_call` contract बाइंडिंग्स को आयात करती हैं, जिससे "unknown import: `ethereum::ethereum.call` has not been defined" त्रुटि होती है ([issue](https://github.com/graphprotocol/graph-node/issues/4309))। वर्कअराउंड के रूप में फ़ाइल डेटा साधन handler को एक समर्पित फ़ाइल में बनाना चाहिए। #### उदाहरण -[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) +[Crypto Coven सबग्राफ migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) #### संदर्भ -[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) +[GIP File Data साधन](https://forum.thegraph.com/t/gip-file-data-sources/2721) ## सूचीकृत तर्क फ़िल्टर / विषय फ़िल्टर -> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` +> **आवश्यकता**: [SpecVersion](#specversion-releases) >= `1.2.0` -विषय फ़िल्टर, जिन्हें इंडेक्स किए गए तर्क फ़िल्टर भी कहा जाता है, एक शक्तिशाली विशेषता है जो उपयोगकर्ताओं को उनके इंडेक्स किए गए तर्कों के मानों के आधार पर ब्लॉकचेन घटनाओं को सटीक रूप से फ़िल्टर करने की अनुमति देती है। +Topic filters, जिन्हें indexed argument filters के रूप में भी जाना जाता है, सबग्राफ में एक शक्तिशाली विशेषता हैं जो उपयोगकर्ताओं को उनके indexed arguments के मूल्यों के आधार पर ब्लॉकचेन घटनाओं को सटीक रूप से फ़िल्टर करने की अनुमति देती हैं। -- ये फ़िल्टर ब्लॉकचेन पर घटनाओं की विशाल धारा से रुचि की विशिष्ट घटनाओं को अलग करने में मदद करते हैं,
जिससे सबग्राफ़ केवल प्रासंगिक डेटा पर ध्यान केंद्रित करके अधिक कुशलता से कार्य कर सके। +- ये फ़िल्टर ब्लॉकचेन पर बड़ी संख्या में घटनाओं की धाराओं से विशिष्ट घटनाओं को अलग करने में मदद करते हैं, जिससे सबग्राफ अधिक कुशलता से काम कर सकते हैं और केवल प्रासंगिक डेटा पर ध्यान केंद्रित कर सकते हैं। -- यह व्यक्तिगत subgraphs बनाने के लिए उपयोगी है जो विशेष पते और विभिन्न स्मार्ट कॉन्ट्रैक्ट्स के साथ उनके इंटरैक्शन को ट्रैक करते हैं ब्लॉकचेन पर। +- यह विशिष्ट पतों और उनके विभिन्न स्मार्ट contract के साथ इंटरैक्शन को ट्रैक करने वाले व्यक्तिगत सबग्राफ बनाने के लिए उपयोगी है। ### शीर्षक फ़िल्टर कैसे काम करते हैं -जब एक स्मार्ट कॉन्ट्रैक्ट एक इवेंट को उत्पन्न करता है, तो कोई भी तर्क जो 'indexed' के रूप में चिह्नित किया गया है, एक 'subgraph' की मैनिफेस्ट में फ़िल्टर के रूप में उपयोग किया जा सकता है। यह 'subgraph' को इन 'indexed' तर्कों से मेल खाने वाले इवेंट्स के लिए चयनात्मक रूप से सुनने की अनुमति देता है। +जब कोई स्मार्ट contract कोई इवेंट एमिट करता है, तो कोई भी आर्ग्यूमेंट जो indexed के रूप में चिह्नित होता है, उसे एक सबग्राफ के मैनिफेस्ट में फ़िल्टर के रूप में उपयोग किया जा सकता है। यह सबग्राफ को उन इवेंट्स को चयनित रूप से सुनने की अनुमति देता है जो इन indexed आर्ग्यूमेंट्स से मेल खाते हैं। -- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. +- इस आयोजन का पहला इंडेक्स किया गया तर्क `topic1`, से संबंधित है, दूसरा `topic2`, से और इसी तरह, `topic3`, तक, क्योंकि Ethereum Virtual Machine (EVM) प्रत्येक आयोजन में तीन तक इंडेक्स किए गए तर्कों की अनुमति देता है ```solidity // SPDX-License-Identifier: MIT @@ -395,13 +395,13 @@ contract Token { इस उदाहरण में: -- The `Transfer` event is used to log transactions of tokens between addresses. -- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. 
-- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. +- `Transfer` आयोजन का उपयोग पतों के बीच टोकन लेन-देन को लॉग करने के लिए किया जाता है। +- `from` और `to` पैरामीटर सूचकांकित होते हैं, जिससे आयोजन लिस्नर्स को विशिष्ट पतों से जुड़ी ट्रांसफर को फ़िल्टर और मॉनिटर करने की अनुमति मिलती है। +- `transfer` फ़ंक्शन एक टोकन ट्रांसफर क्रिया का साधारण प्रतिनिधित्व है, जो हर बार कॉल किए जाने पर `Transfer` आयोजन को एमिट करता है। #### सबग्राफ में कॉन्फ़िगरेशन -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +टॉपिक फ़िल्टर्स को सीधे इवेंट हैंडलर कॉन्फ़िगरेशन के भीतर सबग्राफ मैनिफेस्ट में परिभाषित किया जाता है। इन्हें इस प्रकार कॉन्फ़िगर किया जाता है: ```yaml eventHandlers: @@ -414,7 +414,7 @@ eventHandlers: इस सेटअप में: -- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. +- `topic1` इवेंट के पहले इंडेक्स किए गए तर्क के अनुरूप है, `topic2` दूसरे के अनुरूप है, और `topic3` तीसरे के अनुरूप है। - प्रत्येक विषय में एक या अधिक मान हो सकते हैं, और एक घटना केवल तभी प्रोसेस की जाती है जब वह प्रत्येक निर्दिष्ट विषय में से किसी एक मान से मेल खाती है। #### फ़िल्टर लॉजिक @@ -434,9 +434,9 @@ eventHandlers: इस कॉन्फ़िगरेशन में: -- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. -- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+- `topic1` को `Transfer` आयोजन को फ़िल्टर करने के लिए कॉन्फ़िगर किया गया है जहाँ `0xAddressA` भेजने वाला है। +- `topic2` को इस प्रकार से कॉन्फ़िगर किया गया है कि यह `Transfer` आयोजन को फ़िल्टर करता है जहाँ `0xAddressB` रिसीवर है। +- सबग्राफ केवल उन्हीं लेन-देन को इंडेक्स करेगा जो सीधे `0xAddressA` से `0xAddressB` तक होते हैं। #### उदाहरण 2: दो या अधिक 'पते' के बीच किसी भी दिशा में लेन-देन को ट्रैक करना @@ -450,31 +450,31 @@ eventHandlers: इस कॉन्फ़िगरेशन में: -- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. -- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- Subgraph उन कई पतों के बीच होने वाले लेनदेन को दोनों दिशाओं में सूचीबद्ध करेगा, जिससे सभी पतों के बीच इंटरैक्शन की व्यापक निगरानी संभव हो सकेगी। +- `topic1` को `Transfer` आयोजन को फ़िल्टर करने के लिए कॉन्फ़िगर किया गया है जहाँ `0xAddressA`, `0xAddressB`, `0xAddressC` प्रेषक हैं। +- `topic2` को `Transfer` आयोजन को फ़िल्टर करने के लिए कॉन्फ़िगर किया गया है, जहाँ `0xAddressB` और `0xAddressC` रिसीवर हैं। +- सबग्राफ उन सभी पतों के बीच दोनों दिशाओं में होने वाले लेन-देन को अनुक्रमित करेगा, जिससे सभी पतों के बीच होने वाली अंतःक्रियाओं की व्यापक निगरानी संभव हो सकेगी। ## घोषित eth_call > नोट: यह एक प्रयोगात्मक फीचर है जो अभी तक स्थिर Graph Node रिलीज़ में उपलब्ध नहीं है। आप इसे केवल Subgraph Studio या अपने स्वयं-होस्टेड नोड में ही उपयोग कर सकते हैं। -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` एक मूल्यवान सबग्राफ विशेषता है जो `eth_calls` को पहले से निष्पादित करने की अनुमति देती है, जिससे `graph-node` उन्हें समानांतर रूप से निष्पादित कर सकता है। यह फ़ीचर निम्नलिखित कार्य करता है: -- इथेरियम ब्लॉकचेन से डेटा प्राप्त करने के प्रदर्शन में महत्वपूर्ण सुधार करता है, जिससे कई कॉल के लिए कुल समय कम होता है और सबग्राफ की समग्र दक्षता का अनुकूलन होता है। +- यह Ethereum ब्लॉकचेन से डेटा प्राप्त करने के प्रदर्शन में महत्वपूर्ण सुधार करता है, जिससे कई कॉल के लिए कुल समय कम हो जाता है और सबग्राफ की समग्र दक्षता में वृद्धि होती है। - यह तेजी से डेटा फ़ेचिंग की अनुमति देता है, जिससे तेजी से क्वेरी प्रतिक्रियाएँ और बेहतर उपयोगकर्ता अनुभव मिलता है। - कई Ethereum कॉल्स से डेटा को एकत्रित करने की आवश्यकता वाली अनुप्रयोगों के लिए प्रतीक्षा समय को कम करता है, जिससे डेटा पुनर्प्राप्ति प्रक्रिया अधिक प्रभावी हो जाती है। ### मुख्य अवधारणाएँ -- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. +- घोषणात्मक `eth_calls`: एथेरियम कॉल्स जिन्हें अनुक्रमिक रूप से निष्पादित होने के बजाय समानांतर में निष्पादित किया जाना परिभाषित किया गया है। - समानांतर निष्पादन: एक कॉल समाप्त होने की प्रतीक्षा करने के बजाय, कई कॉल एक साथ आरंभ किए जा सकते हैं। - समय दक्षता: सभी कॉल के लिए कुल समय व्यक्तिगत कॉल के समय के योग (अनुक्रमिक) से बदलकर सबसे लंबे कॉल के द्वारा लिए गए समय (समानांतर) में बदल जाता है। -#### Scenario without Declarative `eth_calls` +#### डिक्लेरेटिव `eth_calls` के बिना परिदृश्य -आपके पास एक subgraph है जिसे एक उपयोगकर्ता के लेनदेन, बैलेंस और टोकन होल्डिंग्स के बारे में डेटा प्राप्त करने के लिए तीन Ethereum कॉल करने की आवश्यकता है। +मान लीजिए कि आपके पास एक सबग्राफ है जिसे किसी उपयोगकर्ता के लेन-देन, बैलेंस और टोकन होल्डिंग्स के बारे में डेटा लाने के लिए तीन Ethereum कॉल करने की आवश्यकता है। परंपरागत रूप से, ये कॉल क्रमिक रूप से की जा सकती हैं: @@ -484,7 +484,7 @@ Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` कुल समय लिया गया = 3 + 2 + 4 = 9 सेकंड -#### Scenario with
Declarative `eth_calls` +#### डिक्लेरेटिव `eth_calls` के साथ परिदृश्य इस फीचर के साथ, आप इन कॉल्स को समानांतर में निष्पादित करने के लिए घोषित कर सकते हैं: @@ -498,15 +498,15 @@ Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` #### कैसे कार्य करता है -1. In the subgraph manifest, आप Ethereum कॉल्स को इस तरह घोषित करते हैं कि ये समानांतर में निष्पादित किए जा सकें। +1. सबग्राफ manifest में, आप Ethereum कॉल्स को इस तरह घोषित करते हैं जिससे संकेत मिलता है कि वे समानांतर रूप से निष्पादित किए जा सकते हैं। 2. पैरलेल निष्पादन इंजन: The Graph Node का निष्पादन इंजन इन घोषणाओं को पहचानता है और कॉल को समानांतर में चलाता है। -3. परिणाम संग्रहण: जब सभी कॉल समाप्त हो जाते हैं, तो परिणामों को एकत्रित किया जाता है और आगे की प्रक्रिया के लिए उपयोग किया जाता है। +3. परिणाम एकत्रीकरण: सभी कॉल पूरे होने के बाद, परिणाम एकत्र किए जाते हैं और आगे की प्रोसेसिंग के लिए सबग्राफ द्वारा उपयोग किए जाते हैं। #### उदाहरण कॉन्फ़िगरेशन Subgraph मैनिफेस्ट में -Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. +घोषित `eth_calls` underlying आयोजन के `event.address` के साथ-साथ सभी `event.params` तक पहुँच सकते हैं। -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` का उपयोग करते हुए `event.address`:
+- `global0X128` घोषित `eth_call` है। +- यह टेक्स्ट (`global0X128`) उस `eth_call` के लिए लेबल है जिसे त्रुटियों को लॉग करते समय उपयोग किया जाता है। +- यह पाठ (`Pool[event.address].feeGrowthGlobal0X128()`) वह वास्तविक `eth_call` है जो निष्पादित किया जाएगा, जो `Contract[address].function(arguments)` के रूप में है। +- `address` और `arguments` को उन वेरिएबल्स से बदला जा सकता है जो handler के निष्पादन के समय उपलब्ध होंगे। -`Subgraph.yaml` using `event.params` +`subgraph.yaml` का उपयोग करते हुए `event.params` ```yaml calls: @@ -533,24 +533,24 @@ calls: ### मौजूदा सबग्राफ पर ग्राफ्टिंग -> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). +> **नोट**: प्रारंभिक रूप से The Graph Network में अपग्रेड करते समय graft का उपयोग करने की अनुशंसा नहीं की जाती है। अधिक जानें [यहाँ](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network)। -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+जब कोई सबग्राफ पहली बार डिप्लॉय किया जाता है, तो यह संबंधित चेन के जेनेसिस ब्लॉक (या प्रत्येक डेटा स्रोत के साथ परिभाषित `startBlock`) से इवेंट्स को indexing करना शुरू करता है। कुछ परिस्थितियों में, मौजूदा सबग्राफ से डेटा को पुन: उपयोग करना और किसी बाद के ब्लॉक से इंडेक्सिंग शुरू करना फायदेमंद होता है। इस indexing मोड को _Grafting_ कहा जाता है। उदाहरण के लिए, विकास के दौरान, यह मैपिंग में छोटे एरर्स को जल्दी से पार करने या किसी मौजूदा सबग्राफ को फिर से चालू करने के लिए उपयोगी होता है, यदि वह फेल हो गया हो। -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +एक सबग्राफ को एक बेस सबग्राफ पर graft किया जाता है जब `subgraph.yaml` में सबग्राफ manifest के शीर्ष स्तर पर एक `graft` ब्लॉक होता है: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+जब कोई सबग्राफ, जिसकी मैनिफेस्ट में `graft` ब्लॉक शामिल होता है, डिप्लॉय किया जाता है, तो ग्राफ-नोड दिए गए `block` तक base सबग्राफ के डेटा को कॉपी करेगा और फिर उस ब्लॉक से नए सबग्राफ को इंडेक्स करना जारी रखेगा। base सबग्राफ को लक्षित ग्राफ-नोड इंस्टेंस पर मौजूद होना चाहिए और कम से कम दिए गए ब्लॉक तक इंडेक्स किया जाना चाहिए। इस प्रतिबंध के कारण, ग्राफ्टिंग का उपयोग केवल डेवलपमेंट के दौरान या किसी आपात स्थिति में एक समान गैर-ग्राफ्टेड सबग्राफ को जल्दी से तैयार करने के लिए किया जाना चाहिए। -क्योंकि आधार डेटा को अनुक्रमित करने के बजाय प्रतियों को ग्राफ्ट करना, स्क्रैच से अनुक्रमणित करने की तुलना में सबग्राफ को वांछित ब्लॉक में प्राप्त करना बहुत तेज है, हालांकि बहुत बड़े सबग्राफ के लिए प्रारंभिक डेटा कॉपी में अभी भी कई घंटे लग सकते हैं। जबकि ग्राफ्टेड सबग्राफ को इनिशियलाइज़ किया जा रहा है, ग्राफ़ नोड उन एंटिटी प्रकारों के बारे में जानकारी लॉग करेगा जो पहले ही कॉपी किए जा चुके हैं। +ग्राफ्टिंग base डेटा को इंडेक्स करने के बजाय उसे कॉपी करता है, इसलिए यह शुरू से इंडेक्सिंग करने की तुलना में सबग्राफ को वांछित ब्लॉक तक पहुँचाने में कहीं अधिक तेज़ होता है, हालाँकि बहुत बड़े सबग्राफ के लिए प्रारंभिक डेटा कॉपी करने में अभी भी कई घंटे लग सकते हैं। जब तक ग्राफ्ट किया गया सबग्राफ प्रारंभिक रूप से स्थापित हो रहा होता है, तब तक ग्राफ नोड उन entity प्रकारों के बारे में जानकारी लॉग करेगा जिन्हें पहले ही कॉपी किया जा चुका है। -ग्राफ्टेड सबग्राफ एक ग्राफक्यूएल स्कीमा का उपयोग कर सकता है जो बेस सबग्राफ के समान नहीं है, लेकिन इसके अनुकूल हो। यह अपने आप में एक मान्य सबग्राफ स्कीमा होना चाहिए, लेकिन निम्नलिखित तरीकों से बेस सबग्राफ के स्कीमा से विचलित हो सकता है: +ग्राफ्टेड Subgraph एक ग्राफक्यूएल स्कीमा का उपयोग कर सकता है जो बेस Subgraph के समान नहीं है, लेकिन इसके अनुकूल हो। यह अपने आप में एक मान्य Subgraph स्कीमा होना चाहिए, लेकिन निम्नलिखित तरीकों से बेस Subgraph के स्कीमा से विचलित हो सकता है: - यह इकाई के प्रकारों को जोड़ या हटा सकता है| - यह इकाई प्रकारों में से गुणों को हटाता है| @@ -560,4 +560,4 @@ When a subgraph whose manifest contains a `graft` block is deployed, Graph Node - यह
इंटरफेस जोड़ता या हटाता है| - यह कि, किन इकाई प्रकारों के लिए इंटरफ़ेस लागू होगा, इसे बदल देता है| -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` को सबग्राफ मैनिफेस्ट में `features` के अंतर्गत घोषित किया जाना आवश्यक है। diff --git a/website/src/pages/hi/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/hi/subgraphs/developing/creating/assemblyscript-mappings.mdx index 38441c623127..beb33c359091 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -2,7 +2,7 @@ title: Writing AssemblyScript Mappings --- -## अवलोकन +## Overview The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too ## कोड जनरेशन -स्मार्ट कॉन्ट्रैक्ट्स, इवेंट्स और एंटिटीज के साथ काम करना आसान और टाइप-सेफ बनाने के लिए, ग्राफ सीएलआई सबग्राफ के ग्राफक्यूएल स्कीमा और डेटा स्रोतों में शामिल कॉन्ट्रैक्ट एबीआई से असेंबलीस्क्रिप्ट प्रकार उत्पन्न कर सकता है। +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. इसके साथ किया जाता है @@ -80,7 +80,7 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..9f47691b06a1 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/README.md @@ -6,7 +6,7 @@ TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to [The Graph](https://github.com/graphprotocol/graph-node). -## Usage +## उपयोग For a detailed guide on how to create a subgraph, please see the [Graph CLI docs](https://github.com/graphprotocol/graph-cli). 
diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/_meta-titles.json index 7580246e94fd..efb08ac104b3 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/_meta-titles.json +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -1,5 +1,5 @@ { "README": "Introduction", - "api": "API Reference", + "api": "एपीआई संदर्भ", "common-issues": "Common Issues" } diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx index e967ffa1b80b..1bed291fc89f 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -यह पृष्ठ दस्तावेज करता है कि Subgraph मैपिंग लिखते समय किन अंतर्निहित एपीआई का उपयोग किया जा सकता है। बॉक्स से बाहर दो प्रकार के एपीआई उपलब्ध हैं: +Learn what built-in APIs can be used when writing Subgraph mappings. 
There are two kinds of APIs available out of the box: - The Graph TypeScript लाइब्रेरी (https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (graph-ts) -- `graph codegen` द्वारा subgraph files से उत्पन्न code +- Code generated from Subgraph files by `graph codegen` आप अन्य पुस्तकालयों को भी निर्भरताओं के रूप में जोड़ सकते हैं, बशर्ते कि वे [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) के साथ संगत हों। @@ -15,19 +15,19 @@ title: AssemblyScript API ## API Reference -The `@graphprotocol/graph-ts` library provides the following APIs: +`@graphprotocol/graph-ts` library निम्नलिखित API प्रदान करती है: - An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. - A `store` API to load and save entities from and to the Graph Node store. - A `log` API to log messages to the Graph Node output and Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. +- IPFS से files load करने के लिए एक `ipfs` API। +- JSON data को parse करने के लिए एक `json` API। +- Cryptographic functions का उपयोग करने के लिए एक `crypto` API। - एथेरियम, JSON, ग्राफक्यूएल और असेंबलीस्क्रिप्ट जैसे विभिन्न प्रकार की प्रणालियों के बीच अनुवाद करने के लिए निम्न-स्तरीय आदिम। ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Version | Release notes | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### संस्थाओं का निर्माण @@ -280,10 +280,10 @@ if (transfer == null) { As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. 
+Store API उन entities को पुनः प्राप्त करने की सुविधा प्रदान करता है जो वर्तमान ब्लॉक में बनाई गई थीं या अपडेट की गई थीं। इसका एक सामान्य परिदृश्य यह है कि एक हैंडलर किसी ऑनचेन इवेंट से एक ट्रांज़ेक्शन बनाता है, और बाद में कोई अन्य हैंडलर इस ट्रांज़ेक्शन तक पहुंचना चाहता है, यदि यह मौजूद है। -- यदि लेन-देन मौजूद नहीं है, तो subgraph को केवल यह पता लगाने के लिए डेटाबेस में जाना होगा कि Entity मौजूद नहीं है। यदि subgraph लेखक पहले से जानता है कि Entity उसी ब्लॉक में बनाई जानी चाहिए थी, तो `loadInBlock` का उपयोग इस डेटाबेस राउंडट्रिप से बचाता है। -- कुछ subgraphs के लिए, ये छूटे हुए लुकअप्स indexing समय में महत्वपूर्ण योगदान दे सकते हैं। +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -329,7 +329,7 @@ let tokens = holder.tokens.load() किसी मौजूदा निकाय को अद्यतन करने के दो तरीके हैं: 1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. -2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. +2. बस entity बनाएं, उदाहरण के लिए `new Transfer(id)` के साथ; entity पर properties set करें, फिर इसे store पर `.save()` करें। यदि entity पहले से मौजूद है, तो परिवर्तन उसमें merge कर दिए जाते हैं। ज्यादातर मामलों में गुण बदलना सीधे आगे है, उत्पन्न संपत्ति सेटर्स के लिए धन्यवाद: @@ -380,11 +380,11 @@ store.remove('Transfer', id) #### एथेरियम प्रकार के लिए समर्थन -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph.
For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. एक सामान्य पैटर्न उस अनुबंध का उपयोग करना है जिससे कोई घटना उत्पन्न होती है। यह निम्नलिखित कोड के साथ हासिल किया गया है: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. 
-कोई अन्य अनुबंध जो सबग्राफ का हिस्सा है, उत्पन्न कोड से आयात किया जा सकता है और एक वैध पते के लिए बाध्य किया जा सकता है। +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### रिवर्टेड कॉल्स को हैंडल करना @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -725,7 +725,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. 
The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### क्रिप्टो एपीआई @@ -840,7 +840,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -891,4 +891,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/common-issues.mdx index 155469a5960b..fb8daba8b9a6 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: आम AssemblyScript मुद्दे --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: -- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. -- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
+- `Private` क्लास वेरिएबल्स [AssemblyScript](https://www.assemblyscript.org/status.html#language-features) में अनिवार्य नहीं होते हैं। क्लास ऑब्जेक्ट से सीधे क्लास वेरिएबल्स को बदले जाने से बचाने का कोई तरीका नहीं है। +- Scope को [closure functions](https://www.assemblyscript.org/status.html#on-closures) में inherit नहीं किया गया है, यानी closure functions के बाहर declared variables का उपयोग नहीं किया जा सकता है। [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s) में स्पष्टीकरण। diff --git a/website/src/pages/hi/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/hi/subgraphs/developing/creating/install-the-cli.mdx index 84d3b139b130..031c70bf3507 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: . ग्राफ़ सीएलआई इनस्टॉल करें --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). -## अवलोकन +## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph.
It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## शुरू करना @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## एक सबग्राफ बनाएं ### एक मौजूदा कॉन्ट्रैक्ट से -यह कमांड एक subgraph बनाता है जो एक मौजूदा कॉन्ट्रैक्ट के सभी इवेंट्स को इंडेक्स करता है: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - यदि कोई वैकल्पिक तर्क गायब है, तो यह आपको एक इंटरैक्टिव फॉर्म के माध्यम से मार्गदर्शन करता है। -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. 
+- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. ### एक उदाहरण सबग्राफ से -निम्नलिखित कमांड एक उदाहरण subgraph से एक नया प्रोजेक्ट प्रारंभ करता है: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is एबीआई फाइल(फाइलों) को आपके अनुबंध(ओं) से मेल खाना चाहिए। ABI फ़ाइलें प्राप्त करने के कुछ तरीके हैं: - यदि आप अपना खुद का प्रोजेक्ट बना रहे हैं, तो आपके पास अपने सबसे मौजूदा एबीआई तक पहुंच होने की संभावना है। -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## स्पेकवर्जन रिलीज़ - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. 
| -| 0.0.5 | घटना हैंडलरों को लेनदेन रसीदों तक पहुंच प्रदान करने के लिए समर्थन जोड़ा गया है। | -| 0.0.4 | घटना हैंडलरों को लेनदेन रसीदों तक पहुंच प्रदान करने के लिए समर्थन जोड़ा गया है। | +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/hi/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/hi/subgraphs/developing/creating/ql-schema.mdx index 5c2b1f2037bc..47c7f97d0406 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/ql-schema.mdx @@ -2,9 +2,9 @@ title: The Graph QL Schema --- -## अवलोकन +## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar इससे पहले कि आप एन्टिटीज को परिभाषित करें, यह महत्वपूर्ण है कि आप एक कदम पीछे हटें और सोचें कि आपका डेटा कैसे संरचित और लिंक किया गया है। -- All queries will be made against the data model defined in the subgraph schema. 
As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - यह उपयोगी हो सकता है कि संस्थाओं की कल्पना 'डेटा' समाहित करने वाले वस्तुओं के रूप में की जाए, न कि घटनाओं या कार्यों के रूप में। - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -एक-से-अनेक संबंधों के लिए, संबंध को हमेशा 'एक' पक्ष में संग्रहीत किया जाना चाहिए, और 'अनेक' पक्ष हमेशा निकाला जाना चाहिए। संबंधों को इस तरह से संग्रहीत करने के बजाय, 'अनेक' पक्ष पर संस्थाओं की एक सरणी संग्रहीत करने के परिणामस्वरूप, सबग्राफ को अनुक्रमित करने और क्वेरी करने दोनों के लिए नाटकीय रूप से बेहतर प्रदर्शन होगा। सामान्य तौर पर, संस्थाओं की सरणियों को संग्रहीत करने से जितना संभव हो उतना बचा जाना चाहिए। +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. 
In general, storing arrays of entities should be avoided as much as is practical. #### उदाहरण @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -मैनी-टू-मैनी संबंधों को संग्रहीत करने के इस अधिक विस्तृत तरीके के परिणामस्वरूप सबग्राफ के लिए कम डेटा संग्रहीत होगा, और इसलिए एक सबग्राफ के लिए जो अक्सर इंडेक्स और क्वेरी के लिए नाटकीय रूप से तेज़ होता है। +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### स्कीमा में टिप्पणियां जोड़ना @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. 
## भाषाओं का समर्थन किया diff --git a/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx index a162f802cf9c..180a343470b1 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -2,22 +2,34 @@ title: Starting Your Subgraph --- -## अवलोकन +## Overview -ग्राफ़ में पहले से ही हजारों सबग्राफ उपलब्ध हैं, जिन्हें क्वेरी के लिए उपयोग किया जा सकता है, तो The Graph Explorer(https://thegraph.com/explorer) को चेक करें और ऐसा कोई Subgraph ढूंढें जो पहले से आपकी ज़रूरतों से मेल खाता हो। +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -जब आप एक [सबग्राफ](/subgraphs/developing/subgraphs/)बनाते हैं, तो आप एक कस्टम ओपन API बनाते हैं जो ब्लॉकचेन से डेटा निकालता है, उसे प्रोसेस करता है, स्टोर करता है और इसे GraphQL के माध्यम से क्वेरी करना आसान बनाता है। +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. 
[The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Release notes | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features.
| diff --git a/website/src/pages/hi/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/hi/subgraphs/developing/creating/subgraph-manifest.mdx index 31dbc7079552..f3900aeaa283 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/subgraph-manifest.mdx @@ -2,21 +2,21 @@ title: Subgraph Manifest --- -## अवलोकन +## Overview -subgraph मैनिफेस्ट, subgraph.yaml, उन स्मार्ट कॉन्ट्रैक्ट्स और नेटवर्क को परिभाषित करता है जिन्हें आपका subgraph इंडेक्स करेगा, इन कॉन्ट्रैक्ट्स से ध्यान देने योग्य इवेंट्स, और इवेंट डेटा को उन संस्थाओं के साथ मैप करने का तरीका जिन्हें Graph Node स्टोर करता है और जिन्हें क्वेरी करने की अनुमति देता है। +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query. -**subgraph definition** में निम्नलिखित फ़ाइलें शामिल हैं: +The **Subgraph definition** consists of the following files: -- subgraph.yaml: में subgraph मैनिफेस्ट शामिल है +- `subgraph.yaml`: Contains the Subgraph manifest -- schema.graphql: एक GraphQL स्कीमा जो आपके लिए डेटा को परिभाषित करता है और इसे GraphQL के माध्यम से क्वेरी करने का तरीका बताता है. +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph क्षमताएँ -एक सिंगल subgraph कर सकता है: +A single Subgraph can: - कई स्मार्ट कॉन्ट्रैक्ट्स से डेटा को इंडेक्स करें (लेकिन कई नेटवर्क नहीं)। @@ -24,12 +24,12 @@ subgraph मैनिफेस्ट, subgraph.yaml, उन स्मार् - Add an entry for each contract that requires indexing to the `dataSources` array.
-The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). मेनिफेस्ट के लिए अद्यतन करने के लिए महत्वपूर्ण प्रविष्टियां हैं: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. 
-- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. 
An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## आयोजन Handlers -Event handlers एक subgraph में स्मार्ट कॉन्ट्रैक्ट्स द्वारा ब्लॉकचेन पर उत्पन्न होने वाले विशिष्ट घटनाओं पर प्रतिक्रिया करते हैं और subgraph के मैनिफेस्ट में परिभाषित हैंडलर्स को ट्रिगर करते हैं। इससे subgraphs को परिभाषित लॉजिक के अनुसार घटना डेटा को प्रोसेस और स्टोर करने की अनुमति मिलती है। +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### इवेंट हैंडलर को परिभाषित करना -एक event handler को डेटा स्रोत के भीतर subgraph के YAML configuration में घोषित किया जाता है। यह निर्दिष्ट करता है कि कौन से events पर ध्यान देना है और उन events का पता चलने पर कार्यान्वित करने के लिए संबंधित function क्या है। +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## कॉल हैंडलर्स -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. 
In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. कॉल हैंडलर केवल दो मामलों में से एक में ट्रिगर होंगे: जब निर्दिष्ट फ़ंक्शन को अनुबंध के अलावा किसी अन्य खाते द्वारा कॉल किया जाता है या जब इसे सॉलिडिटी में बाहरी के रूप में चिह्नित किया जाता है और उसी अनुबंध में किसी अन्य फ़ंक्शन के भाग के रूप में कॉल किया जाता है। -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. 
Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### कॉल हैंडलर को परिभाषित करना @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### मानचित्रण समारोह -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## ब्लॉक हैंडलर -Contract events या function calls की सदस्यता लेने के अलावा, एक subgraph अपने data को update करना चाह सकता है क्योंकि chain में नए blocks जोड़े जाते हैं। इसे प्राप्त करने के लिए एक subgraph every block के बाद या pre-defined filter से match होन वाले block के बाद एक function चला सकता है। +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.
### समर्थित फ़िल्टर @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. ब्लॉक हैंडलर के लिए फ़िल्टर की अनुपस्थिति सुनिश्चित करेगी कि हैंडलर को प्रत्येक ब्लॉक कहा जाता है। डेटा स्रोत में प्रत्येक फ़िल्टर प्रकार के लिए केवल एक ब्लॉक हैंडलर हो सकता है। @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once फ़िल्टर @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -'once' फ़िल्टर के साथ परिभाषित हैंडलर केवल एक बार सभी अन्य हैंडलर्स चलने से पहले कॉल किया जाएगा। यह कॉन्फ़िगरेशन 'subgraph' को प्रारंभिक हैंडलर के रूप में उपयोग करने की अनुमति देता है, जिससे 'indexing' के शुरू होने पर विशिष्ट कार्य किए जा सकते हैं। +The defined handler with the once filter will be called only once before all other handlers run.
This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### मानचित्रण समारोह -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer संकेत -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> इस संदर्भ में "history" का अर्थ उन आंकड़ों को संग्रहीत करने से है जो 'mutable' संस्थाओं की पुरानी स्थितियों को दर्शाते हैं। +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
दिए गए ब्लॉक के रूप में इतिहास की आवश्यकता है: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- उस ब्लॉक पर 'subgraph' को वापस लाना +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block यदि ब्लॉक के रूप में ऐतिहासिक डेटा को प्रून किया गया है, तो उपरोक्त क्षमताएँ उपलब्ध नहीं होंगी। > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: विशिष्ट मात्रा में ऐतिहासिक डेटा बनाए रखने के लिए: @@ -532,3 +532,18 @@ For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/# indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features.
| diff --git a/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx index 89a802802610..64ec49930c33 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx @@ -4,12 +4,12 @@ title: |- कला --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - यह Rust में लिखा गया है और उच्च प्रदर्शन के लिए अनुकूलित है। -- यह आपको डेवलपर विशेषता तक पहुंच प्रदान करता है, जिसमें contract कॉल्स को मॉक करने, स्टोर स्टेट के बारे में एसेर्शन करने, सबग्राफ विफलताओं की निगरानी करने, टेस्ट परफॉर्मेंस जांचने और बहुत कुछ करने की क्षमता शामिल है। +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## शुरू करना @@ -89,7 +89,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
+To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### सीएलआई विकल्प @@ -115,7 +115,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -147,17 +147,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### डेमो सबग्राफ +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### वीडियो शिक्षण -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -664,7 +664,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im ये रहा - हमने अपना पहला परीक्षण बना लिया है! 
👏 -अब हमारे परीक्षण चलाने के लिए आपको बस अपने सबग्राफ रूट फ़ोल्डर में निम्नलिखित को चलाने की आवश्यकता है: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -758,7 +758,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -767,7 +767,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1174,7 +1174,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1291,11 +1291,11 @@ test('file/ipfs dataSource creation example', () => { ## टेस्ट कवरेज -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. -### Prerequisites +### आवश्यक शर्तें To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: @@ -1313,7 +1313,7 @@ In order for that function to be visible (for it to be included in the `wat` fil export { handleNewGravatar } ``` -### Usage +### उपयोग एक बार यह सब सेट हो जाने के बाद, परीक्षण कवरेज टूल चलाने के लिए, बस चलाएँ: @@ -1397,7 +1397,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## प्रतिक्रिया diff --git a/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx index 3e03014aba51..d10ef9160dc6 100644 --- a/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: मल्टीपल नेटवर्क्स पर एक Subgraph डिप्लॉय करना +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 
If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## सबग्राफ को कई नेटवर्क पर तैनात करना +## Deploying the Subgraph to multiple networks -कुछ मामलों में, आप एक ही सबग्राफ को इसके सभी कोड को डुप्लिकेट किए बिना कई नेटवर्क पर तैनात करना चाहेंगे। इसके साथ आने वाली मुख्य चुनौती यह है कि इन नेटवर्कों पर अनुबंध के पते अलग-अलग हैं। +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### graph-cli का उपयोग करते हुए @@ -21,7 +22,7 @@ This page explains how to deploy a subgraph to multiple networks. To deploy a su ``` -आप --network विकल्प का उपयोग करके एक नेटवर्क कॉन्फ़िगरेशन को एक json मानक फ़ाइल (डिफ़ॉल्ट रूप से networks.json) से निर्दिष्ट कर सकते हैं ताकि विकास के दौरान आसानी से अपने subgraph को अपडेट किया जा सके +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > ध्यान दें: init कमांड अब दी गई जानकारी के आधार पर एक networks.json को स्वचालित रूप से उत्पन्न करेगा। इसके बाद आप मौजूदा नेटवर्क को अपडेट कर सकेंगे या अतिरिक्त नेटवर्क जोड़ सकेंगे। @@ -55,7 +56,7 @@ This page explains how to deploy a subgraph to multiple networks. 
To deploy a su > ध्यान दें: आपको किसी भी 'templates' (यदि आपके पास कोई है) को config फ़ाइल में निर्दिष्ट करने की आवश्यकता नहीं है, केवल 'dataSources' को। यदि 'subgraph.yaml' फ़ाइल में कोई 'templates' घोषित किए गए हैं, तो उनका नेटवर्क स्वचालित रूप से उस नेटवर्क में अपडेट हो जाएगा जो 'network' विकल्प के साथ निर्दिष्ट किया गया है। -मान लीजिए कि आप अपने subgraph को mainnet और sepolia नेटवर्क पर डिप्लॉय करना चाहते हैं, और यह आपका subgraph.yaml है: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -97,7 +98,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -build कमांड आपके subgraph.yaml को sepolia कॉन्फ़िगरेशन के साथ अपडेट करेगा और फिर से subgraph को पुनः-कंपाइल करेगा। आपका subgraph.yaml फ़ाइल अब इस प्रकार दिखना चाहिए: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -128,7 +129,7 @@ yarn deploy --network sepolia --network-file path/to/config एक तरीका है 'graph-cli' के पुराने संस्करणों का उपयोग करके अनुबंध पते जैसी विशेषताओं को पैरामीटरित करना, जो कि एक टेम्पलेटिंग सिस्टम जैसे Mustache (https://mustache.github.io/) या Handlebars (https://handlebarsjs.com/) के साथ इसके कुछ हिस्सों को जनरेट करना है। -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. 
You could then define two config files providing the addresses for each network: ```json { @@ -180,7 +181,7 @@ dataSources: } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh मेननेट: @@ -194,25 +195,25 @@ yarn prepare && yarn deploy यह दृष्टिकोण अधिक जटिल परिस्थितियों में भी लागू किया जा सकता है, जहां अनुबंध पते और नेटवर्क नामों के अलावा अधिक को प्रतिस्थापित करने की आवश्यकता होती है या जहां टेम्पलेट से मैपिंग या ABIs उत्पन्न करने की आवश्यकता होती है। -यह आपको chainHeadBlock देगा जिसे आप अपने subgraph पर latestBlock के साथ तुलना कर सकते हैं यह जाँचने के लिए कि क्या यह पीछे चल रहा है। synced यह बताता है कि क्या subgraph कभी श्रृंखला के साथ मेल खा गया है। health वर्तमान में दो मान ले सकता है: healthy अगर कोई त्रुटियाँ नहीं हुई हैं, या failed अगर कोई त्रुटि हुई है जिसने subgraph की प्रगति को रोक दिया है। इस स्थिति में, आप इस त्रुटि के विवरण के लिए fatalError फ़ील्ड की जांच कर सकते हैं। +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
-## सबग्राफ स्टूडियो सबग्राफ संग्रह नीति +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -इस नीति से प्रभावित प्रत्येक सबग्राफ के पास विचाराधीन संस्करण को वापस लाने का विकल्प है। +Every Subgraph affected by this policy has an option to bring the version in question back. -## सबग्राफ स्वास्थ्य की जाँच करना +## Checking Subgraph health -यदि एक सबग्राफ सफलतापूर्वक सिंक हो जाता है, तो यह एक अच्छा संकेत है कि यह हमेशा के लिए अच्छी तरह से चलता रहेगा। हालांकि, नेटवर्क पर नए ट्रिगर्स के कारण आपका सबग्राफ एक अनुपयोगी त्रुटि स्थिति में आ सकता है या यह प्रदर्शन समस्याओं या नोड ऑपरेटरों के साथ समस्याओं के कारण पीछे पड़ना शुरू हो सकता है। +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node एक GraphQL endpoint को उजागर करता है जिसे आप अपने subgraph की स्थिति की जांच करने के लिए क्वेरी कर सकते हैं। होस्टेड सेवा पर, यह https://api.thegraph.com/index-node/graphql पर उपलब्ध है। एक स्थानीय नोड पर, यह डिफ़ॉल्ट रूप से पोर्ट 8030/graphql पर उपलब्ध है। इस endpoint के लिए पूरा स्कीमा यहां (https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql) पाया जा सकता है। यहां एक उदाहरण क्वेरी है जो एक subgraph के वर्तमान संस्करण की स्थिति की जांच करती है: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -239,4 +240,4 @@ Graph Node एक GraphQL endpoint को उजागर करता है } ``` -यह आपको chainHeadBlock देगा जिसे आप अपने subgraph पर latestBlock के साथ तुलना कर सकते हैं यह जाँचने के लिए कि क्या यह पीछे चल रहा है। synced यह बताता है कि क्या subgraph कभी श्रृंखला के साथ मेल खा गया है। health वर्तमान में दो मान ले सकता है: healthy अगर कोई त्रुटियाँ नहीं हुई हैं, या failed अगर कोई त्रुटि हुई है जिसने subgraph की प्रगति को रोक दिया है। इस स्थिति में, आप इस त्रुटि के विवरण के लिए fatalError फ़ील्ड की जांच कर सकते हैं। +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
diff --git a/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx index 3fa668ee3535..eab335f08623 100644 --- a/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -अपने subgraph को Subgraph Studio में डिप्लॉय करना सीखें। +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio का अवलोकन In Subgraph Studio,आप निम्नलिखित कर सकते हैं: -- आपने बनाए गए subgraphs की सूची देखें -- एक विशेष subgraph की स्थिति को प्रबंधित करें, विवरण देखें और दृश्य रूप में प्रदर्शित करें -- विशिष्ट सबग्राफ के लिए अपनी एपीआई keys बनाएं और प्रबंधित करें +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - अपने API कुंजी को विशेष डोमेन तक सीमित करें और केवल कुछ Indexers को उनके साथ क्वेरी करने की अनुमति दें -- अपना subgraph बनाएं -- अपने subgraph को The Graph CLI का उपयोग करके डिप्लॉय करें -- अपने 'subgraph' को 'playground' वातावरण में टेस्ट करें -- अपने स्टेजिंग में 'subgraph' को विकास क्वेरी URL का उपयोग करके एकीकृत करें -- अपने subgraph को The Graph Network पर प्रकाशित करें +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - अपने बिलिंग को प्रबंधित करें ## The Graph CLI स्थापित करें @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. खोलें [Subgraph Studio](https://thegraph.com/studio/). 2. अपने वॉलेट से साइन इन करें। - आप इसे MetaMask, Coinbase Wallet, WalletConnect, या Safe के माध्यम से कर सकते हैं। -3. साइन इन करने के बाद, आपका यूनिक डिप्लॉय की आपकी subgraph विवरण पृष्ठ पर प्रदर्शित होगा। - - Deploy key आपको अपने subgraphs को प्रकाशित करने या अपने API keys और billing को प्रबंधित करने की अनुमति देता है। यह अद्वितीय है लेकिन यदि आपको लगता है कि यह समझौता किया गया है, तो इसे पुनः उत्पन्न किया जा सकता है। +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> महत्वपूर्ण: आपको subgraphs को क्वेरी करने के लिए एक API कुंजी की आवश्यकता है +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### ग्राफ नेटवर्क के साथ सबग्राफ अनुकूलता -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- निम्नलिखित सुविधाओं में से किसी का उपयोग नहीं करना चाहिए: - - ipfs.cat & ipfs.map - - गैर-घातक त्रुटियाँ - - ग्राफ्टिंग +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## अपने Subgraph को प्रारंभ करें -एक बार जब आपका subgraph Subgraph Studio में बना दिया गया है, तो आप इस कमांड का उपयोग करके CLI के माध्यम से इसके कोड को प्रारंभ कर सकते हैं: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -आप `` मान को अपने subgraph विवरण पृष्ठ पर Subgraph Studio में पा सकते हैं, नीचे दी गई छवि देखें: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -`graph init` चलाने के बाद, आपसे संपर्क पता, नेटवर्क, और एक ABI इनपुट करने के लिए कहा जाएगा जिसे आप क्वेरी करना चाहते हैं। यह आपके स्थानीय मशीन पर एक नया फोल्डर उत्पन्न करेगा जिसमें आपके Subgraph पर काम करना शुरू करने के लिए कुछ मूल कोड होगा। आप फिर अपने Subgraph को अंतिम रूप दे सकते हैं ताकि यह सुनिश्चित किया जा सके कि यह अपेक्षित रूप से काम करता है। +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. 
You can then finalize your Subgraph to make sure it works as expected. ## ग्राफ प्रमाणीकरण -अपने subgraph को Subgraph Studio पर डिप्लॉय करने से पहले, आपको CLI के भीतर अपने खाते में लॉग इन करना होगा। ऐसा करने के लिए, आपको अपना deploy key चाहिए होगा, जिसे आप अपने subgraph विवरण पृष्ठ के तहत पा सकते हैं। +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. फिर, CLI से प्रमाणित करने के लिए निम्नलिखित आदेश का उपयोग करें: @@ -91,11 +85,11 @@ graph auth ## Subgraph डिप्लॉय करना -जब आप तैयार हों, तो आप अपना subgraph को Subgraph Studio पर डिप्लॉय कर सकते हैं। +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> CLI का उपयोग करके subgraph को डिप्लॉय करना उसे Studio में पुश करता है, जहां आप इसे टेस्ट कर सकते हैं और मेटाडेटा को अपडेट कर सकते हैं। यह क्रिया आपके subgraph को विकेंद्रीकृत नेटवर्क पर प्रकाशित नहीं करेगी। +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -निम्नलिखित CLI कमांड का उपयोग करके अपना subgraph डिप्लॉय करें: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ graph deploy ## अपने Subgraph का परीक्षण करें -डिप्लॉय करने के बाद, आप अपने subgraph का परीक्षण कर सकते हैं (या तो Subgraph Studio में या अपने ऐप में, डिप्लॉयमेंट क्वेरी URL के साथ), एक और संस्करण डिप्लॉय करें, मेटाडेटा को अपडेट करें, और जब आप तैयार हों, तो Graph Explorer(https://thegraph.com/explorer) पर प्रकाशित करें। +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. 
-Subgraph Studio का उपयोग करके डैशबोर्ड पर लॉग्स की जांच करें और अपने subgraph के साथ किसी भी त्रुटियों की तलाश करें। +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. ## अपने Subgraph को प्रकाशित करें -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## अपने Subgraph को CLI के साथ संस्करण बनाना -यदि आप अपने subgraph को अपडेट करना चाहते हैं, तो आप निम्नलिखित कर सकते हैं: +If you want to update your Subgraph, you can do the following: - आप स्टूडियो में CLI का उपयोग करके एक नया संस्करण डिप्लॉय कर सकते हैं (इस समय यह केवल निजी होगा)। - एक बार जब आप इससे संतुष्ट हो जाएं, तो आप अपने नए डिप्लॉयमेंट को Graph Explorer(https://thegraph.com/explorer). पर प्रकाशित कर सकते हैं। -- यह क्रिया आपके नए संस्करण का निर्माण करेगी जिसे Curators सिग्नल करना शुरू कर सकते हैं और Indexers अनुक्रमित कर सकते हैं। +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). 
If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## सबग्राफ संस्करणों का स्वचालित संग्रह -जब भी आप Subgraph Studio में एक नया subgraph संस्करण डिप्लॉय करते हैं, तो पिछले संस्करण को आर्काइव कर दिया जाएगा। आर्काइव किए गए संस्करणों को इंडेक्स/सिंक नहीं किया जाएगा और इसलिए उन्हें क्वेरी नहीं किया जा सकता। आप Subgraph Studio में अपने subgraph के आर्काइव किए गए संस्करण को अनआर्काइव कर सकते हैं। +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> नोट: स्टूडियो में डिप्लॉय किए गए गैर-प्रकाशित subgraphs के पिछले संस्करणों को स्वचालित रूप से आर्काइव किया जाएगा। +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. 
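The note above says publishing a new version requires funding part of the curation tax on the auto-migrating signal. As a rough sketch of the arithmetic only (the actual tax rate and the owner's share are protocol parameters, not the placeholder values below), the cost scales linearly with the signal being migrated:

```typescript
// Illustrative arithmetic only: the real curation tax rate and the share paid
// by the Subgraph owner are protocol parameters (see the curating docs).
function ownerCurationTax(
  migratedSignalGRT: number,
  taxRate: number, // assumed placeholder, e.g. 0.01 for 1%
  ownerShare: number, // fraction of the tax the owner covers; placeholder
): number {
  return migratedSignalGRT * taxRate * ownerShare;
}
```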
![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/hi/subgraphs/developing/developer-faq.mdx b/website/src/pages/hi/subgraphs/developing/developer-faq.mdx index 6eeb3c64ff7f..6a1923517667 100644 --- a/website/src/pages/hi/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/hi/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ sidebarTitle: FAQ ## सबग्रह संबंधित -### 1. सबग्राफ क्या है? +### 1. What is a Subgraph? -एक subgraph एक कस्टम API है जो ब्लॉकचेन डेटा पर आधारित है। subgraphs को GraphQL क्वेरी भाषा का उपयोग करके क्वेरी किया जाता है और इन्हें The Graph CLI का उपयोग करके Graph Node पर तैनात किया जाता है। एक बार तैनात और The Graph के विकेन्द्रीकृत नेटवर्क पर प्रकाशित होने के बाद, Indexers subgraphs को प्रोसेस करते हैं और उन्हें subgraph उपभोक्ताओं के लिए क्वेरी करने के लिए उपलब्ध कराते हैं। +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. एक Subgraph बनाने का पहला कदम क्या है? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. क्या मैं अभी भी एक subgraph बना सकता हूँ यदि मेरी स्मार्ट कॉन्ट्रैक्ट्स में कोई इवेंट्स नहीं हैं? +### 3. Can I still create a Subgraph if my smart contracts don't have events? 
-यह अत्यधिक अनुशंसित है कि आप अपने स्मार्ट अनुबंधों को इस तरह से संरचित करें कि उन डेटा के साथ घटनाएँ हों जिनमें आपकी रुचि है। अनुबंध की घटनाओं द्वारा संचालित 'event handlers' को Subgraph में ट्रिगर किया जाता है और यह उपयोगी डेटा प्राप्त करने का सबसे तेज़ तरीका है।
+It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data.

-अगर आप जिन अनुबंधों के साथ काम कर रहे हैं, उनमें घटनाएँ नहीं हैं, तो आपका subgraph कॉल और ब्लॉक हैंडलर्स का उपयोग कर सकता है ताकि इंडेक्सिंग को ट्रिगर किया जा सके। हालाँकि, यह अनुशंसित नहीं है, क्योंकि प्रदर्शन काफी धीमा होगा।
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.

-### 4. क्या मैं अपने सबग्राफ से जुड़े GitHub खाते को बदल सकता हूँ?
+### 4. Can I change the GitHub account associated with my Subgraph?

-एक बार जब एक subgraph बनाया जाता है, तो संबंधित GitHub खाता नहीं बदला जा सकता है। कृपया अपने subgraph को बनाने से पहले इसे ध्यान से विचार करें।
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.

-### 5. मैं मुख्य नेटवर्क पर एक subgraph को कैसे अपडेट करूँ?
+### 5. How do I update a Subgraph on mainnet?

-आप अपने subgraph का नया संस्करण Subgraph Studio में CLI का उपयोग करके डिप्लॉय कर सकते हैं। यह क्रिया आपके subgraph को निजी रखती है, लेकिन जब आप इससे खुश हों, तो आप Graph Explorer में इसे प्रकाशित कर सकते हैं। इससे आपके subgraph का एक नया संस्करण बनेगा जिस पर Curators सिग्नल करना शुरू कर सकते हैं।
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer.
This will create a new version of your Subgraph that Curators can start signaling on. -### 6. एक Subgraph को दूसरे खाते या एंडपॉइंट पर बिना पुनः तैनात किए डुप्लिकेट करना संभव है? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -आपको सबग्राफ को फिर से तैनात करना होगा, लेकिन अगर सबग्राफ आईडी (आईपीएफएस हैश) नहीं बदलता है, तो इसे शुरुआत से सिंक नहीं करना पड़ेगा। +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. आप अपने subgraph mappings से एक contract function को कैसे कॉल करें या एक सार्वजनिक state variable तक कैसे पहुँचें? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? AssemblyScript में वर्तमान में मैपिंग्स नहीं लिखी जा रही हैं। @@ -45,15 +45,15 @@ AssemblyScript में वर्तमान में मैपिंग् ### 9. कई कॉन्ट्रैक्ट सुनते समय, क्या घटनाओं को सुनने के लिए कॉन्ट्रैक्ट के क्रम का चयन करना संभव है? -एक सबग्राफ के भीतर, घटनाओं को हमेशा उसी क्रम में संसाधित किया जाता है जिस क्रम में वे ब्लॉक में दिखाई देते हैं, भले ही वह कई अनुबंधों में हो या नहीं। +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. टेम्प्लेट्स और डेटा स्रोतों में क्या अंतर है? 
-Templates आपको डेटा स्रोतों को तेजी से बनाने की अनुमति देते हैं, जबकि आपका subgraph इंडेक्सिंग कर रहा है। आपका कॉन्ट्रैक्ट नए कॉन्ट्रैक्ट उत्पन्न कर सकता है जब लोग इसके साथ इंटरैक्ट करते हैं। चूंकि आप उन कॉन्ट्रैक्टों का आकार (ABI, इवेंट, आदि) पहले से जानते हैं, आप यह निर्धारित कर सकते हैं कि आप उन्हें एक टेम्पलेट में कैसे इंडेक्स करना चाहते हैं। जब वे उत्पन्न होते हैं, तो आपका subgraph कॉन्ट्रैक्ट पते को प्रदान करके एक डायनामिक डेटा स्रोत बनाएगा। +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. क्या मैं अपना subgraph हटा सकता हूँ? +### 15. Can I delete my Subgraph? 
-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## नेटवर्क से संबंधित। @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. यहां कुछ सुझाव दिए गए हैं ताकि इंडेक्सिंग का प्रदर्शन बढ़ सके। मेरा subgraph बहुत लंबे समय तक सिंक होने में समय ले रहा है। +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. क्या कोई तरीका है कि 'subgraph' को सीधे क्वेरी करके यह पता लगाया जा सके कि उसने कौन सा लेटेस्ट ब्लॉक नंबर इंडेक्स किया है? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? हाँ! निम्न आदेश का प्रयास करें, "संगठन/सबग्राफनाम" को उस संगठन के साथ प्रतिस्थापित करें जिसके अंतर्गत वह प्रकाशित है और आपके सबग्राफ का नाम: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. 
That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## विविध diff --git a/website/src/pages/hi/subgraphs/developing/introduction.mdx b/website/src/pages/hi/subgraphs/developing/introduction.mdx index 12e2aba18447..cc7e3f61d20d 100644 --- a/website/src/pages/hi/subgraphs/developing/introduction.mdx +++ b/website/src/pages/hi/subgraphs/developing/introduction.mdx @@ -5,27 +5,27 @@ sidebarTitle: Introduction To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start/). -## अवलोकन +## Overview एक डेवलपर के रूप में, आपको अपने dapp को बनाने और शक्ति प्रदान करने के लिए डेटा की आवश्यकता होती है। ब्लॉकचेन डेटा को क्वेरी करना और इंडेक्स करना चुनौतीपूर्ण होता है, लेकिन The Graph इस समस्या का समाधान प्रदान करता है। The Graph पर, आप: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. मौजूदा subgraphs को क्वेरी करने के लिए GraphQL का उपयोग करें। +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### GraphQL क्या है? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. 
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### डेवलपर क्रियाएँ -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- विशिष्ट डेटा आवश्यकताओं को पूरा करने के लिए कस्टम सबग्राफ़ बनाएं, जिससे अन्य डेवलपर्स के लिए स्केलेबिलिटी और लचीलापन में सुधार हो सके। -- अपने subgraphs को The Graph Network में तैनात करें, प्रकाशित करें और संकेत दें। +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### सबग्राफ़ क्या हैं? +### What are Subgraphs? -एक Subgraph एक कस्टम API है जो ब्लॉकचेन डेटा पर आधारित होता है। यह ब्लॉकचेन से डेटा निकालता है, उसे प्रोसेस करता है, और उसे इस तरह से संग्रहित करता है कि उसे GraphQL के माध्यम से आसानी से क्वेरी किया जा सके। +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
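As a concrete, illustrative example of querying existing Subgraphs, the helpers below cover two common patterns: reading the latest indexed block from the `_meta` field, and computing `first`/`skip` pagination parameters. The type and function names are assumptions made for this sketch, not an official client API.

```typescript
// Shape of a response to `{ _meta { block { number } } }` from Graph Node.
interface MetaResponse {
  data: { _meta: { block: { number: number } } };
}

// Pull the latest indexed block number out of a _meta query response.
function latestIndexedBlock(res: MetaResponse): number {
  return res.data._meta.block.number;
}

// first/skip parameters for the nth page (0-based) of a collection query.
function pageParams(page: number, pageSize: number = 1000): { first: number; skip: number } {
  return { first: pageSize, skip: page * pageSize };
}
```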
diff --git a/website/src/pages/hi/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/managing/deleting-a-subgraph.mdx index e0889b86b0ab..02fdc71480ef 100644 --- a/website/src/pages/hi/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/hi/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## चरण-दर-चरण -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- क्यूरेटर अब सबग्राफ पर संकेत नहीं दे पाएंगे। -- Subgraph पर पहले से संकेत कर चुके Curators औसत शेयर मूल्य पर अपना संकेत वापस ले सकते हैं। -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/hi/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/managing/transferring-a-subgraph.mdx index 1b71f96fd6e8..007eaa76acc2 100644 --- a/website/src/pages/hi/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/hi/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -विभिन्न नेटवर्क पर प्रकाशित subgraphs के लिए उस पते पर एक NFT जारी किया गया है जिसने subgraph प्रकाशित किया। NFT एक मानक ERC721 पर आधारित है, जो The Graph नेटवर्क पर खातों के बीच स्थानांतरण की सुविधा देता है। +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## अनुस्मारक -- जो भी 'NFT' का मालिक है, वह subgraph को नियंत्रित करता है। -- यदि मालिक 'NFT' को बेचने या स्थानांतरित करने का निर्णय लेता है, तो वे नेटवर्क पर उस subgraph को संपादित या अपडेट नहीं कर पाएंगे। -- आप आसानी से एक subgraph का नियंत्रण एक multi-sig में स्थानांतरित कर सकते हैं। -- एक समुदाय का सदस्य DAO की ओर से एक subgraph बना सकता है। +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## अपने 'subgraph' को एक NFT के रूप में देखें -अपने 'subgraph' को एक NFT के रूप में देखने के लिए, आप एक NFT मार्केटप्लेस जैसे OpenSea पर जा सकते हैं: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## चरण-दर-चरण -एक Subgraph का स्वामित्व स्थानांतरित करने के लिए, निम्नलिखित करें: +To transfer ownership of a Subgraph, do the following: 1. 'Subgraph Studio' में निर्मित UI का उपयोग करें: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. उस पते का चयन करें जिसे आप 'subgraph' को स्थानांतरित करना चाहेंगे: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 51a773bb8012..4de4472caf4c 100644 --- a/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: विकेंद्रीकृत नेटवर्क के लिए एक सबग्राफ प्रकाशित करना +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -जब आप एक subgraph को विकेंद्रीकृत नेटवर्क पर प्रकाशित करते हैं, तो आप इसे उपलब्ध कराते हैं: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -एक मौजूदा subgraph के सभी प्रकाशित संस्करण कर सकते हैं: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### प्रकाशित सबग्राफ के लिए मेटाडेटा अपडेट करना +### Updating metadata for a published Subgraph -- अपने सबग्राफ को विकेंद्रीकृत नेटवर्क पर प्रकाशित करने के बाद, आप Subgraph Studio में किसी भी समय मेटाडेटा को अपडेट कर सकते हैं। +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - एक बार जब आप अपने परिवर्तनों को सहेज लेते हैं और अपडेट प्रकाशित कर देते हैं, तो वे Graph Explorer में दिखाई देंगे। - यह ध्यान रखना महत्वपूर्ण है कि इस प्रक्रिया से कोई नया संस्करण नहीं बनेगा क्योंकि आपका डिप्लॉयमेंट नहीं बदला है। ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. `graph-cli` खोलें। 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. एक विंडो खुलेगी, जो आपको अपनी वॉलेट कनेक्ट करने, मेटाडेटा जोड़ने, और अपने अंतिम Subgraph को आपकी पसंद के नेटवर्क पर डिप्लॉय करने की अनुमति देगी। +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### अपने डिप्लॉयमेंट को अनुकूलित करना -आप अपने Subgraph बिल्ड को एक विशेष IPFSनोड पर अपलोड कर सकते हैं और निम्नलिखित फ्लैग्स के साथ अपने डिप्लॉयमेंट को और अनुकूलित कर सकते हैं: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -डेवलपर्स अपने Subgraph में GRT सिग्नल जोड़ सकते हैं ताकि Indexer को Subgraph पर क्वेरी करने के लिए प्रेरित किया जा सके। +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- यदि कोई Subgraph इंडेक्सिंग पुरस्कारों के लिए पात्र है, तो जो Indexer "इंडेक्सिंग का प्रमाण" प्रदान करते हैं, उन्हें संकेतित GRTकी मात्रा के आधार पर GRT पुरस्कार मिलेगा। +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. 
>
-यदि आपका Subgraph पुरस्कारों के लिए पात्र है, तो यह अनुशंसा की जाती है कि आप अपने Subgraph को कम से कम 3,000 GRT के साथ क्यूरेट करें ताकि अधिक Indexer को आपके सबग्राफ़ को इंडेक्स करने के लिए आकर्षित किया जा सके।
+> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph.

-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.

-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
-![Explorer सबग्राफ](/img/explorer-subgraphs.png)
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.

-Subgraph Studio आपको अपने सबग्राफ़ में सिग्नल जोड़ने की सुविधा देता है, जिसमें आप अपने सबग्राफ़ के क्यूरेशन पूल में उसी लेन-देन के साथ GRT जोड़ सकते हैं, जब इसे प्रकाशित किया जाता है.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. + ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/hi/subgraphs/developing/subgraphs.mdx b/website/src/pages/hi/subgraphs/developing/subgraphs.mdx index 153754823989..03d4a6ad952d 100644 --- a/website/src/pages/hi/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/hi/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: सबग्राफ ## Subgraph क्या है? -एक subgraph एक कस्टम, ओपन API है जो एक ब्लॉकचेन से डेटा निकालता है, उसे प्रोसेस करता है, और उसे इस तरह से स्टोर करता है कि उसे GraphQL के माध्यम से आसानी से क्वेरी किया जा सके। +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph क्षमताएँ - डेटा एक्सेस करें: Subgraphs web3 के लिए ब्लॉकचेन डेटा के क्वेरी और इंडेक्सिंग को सक्षम बनाते हैं। -- बनाएँ: डेवलपर्स The Graph Network पर subgraphs बना सकते हैं, डिप्लॉय कर सकते हैं और प्रकाशित कर सकते हैं। शुरुआत करने के लिए, subgraph डेवलपर Quick Start(quick-start/) देखें। -- इंडेक्स और क्वेरी: एक बार जब एक subgraph को इंडेक्स किया जाता है, तो कोई भी इसे क्वेरी कर सकता है।GraphExplorer(https://thegraph.com/explorer) में नेटवर्क पर प्रकाशित सभी subgraphs का अन्वेषण और क्वेरी करें। +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
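When mappings create entities, a common ID scheme (also noted in the developer FAQ earlier in this diff) is transaction hash plus log index. Real mappings are written in AssemblyScript using graph-ts types; the plain-TypeScript helper below is only a sketch of that scheme:

```typescript
// Sketch only: real mappings use graph-ts Bytes values, e.g.
// event.transaction.hash.concatI32(event.logIndex.toI32()). Plain strings
// stand in for those types here, purely for illustration.
function entityId(txHash: string, logIndex: number): string {
  return `${txHash}-${logIndex.toString()}`;
}
```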
## एक Subgraph के अंदर -subgraph मैनिफेस्ट, subgraph.yaml, उन स्मार्ट कॉन्ट्रैक्ट्स और नेटवर्क को परिभाषित करता है जिन्हें आपका subgraph इंडेक्स करेगा, इन कॉन्ट्रैक्ट्स से ध्यान देने योग्य इवेंट्स, और इवेंट डेटा को उन संस्थाओं के साथ मैप करने का तरीका जिन्हें Graph Node स्टोर करता है और जिन्हें क्वेरी करने की अनुमति देता है। +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -**subgraph definition** में निम्नलिखित फ़ाइलें शामिल हैं: +The **Subgraph definition** consists of the following files: -- subgraph.yaml: में subgraph मैनिफेस्ट शामिल है +- `subgraph.yaml`: Contains the Subgraph manifest -- schema.graphql: एक GraphQL स्कीमा जो आपके लिए डेटा को परिभाषित करता है और इसे GraphQL के माध्यम से क्वेरी करने का तरीका बताता है. +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -प्रत्येक उपग्राफ घटक के बारे में अधिक जानने के लिए, देखें creating a subgraph(/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## सबग्राफ जीवनचक्र -यहाँ एक Subgraph के जीवनचक्र का सामान्य अवलोकन है। +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph जीवनचक्र ](/img/subgraph-lifecycle.png) ## सबग्राफ विकास -1. [एक subgraph बनाएँ](/developing/creating-a-subgraph/) -2. [डिप्लॉय a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [एक 'subgraph' का परीक्षण करें](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. 
[Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. 
+- Use its staging environment to index the deployed Subgraph and make it available for review. +- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. 
It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. -- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. 
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. ### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/hi/subgraphs/explorer.mdx b/website/src/pages/hi/subgraphs/explorer.mdx index f0b92dfd72b1..64a671781463 100644 --- a/website/src/pages/hi/subgraphs/explorer.mdx +++ b/website/src/pages/hi/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). -## अवलोकन +## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. ## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- आपके अपने तैयार किए गए subgraphs +- Your own finished Subgraphs - दूसरों द्वारा प्रकाशित subgraphs -- आपके पास जिस विशेष subgraph की आवश्यकता है (निर्माण की तारीख, सिग्नल राशि, या नाम के आधार पर)। +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -जब आप एक subgraph पर क्लिक करते हैं, तो आप निम्नलिखित कर सकेंगे: +When you click into a Subgraph, you will be able to do the following: - प्लेग्राउंड में परीक्षण प्रश्न करें और सूचनापूर्ण निर्णय लेने के लिए नेटवर्क विवरण का उपयोग करें। -- अपने स्वयं के subgraph या दूसरों के subgraphs पर GRT का सिग्नल दें ताकि indexers इसकी महत्ता और गुणवत्ता के बारे में जागरूक हो सकें। +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. 
- - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -हर subgraph के समर्पित पृष्ठ पर, आप निम्नलिखित कार्य कर सकते हैं: +On each Subgraph’s dedicated page, you can do the following: -- सबग्राफ पर सिग्नल/अन-सिग्नल +- Signal/Un-signal on Subgraphs - चार्ट, वर्तमान परिनियोजन आईडी और अन्य मेटाडेटा जैसे अधिक विवरण देखें -- सबग्राफ के पिछले पुनरावृत्तियों का पता लगाने के लिए संस्करणों को स्विच करें -- ग्राफ़क्यूएल के माध्यम से क्वेरी सबग्राफ -- खेल के मैदान में टेस्ट सबग्राफ -- उन अनुक्रमणकों को देखें जो एक निश्चित सबग्राफ पर अनुक्रमणित कर रहे हैं +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - सबग्राफ आँकड़े (आवंटन, क्यूरेटर, आदि) -- उस इकाई को देखें जिसने सबग्राफ प्रकाशित किया था +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexer प्रोटोकॉल की रीढ़ हैं। वे सबग्राफ पर स्टेक करते हैं, उन्हें इंडेक्स करते हैं, और उन सभी को प्रश्न प्रदान करते हैं जो सबग्राफ का उपभोग करते हैं। +Indexers are the backbone of the protocol. 
They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -Indexers तालिका में, आप Indexers के डेलीगेशन पैरामीटर, उनकी स्टेक, प्रत्येक subgraph के लिए उन्होंने कितना स्टेक किया है, और उन्होंने प्रश्न शुल्क और इंडेक्सिंग पुरस्कारों से कितना राजस्व प्राप्त किया है, देख सकते हैं। +In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **विशिष्टताएँ** @@ -74,7 +74,7 @@ Indexers तालिका में, आप Indexers के डेलीगे - कूलडाउन शेष - वह समय जो उपरोक्त डेलीगेशन पैरामीटर को बदलने के लिए Indexer को बचा है। कूलडाउन अवधि वे होती हैं जो Indexers अपने डेलीगेशन पैरामीटर को अपडेट करते समय सेट करते हैं। - यह है Indexer का जमा किया गया हिस्सेदारी, जिसे दुष्ट या गलत व्यवहार के लिए काटा जा सकता है। - प्रतिनिधि - 'Delegators' से स्टेक जो 'Indexers' द्वारा आवंटित किया जा सकता है, लेकिन इसे स्लैश नहीं किया जा सकता। -आवंटित- वह स्टेक है जिसे Indexers उन subgraphs के लिए सक्रिय रूप से आवंटित कर रहे हैं जिन्हें वे इंडेक्स कर रहे हैं। +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - अवेलबल डेलीगेशन कैपेसिटी - वह मात्रा जो डेलीगेटेड स्टेक है, जो Indexers अभी भी प्राप्त कर सकते हैं इससे पहले कि वे ओवर-डेलीगेटेड हो जाएं। - अधिकतम प्रत्यायोजन क्षमता - प्रत्यायोजित हिस्सेदारी की अधिकतम राशि जिसे इंडेक्सर उत्पादक रूप से स्वीकार कर सकता है। आवंटन या पुरस्कार गणना के लिए एक अतिरिक्त प्रत्यायोजित हिस्सेदारी का उपयोग नहीं किया जा सकता है। - क्वेरी शुल्क - यह कुल शुल्क है जो अंतिम उपयोगकर्ताओं ने सभी समय में एक Indexer से क्वेरी के लिए भुगतान किया है। @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. 
क्यूरेटर -क्यूरेटर subgraphs का विश्लेषण करते हैं ताकि यह पहचान सकें कि कौन से subgraphs उच्चतम गुणवत्ता के हैं। एक बार जब एक क्यूरेटर एक संभावित उच्च गुणवत्ता वाले subgraph को खोज लेता है, तो वे इसके बॉन्डिंग कर्व पर सिग्नल देकर इसे क्यूरेट कर सकते हैं। ऐसा करके, क्यूरेटर इंडेक्सर्स को बताते हैं कि कौन से subgraphs उच्च गुणवत्ता के हैं और उन्हें इंडेक्स किया जाना चाहिए। +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- क्यूरेटर समुदाय के सदस्य, डेटा उपभोक्ता, या यहां तक कि अपने Subgraphs पर संकेत देने के लिए GRT टोकन को बॉन्डिंग कर्व में जमा करके अपने स्वयं के Subgraph पर संकेत देने वाले सबग्रह डेवलपर्स भी हो सकते हैं। - - GRT जमा करके, Curators एक subgraph के curation shares का निर्माण करते हैं। इसके परिणामस्वरूप, वे उस subgraph से उत्पन्न query fees का एक भाग अर्जित कर सकते हैं जिस पर उन्होंने संकेत दिया है। +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - "Bonding curve" क्यूरेटर्स को सबसे उच्च गुणवत्ता वाले डेटा स्रोतों को क्यूरेट करने के लिए प्रोत्साहित करता है। यहां 'Curator' तालिका में नीचे दी गई जानकारी को देख सकते हैं: @@ -131,7 +131,7 @@ If you want to learn more about how to become a Delegator, check out the [offici On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. 
-#### अवलोकन +#### Overview ओवरव्यू सेक्शन में वर्तमान नेटवर्क मैट्रिक्स और समय के साथ कुछ संचयी मैट्रिक्स दोनों शामिल हैं: @@ -144,8 +144,8 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep कुछ महत्वपूर्ण विवरण नोट करने के लिए: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep ### सबग्राफ टैब -सबग्राफ टैब में, आप अपने प्रकाशित सबग्राफ को देखेंगे। +In the Subgraphs tab, you’ll see your published Subgraphs. 
-> यह उन subgraphs को शामिल नहीं करेगा जो परीक्षण उद्देश्यों के लिए CLI के साथ तैनात किए गए हैं। subgraphs तब ही दिखाई देंगे जब उन्हें विकेंद्रीकृत नेटवर्क पर प्रकाशित किया जाएगा। +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### अनुक्रमण टैब -इंडेक्सिंग टैब में, आपको एक तालिका मिलेगी जिसमें सभी सक्रिय और ऐतिहासिक आवंटन सबग्राफ के प्रति हैं। आप चार्ट भी पाएंगे जहां आप एक Indexerके रूप में अपने पिछले प्रदर्शन को देख और विश्लेषण कर सकते हैं। +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. इस खंड में आपके नेट इंडेक्सर रिवार्ड्स और नेट क्वेरी फीस के विवरण भी शामिल होंगे। आपको ये मेट्रिक दिखाई देंगे: @@ -223,13 +223,13 @@ Delegator ,The Graph नेटवर्क के लिए महत्वप ### क्यूरेटिंग टैब -क्यूरेशन टैब में, आपको वे सभी सबग्राफ मिलेंगे जिन पर आप संकेत कर रहे हैं (इस प्रकार आपको क्वेरी शुल्क प्राप्त करने में सक्षम बनाता है)। सिग्नलिंग क्यूरेटर को इंडेक्सर्स को हाइलाइट करने की अनुमति देता है जो उपग्राफ मूल्यवान और भरोसेमंद हैं, इस प्रकार संकेत देते हैं कि उन्हें अनुक्रमित करने की आवश्यकता है। +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. 
इस टैब के भीतर, आपको इसका अवलोकन मिलेगा: -- सभी सबग्राफ आप सिग्नल विवरण के साथ क्यूरेट कर रहे हैं -- प्रति सबग्राफ शेयर योग -- क्वेरी पुरस्कार प्रति सबग्राफ +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - दिनांक विवरण पर अद्यतन किया गया ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/hi/subgraphs/guides/_meta.js b/website/src/pages/hi/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/hi/subgraphs/guides/_meta.js +++ b/website/src/pages/hi/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/hi/subgraphs/guides/arweave.mdx b/website/src/pages/hi/subgraphs/guides/arweave.mdx index 08e6c4257268..505f7ddd5785 100644 --- a/website/src/pages/hi/subgraphs/guides/arweave.mdx +++ b/website/src/pages/hi/subgraphs/guides/arweave.mdx @@ -1,61 +1,61 @@ --- -title: Building Subgraphs on Arweave +title: आरवीव पर सब-ग्राफ्स बनाना --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! +> Arweave समर्थन Graph Node और सबग्राफ Studio में बीटा में है: कृपया हमसे [Discord](https://discord.gg/graphprotocol) पर संपर्क करें यदि आपके पास Arweave सबग्राफ बनाने के बारे में कोई प्रश्न हैं! -In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +इस गाइड में आप आरवीव ब्लॉकचेन को इंडेक्स करने के लिए सब-ग्राफ्स बनाना और डिप्लॉय करना सीखेंगे। -## What is Arweave? +## आरवीव क्या है? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. 
+आरवीव प्रोटोकॉल डेवेलपर्स को स्थायी तौर पर डाटा स्टोर करने की क्षमता देता है जो कि IPFS और आरवीव के बीच का मुख्य अंतर भी है, जहाँ IPFS में इस क्षमता की कमी है, वहीं आरवीव पर फाइल्स डिलीट या बदली नहीं जा सकतीं। -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check: +आरवीव द्वारा पहले से ही कई लाइब्रेरी विभिन्न प्रोग्रामिंग भाषाओं में विकसित की गई हैं। अधिक जानकारी के लिए आप इनका रुख कर सकते हैं: - [Arwiki](https://arwiki.wiki/#/en/main) - [Arweave Resources](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## आरवीवे सब ग्राफ्स क्या हैं? -The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). +The Graph आपको कस्टम ओपन API बनाने की सुविधा देता है, जिन्हें "Subgraphs" कहा जाता है। Subgraphs का उपयोग Indexers (सर्वर ऑपरेटर्स) को यह बताने के लिए किया जाता है कि ब्लॉकचेन पर कौन सा डेटा Indexing करना है और इसे उनके सर्वर पर सहेजना है, ताकि आप इसे किसी भी समय [GraphQL](https://graphql.org/) का उपयोग करके क्वेरी कर सकें। -[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. +[Graph Node](https://github.com/graphprotocol/graph-node) अब Arweave protocol पर डेटा को इंडेक्स करने में सक्षम है। वर्तमान इंटीग्रेशन केवल Arweave को एक ब्लॉकचेन के रूप में indexing कर रहा है (blocks and transactions), यह अभी संग्रहीत फ़ाइलों को indexing नहीं कर रहा है। -## Building an Arweave Subgraph +## एक आरवीव सब ग्राफ बनाना -To be able to build and deploy Arweave Subgraphs, you need two packages: +आरवीवे पर सब ग्राफ बनाने के लिए हमें दो पैकेजेस की जरूरत है: -1. 
`@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` संस्करण 0.30.2 से ऊपर - यह एक कमांड-लाइन टूल है जो सबग्राफ बनाने और डिप्लॉय करने के लिए उपयोग किया जाता है। [यहाँ क्लिक करें](https://www.npmjs.com/package/@graphprotocol/graph-cli) `npm` का उपयोग करके डाउनलोड करने के लिए। +2. `@graphprotocol/graph-ts` संस्करण 0.27.0 से ऊपर - यह Subgraph-specific types की एक लाइब्रेरी है। [यहाँ क्लिक करें](https://www.npmjs.com/package/@graphprotocol/graph-ts) इसे `npm` का उपयोग करके डाउनलोड करने के लिए। -## Subgraph's components +## सब ग्राफ के कॉम्पोनेन्ट -There are three components of a Subgraph: +एक Subgraph के तीन घटक होते हैं: - -### 1. Manifest - `subgraph.yaml` +### 1. मैनिफेस्ट - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +डाटा के स्रोत और उनको प्रोसेस करने के तरीके के बारे में बताता है। आरवीव एक नए प्रकार का डाटा सोर्स है। -### 2. Schema - `schema.graphql` +### 2. स्कीमा - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +यहाँ आप बताते हैं कि आप कौन सा डाटा इंडेक्सिंग के बाद क्वेरी करना चाहते हैं। दरअसल यह एक API के मॉडल जैसा है, जहाँ मॉडल द्वारा रिक्वेस्ट बॉडी का स्ट्रक्चर परिभाषित किया जाता है। -The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). 
+आर्वीव सबग्राफ के लिए आवश्यकताओं को [मौजूदा दस्तावेज़ीकरण](/developing/creating-a-subgraph/#the-graphql-schema) द्वारा कवर किया गया है। -### 3. AssemblyScript Mappings - `mapping.ts` +### 3. AssemblyScript मैपिंग्स - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +यह किसी के द्वारा इस्तेमाल किये जा रहे डाटा सोर्स से डाटा को पुनः प्राप्त करने और स्टोर करने के लॉजिक को बताता है। डाटा अनुवादित होकर आपके द्वारा सूचीबद्ध स्कीमा के अनुसार स्टोर हो जाता है। -During Subgraph development there are two key commands: +Subgraph को बनाते वक़्त दो मुख्य कमांड हैं: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## सब ग्राफ मैनिफेस्ट की परिभाषा -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: +सबग्राफ manifest `subgraph.yaml` उन डेटा स्रोतों की पहचान करता है जिनका उपयोग सबग्राफ के लिए किया जाता है, वे ट्रिगर जो रुचि के हैं, और वे फ़ंक्शन जो उन ट्रिगर्स के जवाब में चलाए जाने चाहिए। नीचे Arweave सबग्राफ के लिए एक उदाहरण सबग्राफ manifest दिया गया है: ```yaml specVersion: 1.3.0 @@ -82,30 +82,30 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave Subgraphs introduce a new kind of data source (`arweave`) -- The network should correspond to a network on the hosting Graph Node. 
In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Arweave सबग्राफ एक नए प्रकार के डेटा स्रोत (`arweave`) को प्रस्तुत करते हैं +- नेटवर्क को होस्टिंग Graph Node पर मौजूद नेटवर्क से मेल खाना चाहिए। सबग्राफ Studio में, Arweave का मुख्य नेटवर्क `arweave-mainnet` है। +- अरवीव डाटा सोर्स द्वारा एक वैकल्पिक source.owner फील्ड लाया गया, जो कि एक आरवीव वॉलेट की पब्लिक key है। -Arweave data sources support two types of handlers: +आरवीव डाटा सोर्स द्वारा दो प्रकार के हैंडलर्स उपयोग किये जा सकते हैं: -- `blockHandlers` - Run on every new Arweave block. No source.owner is required. -- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` +- `blockHandlers` - हर नए Arweave ब्लॉक पर चलाया जाता है। कोई source.owner आवश्यक नहीं है। +- `transactionHandlers` - प्रत्येक लेन-देन(transaction) पर चलाया जाता है जहाँ डेटा स्रोत का `source.owner` मालिक होता है। वर्तमान में, `transactionHandlers` के लिए एक मालिक आवश्यक है, यदि उपयोगकर्ता सभी लेन-देन(transaction) को प्रोसेस करना चाहते हैं, तो उन्हें `source.owner` के रूप में "" प्रदान करना चाहिए। -> The source.owner can be the owner's address, or their Public Key. +> यहां source.owner ओनर का एड्रेस या उनका पब्लिक की हो सकता है। +> +> ट्रांसक्शन आरवीव परमावेब के लिए निर्माण खंड (बिल्डिंग ब्लॉक्स) की तरह होते हैं और एन्ड-यूजर के द्वारा बनाये गए ऑब्जेक्ट होते हैं। +> +> Note: [Irys (पहले Bundlr)](https://irys.xyz/) लेन-देन(transaction) अभी समर्थित नहीं हैं। -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. +## स्कीमा की परिभाषा -> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. 
+Schema definition परिणामी सबग्राफ डेटाबेस की संरचना और entities के बीच संबंधों का वर्णन करता है। यह मूल डेटा स्रोत से स्वतंत्र होता है। सबग्राफ schema definition के बारे में अधिक विवरण [यहाँ](/developing/creating-a-subgraph/#the-graphql-schema) उपलब्ध है। -## Schema Definition +## असेंबली स्क्रिप्ट मैप्पिंग्स -Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +इवेंट्स को प्रोसेस करने के लिए handlers [AssemblyScript](https://www.assemblyscript.org/) में लिखे गए हैं। -## AssemblyScript Mappings - -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). - -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +Arweave indexing [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) में Arweave-विशिष्ट डेटा प्रकार प्रस्तुत करता है। ```tsx class Block { @@ -146,51 +146,51 @@ class Transaction { } ``` -Block handlers receive a `Block`, while transactions receive a `Transaction`. +ब्लॉक हैंडलर एक `Block` प्राप्त करते हैं, जबकि ट्रांज़ेक्शन हैंडलर एक `Transaction` प्राप्त करते हैं। -Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). +Arweave सबग्राफ का मैपिंग लिखना Ethereum सबग्राफ के मैपिंग लिखने के बहुत समान है। अधिक जानकारी के लिए, [यहाँ क्लिक करें](/developing/creating-a-subgraph/#writing-mappings)। ## Deploying an Arweave Subgraph in Subgraph Studio -Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+एक बार जब आपका सबग्राफ आपके सबग्राफ Studio डैशबोर्ड पर बना लिया जाता है, तो आप `graph deploy` CLI कमांड का उपयोग करके इसे डिप्लॉय कर सकते हैं। ```bash graph deploy --access-token ``` -## Querying an Arweave Subgraph +## आरवीव सब-ग्राफ क्वेरी करना -The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint Arweave सबग्राफ के लिए schema परिभाषा द्वारा निर्धारित किया जाता है, जिसमें मौजूदा API इंटरफ़ेस होता है। अधिक जानकारी के लिए कृपया [GraphQL API documentation](/subgraphs/querying/graphql-api/) देखें। -## Example Subgraphs +## सब-ग्राफ के उदाहरण -Here is an example Subgraph for reference: +यहाँ संदर्भ के लिए एक उदाहरण सबग्राफ दिया गया है: - -- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [उदाहरण सबग्राफ for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a Subgraph index Arweave and other chains? +### क्या सबग्राफ Arweave और अन्य चेन को इंडेक्स कर सकता है? -No, a Subgraph can only support data sources from one chain/network. +नहीं, एक सब-ग्राफ केवल एक चेन/नेटवर्क से डाटा सोर्स को सपोर्ट कर सकता है। -### Can I index the stored files on Arweave? +### क्या मैं आरवीव पर स्टोर की फाइल्स को इंडेक्स कर सकता हूँ? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +वर्तमान में द ग्राफ आरवीव को केवल एक ब्लॉकचेन की तरह इंडेक्स करता है (उसके ब्लॉक्स और ट्रांसक्शन्स)। -### Can I identify Bundlr bundles in my Subgraph? +### क्या मैं अपने Subgraph में Bundlr bundles की पहचान कर सकता हूँ? -This is not currently supported. +यह वर्तमान में सपोर्टेड नहीं है। -### How can I filter transactions to a specific account? +### मैं किसी विशिष्ट अकाउंट के ट्रांसक्शन्स कैसे फ़िल्टर कर सकता हूँ? 
-The source.owner can be the user's public key or account address. +एक यूजर का पब्लिक की या अकाउंट एड्रेस source.owner हो सकता है। -### What is the current encryption format? +### वर्तमान एन्क्रिप्शन फॉर्मेट क्या है? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +डेटा आमतौर पर Bytes के रूप में मैपिंग्स में पास किया जाता है, जिसे यदि सीधे संग्रहीत किया जाए, तो यह सबग्राफ में `hex` प्रारूप में लौटाया जाता है (उदाहरण: ब्लॉक और लेन-देन हैश)। आप अपने मैपिंग्स में इसे `base64` या `base64 URL`-सुरक्षित प्रारूप में परिवर्तित करना चाह सकते हैं, ताकि यह [Arweave Explorer](https://viewblock.io/arweave/) जैसे ब्लॉक एक्सप्लोरर्स में प्रदर्शित होने वाले प्रारूप से मेल खाए। -The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: +इस `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` हेल्पर फंक्शन का उपयोग किया जा सकता है, और इसे `graph-ts` में जोड़ा जाएगा: ``` const base64Alphabet = [ diff --git a/website/src/pages/hi/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/hi/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..d7c546cc22c2 100644 --- a/website/src/pages/hi/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/hi/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. 
+## Overview -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### आवश्यक शर्तें + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains.
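As a rough illustration, the configuration file written by `cana setup` might look something like the sketch below. The exact field names and layout here are assumptions for illustration only — check the file your version of the tool actually generates at `~/.contract-analyzer/config.json`:

```json
{
  "chains": {
    "ethereum": {
      "explorerApiUrl": "https://api.etherscan.io/api",
      "explorerApiKey": "YOUR_API_KEY"
    }
  },
  "selectedChain": "ethereum"
}
```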
-List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +या ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? 
Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### निष्कर्ष -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/hi/subgraphs/guides/enums.mdx b/website/src/pages/hi/subgraphs/guides/enums.mdx index 9f55ae07c54b..d44418fec528 100644 --- a/website/src/pages/hi/subgraphs/guides/enums.mdx +++ b/website/src/pages/hi/subgraphs/guides/enums.mdx @@ -1,20 +1,20 @@ --- -title: Categorize NFT Marketplaces Using Enums +title: NFT मार्केटप्लेस को Enums का उपयोग करके वर्गीकृत करें --- -Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. +Enums का उपयोग करके अपने कोड को साफ और कम त्रुटिपूर्ण बनाएं। यहां NFT मार्केटप्लेस पर Enums के उपयोग का एक पूरा उदाहरण है। -## What are Enums? +## Enums क्या हैं? -Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. +Enums, या enumeration types, एक विशिष्ट डेटा प्रकार होते हैं जो आपको विशिष्ट, अनुमत मानों का एक सेट परिभाषित करने की अनुमति देते हैं। -### Example of Enums in Your Schema +### अपने Schema में Enums का उदाहरण -If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
+यदि आप एक Subgraph बना रहे हैं जो मार्केटप्लेस पर टोकनों के स्वामित्व इतिहास को ट्रैक करता है, तो प्रत्येक टोकन विभिन्न स्वामित्वों से गुजर सकता है, जैसे `OriginalOwner`, `SecondOwner` और `ThirdOwner`। enums का उपयोग करके, आप इन विशिष्ट स्वामित्वों को परिभाषित कर सकते हैं, जिससे यह सुनिश्चित होगा कि केवल पूर्वनिर्धारित मान ही असाइन किए जाएं। -You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. +आप अपनी स्कीमा में enums को परिभाषित कर सकते हैं, और एक बार परिभाषित हो जाने के बाद, आप enum मानों के स्ट्रिंग रूप का उपयोग करके किसी entity पर enum फ़ील्ड सेट कर सकते हैं। -Here's what an enum definition might look like in your schema, based on the example above: +यहां आपके स्कीमा में एक enum परिभाषा इस प्रकार हो सकती है, उपरोक्त उदाहरण के आधार पर: ```graphql enum TokenStatus { @@ -24,19 +24,19 @@ enum TokenStatus { } ``` -This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. +इसका मतलब है कि जब आप अपने स्कीमा में `TokenStatus` प्रकार का उपयोग करते हैं, तो आप इसकी अपेक्षा करते हैं कि यह पहले से परिभाषित मानों में से एक हो: `OriginalOwner`, `SecondOwner`, या `ThirdOwner`, जिससे निरंतरता और वैधता सुनिश्चित होती है। -To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). +इस बारे में अधिक जानने के लिए [Creating a Subgraph](/developing/creating-a-subgraph/#enums) और [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types) देखें। -## Benefits of Using Enums +## Enums का उपयोग करने के लाभ -- **Clarity:** Enums provide meaningful names for values, making data easier to understand. -- **Validation:** Enums enforce strict value definitions, preventing invalid data entries.
-- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. +- **स्पष्टता:** Enums मानों के लिए सार्थक नाम प्रदान करते हैं, जिससे डेटा को समझना आसान होता है। +- **सत्यापन:** Enums कड़ी मान परिभाषाएँ लागू करते हैं, जो अवैध डेटा प्रविष्टियों को रोकती हैं। +- **रखरखाव:** जब आपको नई श्रेणियाँ जोड़ने या enums बदलने की आवश्यकता हो, तो आप इसे एक केंद्रित तरीके से कर सकते हैं। -### Without Enums +### बिना Enums -If you choose to define the type as a string instead of using an Enum, your code might look like this: +यदि आप Enum का उपयोग करने के बजाय प्रकार को एक स्ट्रिंग के रूप में परिभाषित करते हैं, तो आपका कोड इस प्रकार दिख सकता है: ```graphql type Token @entity { @@ -48,85 +48,85 @@ type Token @entity { } ``` -In this schema, `TokenStatus` is a simple string with no specific, allowed values. +इस स्कीमा में, `TokenStatus` एक साधारण स्ट्रिंग है जिसमें कोई विशिष्ट, अनुमत मान नहीं होते हैं। -#### Why is this a problem? +#### यह एक समस्या क्यों है? -- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. -It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. +- `TokenStatus` मानों पर कोई प्रतिबंध नहीं है, इसलिए कोई भी स्ट्रिंग गलती से असाइन की जा सकती है। इससे यह सुनिश्चित करना कठिन हो जाता है कि केवल वैध स्टेटस जैसे `OriginalOwner`, `SecondOwner`, या `ThirdOwner` सेट किए जाएं। +- `OriginalOwner` के बजाय `Orgnalowner` जैसी टाइपो करना आसान है, जिससे डेटा और संभावित queries अविश्वसनीय हो जाती हैं। -### With Enums +### Enums के साथ -Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.
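ऊपर बताए गए अंतर को एक छोटे TypeScript sketch से भी समझा जा सकता है (यह sketch इस guide के code का हिस्सा नहीं है, केवल उदाहरण के लिए है): free-form string कोई भी मान — टाइपो सहित — चुपचाप स्वीकार कर लेती है, जबकि अनुमत मानों की जाँच करने वाला enum-जैसा union type अवैध मान को अस्वीकार कर देता है:

```typescript
// उदाहरण के लिए बनाया गया sketch — guide के actual schema/code का हिस्सा नहीं।
type TokenStatus = 'OriginalOwner' | 'SecondOwner' | 'ThirdOwner'

const allowedStatuses: TokenStatus[] = ['OriginalOwner', 'SecondOwner', 'ThirdOwner']

// Free-form string: टाइपो बिना किसी त्रुटि के गुजर जाती है।
function setStatusUnchecked(status: string): string {
  return status
}

// Enum-शैली की जाँच: केवल अनुमत मान ही स्वीकार होते हैं।
function setStatusChecked(status: string): TokenStatus {
  if (!allowedStatuses.includes(status as TokenStatus)) {
    throw new Error(`Invalid TokenStatus: ${status}`)
  }
  return status as TokenStatus
}
```

`setStatusUnchecked('Orgnalowner')` बिना त्रुटि के गुजर जाएगा, जबकि `setStatusChecked('Orgnalowner')` तुरंत error फेंक देगा — GraphQL enum यही सुरक्षा स्कीमा स्तर पर देता है।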
+इसके बजाय कि आप फ्री-फॉर्म स्ट्रिंग्स असाइन करें, आप `TokenStatus` के लिए एक enum परिभाषित कर सकते हैं जिसमें विशिष्ट मान हों: `OriginalOwner`, `SecondOwner`, या `ThirdOwner`। enum का उपयोग करने से यह सुनिश्चित होता है कि केवल अनुमत मान ही उपयोग किए जाएं। -Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. +Enums प्रकार सुरक्षा प्रदान करते हैं, टाइपो के जोखिम को कम करते हैं, और सुनिश्चित करते हैं कि परिणाम लगातार और विश्वसनीय हों। -## Defining Enums for NFT Marketplaces +## NFT मार्केटप्लेस के लिए Enums को परिभाषित करना -> Note: The following guide uses the CryptoCoven NFT smart contract. +> नोट: निम्नलिखित guide CryptoCoven NFT स्मार्ट कॉन्ट्रैक्ट का उपयोग करती है। -To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: +NFTs के व्यापार किए जाने वाले विभिन्न मार्केटप्लेस के लिए enums को परिभाषित करने के लिए, अपने Subgraph स्कीमा में निम्नलिखित का उपयोग करें: ```gql -# Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) +# मार्केटप्लेस के लिए Enum जिनके साथ CryptoCoven कॉन्ट्रैक्ट ने इंटरैक्ट किया (संभवतः ट्रेड/मिंट) enum Marketplace { - OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the marketplace - OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace - SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace - LooksRare # Represents when a CryptoCoven NFT is traded on the LookRare marketplace - # ...and other marketplaces + OpenSeaV1 # जब CryptoCoven NFT को इस बाजार में व्यापार किया जाता है + OpenSeaV2 # जब CryptoCoven NFT को OpenSeaV2 बाजार में व्यापार किया जाता है + SeaPort # जब CryptoCoven NFT को SeaPort बाजार में व्यापार किया जाता है + LooksRare # जब CryptoCoven NFT को LooksRare बाजार में व्यापार किया जाता है + # ...और अन्य बाजार } ``` -## Using Enums for NFT Marketplaces +## NFT Marketplaces के लिए Enums का उपयोग -Once defined, enums can be used throughout your
Subgraph to categorize transactions or events. +एक बार परिभाषित हो जाने पर, enums का उपयोग आपके Subgraph में transactions या events को वर्गीकृत करने के लिए किया जा सकता है। -For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. +उदाहरण के लिए, जब आप NFT बिक्री लॉग करते हैं, तो आप ट्रेड में शामिल मार्केटप्लेस को enum का उपयोग करके निर्दिष्ट कर सकते हैं। -### Implementing a Function for NFT Marketplaces +### NFT मार्केटप्लेस के लिए एक फंक्शन लागू करना -Here's how you can implement a function to retrieve the marketplace name from the enum as a string: +यहाँ बताया गया है कि आप एक फ़ंक्शन को कैसे लागू कर सकते हैं जो enum से मार्केटप्लेस का नाम एक स्ट्रिंग के रूप में प्राप्त करता है: ```ts export function getMarketplaceName(marketplace: Marketplace): string { - // Using if-else statements to map the enum value to a string + // if-else स्टेटमेंट्स का उपयोग करके enum मान को एक स्ट्रिंग में मैप करें if (marketplace === Marketplace.OpenSeaV1) { - return 'OpenSeaV1' // If the marketplace is OpenSea, return its string representation + return 'OpenSeaV1' // यदि बाज़ार OpenSea है, तो इसका स्ट्रिंग प्रतिनिधित्व लौटाएँ } else if (marketplace === Marketplace.OpenSeaV2) { return 'OpenSeaV2' } else if (marketplace === Marketplace.SeaPort) { - return 'SeaPort' // If the marketplace is SeaPort, return its string representation + return 'SeaPort' // यदि बाज़ार SeaPort है, तो इसका स्ट्रिंग प्रतिनिधित्व लौटाएँ } else if (marketplace === Marketplace.LooksRare) { - return 'LooksRare' // If the marketplace is LooksRare, return its string representation - // ... and other market places + return 'LooksRare' // यदि बाज़ार LooksRare है, तो इसका स्ट्रिंग प्रतिनिधित्व लौटाएँ + // ... और अन्य बाज़ार } } ``` -## Best Practices for Using Enums +## Enums का उपयोग करने के लिए सर्वोत्तम प्रथाएँ -- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
-- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth. -- **Documentation:** Add comments to enum to clarify their purpose and usage. +- **सुसंगत नामकरण:** पठनीयता को बेहतर बनाने के लिए enum मानों के लिए स्पष्ट, वर्णनात्मक नामों का उपयोग करें। +- **केंद्रीकृत प्रबंधन:** enums को एक ही फ़ाइल में रखें ताकि सुसंगतता बनी रहे। इससे enums को अपडेट करना आसान हो जाता है और यह सुनिश्चित होता है कि वे सत्य का एकमात्र source रहें। +- **दस्तावेज़ीकरण:** enums में उनके उद्देश्य और उपयोग को स्पष्ट करने के लिए टिप्पणियाँ जोड़ें। -## Using Enums in Queries +## queries में Enums का उपयोग करना -Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values. +क्वेरी में Enums आपके डेटा की गुणवत्ता में सुधार करने और आपके परिणामों को समझने में आसान बनाने में मदद करते हैं। ये फ़िल्टर और प्रतिक्रिया तत्व के रूप में कार्य करते हैं, बाज़ार के मूल्यों में स्थिरता सुनिश्चित करते हैं और त्रुटियों को कम करते हैं। -**Specifics** +**विशिष्टताएँ** -- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces. -- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+- **Enums के साथ फ़िल्टरिंग:** Enums स्पष्ट फ़िल्टर प्रदान करते हैं, जिससे आप निश्चित रूप से विशिष्ट मार्केटप्लेस को शामिल या बाहर कर सकते हैं। +- **प्रतिसादों में Enums:** Enums यह सुनिश्चित करते हैं कि केवल मान्यता प्राप्त मार्केटप्लेस नाम ही वापस आएं, जिससे परिणाम मानकीकृत और सटीक हों। -### Sample Queries +### नमूना queries -#### Query 1: Account With The Highest NFT Marketplace Interactions +#### Query 1: सबसे अधिक NFT मार्केटप्लेस इंटरएक्शन वाला खाता -This query does the following: +यह क्वेरी निम्नलिखित कार्य करती है: -- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity. -- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. +- यह वह खाता खोजता है जिसके सबसे अधिक अनूठे NFT मार्केटप्लेस इंटरैक्शन होते हैं, जो क्रॉस-मार्केटप्लेस गतिविधि का विश्लेषण करने के लिए बेहतरीन है। +- मार्केटप्लेस फील्ड marketplace enum का उपयोग करता है, जो प्रतिक्रिया में सुसंगत और मान्य मार्केटप्लेस मान सुनिश्चित करता है। ```gql { @@ -143,9 +143,9 @@ This query does the following: } ``` -#### Returns +#### रिटर्न्स -This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: +यह प्रतिक्रिया खाता विवरण और मानकीकृत स्पष्टता के लिए enum मानों के साथ अद्वितीय मार्केटप्लेस इंटरैक्शन्स की सूची प्रदान करती है: ```gql { @@ -186,12 +186,12 @@ This response provides account details and a list of unique marketplace interact } ``` -#### Query 2: Most Active Marketplace for CryptoCoven transactions +#### Query 2: CryptoCoven transactions के लिए सबसे सक्रिय बाज़ार -This query does the following: +यह क्वेरी निम्नलिखित कार्य करती है: -- It identifies the marketplace with the highest volume of CryptoCoven transactions. -- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data.
+- यह उस मार्केटप्लेस की पहचान करता है जहां CryptoCoven लेनदेन का सबसे अधिक वॉल्यूम होता है। +- यह मार्केटप्लेस enum का उपयोग करता है ताकि प्रतिक्रिया में केवल मान्य मार्केटप्लेस प्रकार ही दिखाई दें, जिससे आपके डेटा में विश्वसनीयता और स्थिरता बनी रहती है। ```gql { @@ -202,9 +202,9 @@ This query does the following: } ``` -#### Result 2 +#### परिणाम 2 -The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: +अपेक्षित प्रतिक्रिया में मार्केटप्लेस और संबंधित transaction संख्या शामिल है, जो मार्केटप्लेस प्रकार को संकेत करने के लिए enum का उपयोग करती है: ```gql { @@ -219,12 +219,12 @@ The expected response includes the marketplace and the corresponding transaction } ``` -#### Query 3: Marketplace Interactions with High Transaction Counts +#### Query 3: उच्च transaction गणना वाले मार्केटप्लेस इंटरैक्शन -This query does the following: +यह क्वेरी निम्नलिखित कार्य करती है: -- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. -- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. +- यह 100 से अधिक transactions वाले शीर्ष चार बाजारों को पुनः प्राप्त करता है, "Unknown" बाजारों को छोड़कर। +- यह केवल वैध मार्केटप्लेस प्रकारों को शामिल करने के लिए फ़िल्टर के रूप में enums का उपयोग करता है, जिससे सटीकता बढ़ती है। ```gql { @@ -240,9 +240,9 @@ This query does the following: } ``` -#### Result 3 +#### परिणाम 3 -Expected output includes the marketplaces that meet the criteria, each represented by an enum value: +अपेक्षित आउटपुट में उन मार्केटप्लेस का समावेश है जो मानदंडों को पूरा करते हैं, प्रत्येक को एक enum मान द्वारा प्रदर्शित किया जाता है: ```gql { @@ -271,4 +271,4 @@ Expected output includes the marketplaces that meet the criteria, each represent ## Additional Resources -For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
+अधिक जानकारी के लिए, इस guide के [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums) को देखें। diff --git a/website/src/pages/hi/subgraphs/guides/grafting.mdx b/website/src/pages/hi/subgraphs/guides/grafting.mdx index d9abe0e70d2a..2a67805408e7 100644 --- a/website/src/pages/hi/subgraphs/guides/grafting.mdx +++ b/website/src/pages/hi/subgraphs/guides/grafting.mdx @@ -1,56 +1,56 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: एक कॉन्ट्रैक्ट बदलें और उसका इतिहास ग्राफ्टिंग के साथ रखें --- -In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. +इस गाइड में, आप मौजूदा Subgraph को ग्राफ्ट करके नए Subgraph को बनाना और तैनात करना सीखेंगे। -## What is Grafting? +## ग्राफ्टिंग क्या है? -Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. +Grafting मौजूदा Subgraph से डेटा को पुनः उपयोग करता है और इसे बाद के ब्लॉक से indexing करना शुरू करता है। यह विकास के दौरान सरल त्रुटियों को जल्दी से पार करने या किसी मौजूदा Subgraph को फिर से कार्यशील बनाने के लिए उपयोगी है, जब यह विफल हो जाता है। साथ ही, जब किसी Subgraph में कोई ऐसा फीचर जोड़ा जाता है जिसे शुरू से इंडेक्स करने में अधिक समय लगता है, तब भी इसका उपयोग किया जा सकता है। -The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it.
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: +ग्राफ्टेड Subgraph एक ग्राफक्यूएल स्कीमा का उपयोग कर सकता है जो बेस Subgraph के समान नहीं है, लेकिन इसके अनुकूल हो। यह अपने आप में एक मान्य Subgraph स्कीमा होना चाहिए, लेकिन निम्नलिखित तरीकों से बेस Subgraph के स्कीमा से विचलित हो सकता है: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- यह entity प्रकारों को जोड़ता या हटाता है। +- यह entity प्रकारों से गुण (attributes) हटाता है। +- यह entity प्रकारों में nullable गुण जोड़ता है। +- यह non-nullable गुणों को nullable गुणों में बदल देता है। +- यह enums में मान जोड़ता है। +- यह इंटरफ़ेस जोड़ता या हटाता है। +- यह बदल देता है कि किन entity प्रकारों के लिए इंटरफ़ेस लागू होता है। -For more information, you can check: +अधिक जानकारी के लिए आप देख सकते हैं: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. +इस ट्यूटोरियल में, हम एक बेसिक use case कवर करेंगे। हम एक मौजूदा contract को एक identical contract से replace करेंगे (जिसका नया address होगा, लेकिन code वही रहेगा)। इसके बाद, मौजूदा Subgraph को उस "base" Subgraph से graft करेंगे, जो नए contract को track करता है। ## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network +> **Caution**: यह अनुशंसा की जाती है कि The Graph Network पर प्रकाशित किए गए Subgraphs के लिए grafting का उपयोग न करें। -### Why Is This Important?
+### यह क्यों महत्वपूर्ण है? -Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. +Grafting एक शक्तिशाली feature है जो आपको एक Subgraph को दूसरे पर "graft" करने की सुविधा देता है, जिससे मौजूदा Subgraph का historical data नए version में प्रभावी रूप से ट्रांसफर हो जाता है। The Graph Network से वापस Subgraph Studio में किसी Subgraph को graft करना संभव नहीं है। ### Best Practices -**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. +**Initial Migration**: जब आप अपना Subgraph पहली बार decentralized network पर deploy करें, तो इसे grafting के बिना करें। सुनिश्चित करें कि Subgraph स्थिर है और अपेक्षित रूप से कार्य कर रहा है। -**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: जब आपका Subgraph decentralized network पर live और stable हो जाए, तो आप भविष्य के versions के लिए grafting का उपयोग कर सकते हैं ताकि transition स्मूथ हो और historical data संरक्षित रहे। -By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +इन guidelines का पालन करके, आप risks को कम करते हैं और एक smooth migration प्रक्रिया सुनिश्चित करते हैं। -## Building an Existing Subgraph +## एक मौजूदा सब-ग्राफ बनाना -Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/).
To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: +Subgraphs बनाना The Graph का एक आवश्यक हिस्सा है, जिसे [यहाँ](/subgraphs/quick-start/) और गहराई से समझाया गया है। इस ट्यूटोरियल में उपयोग किए गए मौजूदा Subgraph को build और deploy करने के लिए निम्नलिखित repo प्रदान किया गया है: -- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) +- [Subgraph उदाहरण रिपॉजिटरी](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Note: Subgraph में उपयोग किया गया contract निम्नलिखित [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit) से लिया गया है। -## Subgraph Manifest Definition +## सब ग्राफ मैनिफेस्ट की परिभाषा -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: +Subgraph manifest `subgraph.yaml` Subgraph के लिए data sources, महत्वपूर्ण triggers, और उन triggers के जवाब में चलने वाले functions को निर्दिष्ट करता है। नीचे एक उदाहरण Subgraph manifest दिया गया है, जिसे आप उपयोग करेंगे: ```yaml specVersion: 1.3.0 @@ -79,16 +79,16 @@ dataSources: file: ./src/lock.ts ``` -- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` -- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.
+- `Lock` डेटा स्रोत वह ABI और अनुबंध पता है जो हमें तब मिलेगा जब हम अनुबंध को संकलित और तैनात करेंगे। +- नेटवर्क को एक इंडेक्स किए गए नेटवर्क के अनुरूप होना चाहिए जिसे क्वेरी किया जा रहा है। चूंकि हम सेपोलीया टेस्टनेट पर चल रहे हैं, नेटवर्क `sepolia` है। +- `mapping` सेक्शन उन ट्रिगर्स को परिभाषित करता है जो दिलचस्प होते हैं और उन ट्रिगर्स की प्रतिक्रिया में चलने वाले कार्यों को परिभाषित करता है। इस मामले में, हम `Withdrawal` इवेंट की प्रतीक्षा कर रहे हैं और जब यह इवेंट उत्पन्न होता है, तो `handleWithdrawal` कार्य को कॉल किया जाता है। -## Grafting Manifest Definition +## ग्राफ्टिंग मैनिफेस्ट की परिभाषा -Grafting requires adding two new items to the original Subgraph manifest: +Grafting के लिए मूल Subgraph manifest में दो नए आइटम जोड़ने की आवश्यकता होती है: ```yaml ---- +--- features: - grafting # feature name graft: @@ -96,16 +96,16 @@ graft: block: 5956000 # block number ``` -- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
+- `features:` सभी उपयोग किए गए [विशेषताओं के नाम](/developing/creating-a-subgraph/#experimental-features) की एक सूची है। +- `graft:` एक map है जो `base` Subgraph और जिस block पर graft करना है, उसे परिभाषित करता है। `block` वह block number है जिससे indexing शुरू करनी है। The Graph base Subgraph का डेटा दिए गए block तक (उसे शामिल करते हुए) कॉपी करेगा और फिर उसी block से नए Subgraph की indexing जारी रखेगा। -The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting +`base` और `block` मान दो Subgraphs deploy करके प्राप्त किए जा सकते हैं: एक base indexing के लिए और एक grafting के साथ। -## Deploying the Base Subgraph +## बेस सब-ग्राफ को तैनात करना -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground +1. [Subgraph Studio](https://thegraph.com/studio/) पर जाएं और Sepolia testnet पर `graft-example` नाम से एक Subgraph बनाएं। +2. अपने Subgraph पेज के `AUTH & DEPLOY` सेक्शन में दिए गए निर्देशों का पालन करें और रिपोजिटरी के `graft-example` फोल्डर से Subgraph को डिप्लॉय करें। +3. एक बार पूरा होने पर, सत्यापित करें कि इंडेक्सिंग सही ढंग से हो गई है। यदि आप निम्न कमांड ग्राफ प्लेग्राउंड में चलाते हैं ```graphql { @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +तो हमें कुछ ऐसा दिखता है: ``` { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+एक बार जब आप सुनिश्चित कर लें कि Subgraph सही तरीके से indexing कर रहा है, तो आप grafting का उपयोग करके इसे तेजी से अपडेट कर सकते हैं। -## Deploying the Grafting Subgraph +## ग्राफ्टिंग सब-ग्राफ तैनात करना -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +ग्राफ्ट प्रतिस्थापन `subgraph.yaml` में एक नया कॉन्ट्रैक्ट एड्रेस होगा। यह तब हो सकता है जब आप अपना डैप अपडेट करें, कॉन्ट्रैक्ट को दोबारा तैनात करें, इत्यादि। -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground +1. [Subgraph Studio](https://thegraph.com/studio/) पर जाएं और Sepolia testnet पर `graft-replacement` नाम से एक Subgraph बनाएं। +2.
एक नया manifest बनाएँ। `graph-replacement` के लिए `subgraph.yaml` में एक अलग contract address और grafting के लिए नई जानकारी होगी। इसमें निम्नलिखित शामिल होंगे: `block` – पुराने contract द्वारा उत्पन्न [आखिरी event](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) का block नंबर, जिससे आप grafting शुरू करना चाहते हैं; `base` – पुराने Subgraph का ID। `base` Subgraph ID आपके मूल `graph-example` Subgraph का `Deployment ID` है, जिसे Subgraph Studio में देखा जा सकता है। +3. अपने Subgraph पेज के `AUTH & DEPLOY` सेक्शन में दिए गए निर्देशों का पालन करें और रिपोजिटरी के `graft-replacement` फोल्डर से Subgraph को डिप्लॉय करें। +4. एक बार पूरा होने पर, सत्यापित करें कि इंडेक्सिंग सही ढंग से हो गई है। यदि आप निम्न कमांड ग्राफ प्लेग्राउंड में चलाते हैं ```graphql { @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +आपको यह वापस मिलना चाहिए: ``` { @@ -185,18 +185,18 @@ It should return the following: } ``` -You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.
+आप देख सकते हैं कि `graft-replacement` Subgraph पुराने `graph-example` डेटा और नए contract address से आने वाले डेटा को एक साथ index कर रहा है। मूल contract ने दो `Withdrawal` events उत्पन्न किए: [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) और [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452)। नए contract ने बाद में एक `Withdrawal` event उत्पन्न किया: [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af)। अब, इन दोनों पुराने transactions (Event 1 और 2) और नए transaction (Event 3) को `graft-replacement` Subgraph में एक साथ जोड़ दिया गया है।
-Congrats! You have successfully grafted a Subgraph onto another Subgraph.
+बधाई हो! आपने सफलतापूर्वक एक Subgraph को दूसरे Subgraph पर graft कर लिया है।
## Additional Resources
-If you want more experience with grafting, here are a few examples for popular contracts:
+यदि आप grafting के साथ अधिक अनुभव प्राप्त करना चाहते हैं, तो यहां कुछ लोकप्रिय कॉन्ट्रैक्ट्स के उदाहरण दिए गए हैं:
- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml),
-To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results
+ग्राफ विशेषज्ञ बनने के लिए, अन्य तरीकों के बारे में जानने पर विचार करें जो अंतर्निहित डेटा स्रोतों में परिवर्तन को संभाल सकते हैं। [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) जैसे विकल्प समान परिणाम प्राप्त कर सकते हैं।
-> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)
+> ध्यान दें: इस लेख की अधिकांश सामग्री को पहले प्रकाशित [Arweave article](/subgraphs/cookbook/arweave/) से लिया गया है।
diff --git a/website/src/pages/hi/subgraphs/guides/near.mdx b/website/src/pages/hi/subgraphs/guides/near.mdx
index e78a69eb7fa2..d8b019189a96 100644
--- a/website/src/pages/hi/subgraphs/guides/near.mdx
+++ b/website/src/pages/hi/subgraphs/guides/near.mdx
@@ -1,54 +1,54 @@
---
-title: Building Subgraphs on NEAR
+title: NEAR पर सबग्राफ बनाना
---
-This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+यह गाइड [NEAR ब्लॉकचेन](https://docs.near.org/) पर स्मार्ट contract को इंडेक्स करने वाले Subgraphs बनाने की एक परिचयात्मक गाइड है।
-## What is NEAR?
+## NEAR क्या है?
-[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
+[NEAR](https://near.org/) एक स्मार्ट contract प्लेटफ़ॉर्म है जो विकेंद्रीकृत applications बनाने के लिए है। अधिक जानकारी के लिए [official documentation](https://docs.near.org/concepts/basics/protocol) देखें।
-## What are NEAR Subgraphs?
+## NEAR Subgraphs क्या हैं?
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+The Graph डेवलपर्स को ब्लॉकचेन इवेंट्स को प्रोसेस करने और परिणामी डेटा को आसानी से एक GraphQL API के माध्यम से उपलब्ध कराने के टूल्स देता है, जिसे व्यक्तिगत रूप से एक सबग्राफ के रूप में जाना जाता है। [Graph Node](https://github.com/graphprotocol/graph-node) अब NEAR इवेंट्स को प्रोसेस करने में सक्षम है, जिसका अर्थ है कि NEAR डेवलपर्स अब अपने स्मार्ट contract को इंडेक्स करने के लिए Subgraphs बना सकते हैं।
-Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:
+सबग्राफ इवेंट-आधारित होते हैं, जिसका अर्थ है कि वे ऑनचेन इवेंट्स को सुनते हैं और फिर उन्हें प्रोसेस करते हैं। वर्तमान में, NEAR सबग्राफ के लिए दो प्रकार के handlers समर्थित हैं:
-- Block handlers: these are run on every new block
-- Receipt handlers: run every time a message is executed at a specified account
+- ब्लॉक हैंडलर्स: ये हर नए ब्लॉक पर चलते हैं
+- रसीद हैंडलर्स: ये किसी निर्दिष्ट खाते पर संदेश निष्पादित होने पर हर बार चलते हैं
-[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
+[NEAR दस्तावेज़ से](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
-> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point.
+> रसीद सिस्टम में एकमात्र कार्रवाई योग्य वस्तु है। जब हम NEAR प्लेटफॉर्म पर "एक लेन-देन को संसाधित करने" के बारे में बात करते हैं, तो अंततः इसका अर्थ किसी बिंदु पर "रसीदें लागू करना" होता है।
-## Building a NEAR Subgraph
+## NEAR सबग्राफ बनाना
-`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
+`@graphprotocol/graph-cli` एक कमांड-लाइन टूल है जो सबग्राफ बनाने और डिप्लॉय करने के लिए उपयोग किया जाता है।
-`@graphprotocol/graph-ts` is a library of Subgraph-specific types.
+`@graphprotocol/graph-ts` एक लाइब्रेरी है जो सबग्राफ-विशिष्ट प्रकार प्रदान करती है।
-NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
+NEAR सबग्राफ विकास के लिए `graph-cli` का संस्करण `0.23.0` से ऊपर और `graph-ts` का संस्करण `0.23.0` से ऊपर होना आवश्यक है।
-> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum.
+> NEAR सबग्राफ बनाना Ethereum को इंडेक्स करने वाले सबग्राफ बनाने के समान ही है।
-There are three aspects of Subgraph definition:
+सबग्राफ परिभाषा के तीन पहलू हैं:
-**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
+**subgraph.yaml:** सबग्राफ मैनिफेस्ट, जो आवश्यक डेटा स्रोतों को परिभाषित करता है और उन्हें कैसे प्रोसेस किया जाना चाहिए। NEAR एक नया `kind` का डेटा स्रोत है।
-**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+**schema.graphql:** एक स्कीमा फ़ाइल है जो यह परिभाषित करती है कि आपके सबग्राफ के लिए कौन सा डेटा संग्रहीत किया जाता है और इसे GraphQL के माध्यम से कैसे क्वेरी किया जाए। NEAR सबग्राफ के लिए आवश्यकताओं को [मौजूदा दस्तावेज़ीकरण](/developing/creating-a-subgraph/#the-graphql-schema) द्वारा कवर किया गया है।
-**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
+**असेम्बलीस्क्रिप्ट मैपिंग्स:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) जो इवेंट डेटा से आपके स्कीमा में परिभाषित एंटिटीज़ में अनुवाद करता है। NEAR समर्थन NEAR-विशिष्ट डेटा प्रकार और नई JSON पार्सिंग कार्यक्षमता पेश करता है।
-During Subgraph development there are two key commands:
+Subgraph को बनाते वक़्त दो मुख्य कमांड हैं:
```bash
$ graph codegen # generates types from the schema file identified in the manifest
-$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder
```
-### Subgraph Manifest Definition
+### सबग्राफ मैनिफेस्ट की परिभाषा
-The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
+सबग्राफ manifest (`subgraph.yaml`) उन डेटा स्रोतों की पहचान करता है जो सबग्राफ के लिए आवश्यक हैं, उन ट्रिगर्स को निर्दिष्ट करता है जिनमें रुचि है, और उन फ़ंक्शनों को परिभाषित करता है जिन्हें उन ट्रिगर्स के जवाब में चलाया जाना चाहिए। नीचे NEAR सबग्राफ के लिए एक उदाहरण सबग्राफ manifest दिया गया है:
```yaml
specVersion: 1.3.0
@@ -70,10 +70,10 @@ dataSources:
file: ./src/mapping.ts # link to the file with the Assemblyscript mappings
```
-- NEAR Subgraphs introduce a new `kind` of data source (`near`)
-- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
-- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
-- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted.
+- NEAR सबग्राफ ने एक नए `kind` का data source (`near`) पेश किया है।
+- `network` को होस्टिंग ग्राफ-नोड पर एक नेटवर्क से मेल खाना चाहिए। सबग्राफ Studio पर, NEAR का मेननेट `near-mainnet` है, और NEAR का टेस्टनेट `near-testnet` है।
+- NEAR डेटा स्रोतों में एक वैकल्पिक `source.account` फ़ील्ड पेश किया गया है, जो एक मानव-पठनीय आईडी है जो एक [NEAR खाता](https://docs.near.org/concepts/protocol/account-model) से मेल खाती है। यह एक खाता या एक उप-खाता हो सकता है।
+- NEAR डेटा स्रोत वैकल्पिक `source.accounts` फ़ील्ड पेश करते हैं, जिसमें वैकल्पिक उपसर्ग और प्रत्यय होते हैं। कम से कम उपसर्ग या प्रत्यय में से एक निर्दिष्ट किया जाना चाहिए, ये किसी भी खाते से मेल खाएंगे जो सूचीबद्ध मानों से शुरू या समाप्त होता है। नीचे दिया गया उदाहरण निम्नलिखित के लिए मेल खाएगा: `[app|good].*[morning.near|morning.testnet]`। यदि केवल उपसर्ग या प्रत्ययों की सूची आवश्यक हो तो दूसरा फ़ील्ड हटा दिया जा सकता है।
```yaml
accounts:
@@ -85,20 +85,20 @@ accounts:
- morning.testnet
```
-NEAR data sources support two types of handlers:
+NEAR डेटा स्रोत दो प्रकार के हैंडलर का समर्थन करते हैं:
-- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
-- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
+- `blockHandlers`: हर नए NEAR ब्लॉक पर चलते हैं। कोई `source.account` आवश्यक नहीं है।
+- `receiptHandlers`: हर रसीद पर तब चलाए जाते हैं जब डेटा स्रोत का `source.account` प्राप्तकर्ता हो। ध्यान दें कि केवल बिल्कुल मिलान वाले ही प्रोसेस किए जाते हैं ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) को स्वतंत्र डेटा स्रोत के रूप में जोड़ा जाना चाहिए)।
-### Schema Definition
+### स्कीमा की परिभाषा
-Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+Schema परिभाषा परिणामस्वरूप बनने वाले सबग्राफ डेटाबेस की संरचना और इकाइयों के बीच संबंधों का वर्णन करती है। यह मूल डेटा स्रोत से स्वतंत्र होती है। सबग्राफ schema परिभाषा के बारे में अधिक विवरण [यहाँ](/developing/creating-a-subgraph/#the-graphql-schema) उपलब्ध हैं।
-### AssemblyScript Mappings
+### असेंबली स्क्रिप्ट मैप्पिंग्स
-The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+इवेंट्स को प्रोसेस करने के लिए handlers [AssemblyScript](https://www.assemblyscript.org/) में लिखे जाते हैं।
-NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+NEAR इंडेक्सिंग [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) में NEAR-विशिष्ट डेटा प्रकारों को पेश करती है।
```typescript
@@ -125,7 +125,7 @@ class ActionReceipt {
class BlockHeader {
height: u64,
- prevHeight: u64,// Always zero when version < V3
+ prevHeight: u64,// हमेशा शून्य जब संस्करण < V3
epochId: Bytes,
nextEpochId: Bytes,
chunksIncluded: u64,
@@ -160,36 +160,36 @@ class ReceiptWithOutcome {
}
```
-These types are passed to block & receipt handlers:
+ये प्रकार block और receipt handlers को पास किए जाते हैं:
-- Block handlers will receive a `Block`
-- Receipt handlers will receive a `ReceiptWithOutcome`
+- ब्लॉक handler को एक `Block` प्राप्त होगा।
+- रसीद handler को `ReceiptWithOutcome` प्राप्त होगा।
-Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
+अन्यथा, शेष [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) NEAR सबग्राफ डेवलपर्स के लिए मैपिंग निष्पादन के दौरान उपलब्ध है।
-This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
+इसमें एक नया JSON पार्सिंग फ़ंक्शन शामिल है - NEAR पर अक्सर stringified JSONs के रूप में लॉग्स जारी किए जाते हैं। एक नया `json.fromString(...)` फ़ंक्शन [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) के हिस्से के रूप में उपलब्ध है, जो डेवलपर्स को इन लॉग्स को आसानी से प्रोसेस करने की अनुमति देता है।
-## Deploying a NEAR Subgraph
+## एक NEAR सबग्राफ की तैनाती
-Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+एक बार जब आपने सबग्राफ बना लिया है, तो इसे ग्राफ-नोड पर Indexing के लिए डिप्लॉय करने का समय आ गया है। NEAR सबग्राफ को किसी भी ग्राफ-नोड `>=v0.26.x` पर डिप्लॉय किया जा सकता है (यह संस्करण अभी तक टैग और जारी नहीं किया गया है)।
Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names:
- `near-mainnet`
- `near-testnet`
-More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+सबग्राफ Studio पर सबग्राफ बनाने और तैनात करने के बारे में अधिक जानकारी [यहाँ](/deploying/deploying-a-subgraph-to-studio/) पाई जा सकती है।
-As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+पहला कदम आपका सबग्राफ "बनाना" है - यह केवल एक बार करने की आवश्यकता होती है। सबग्राफ Studio पर, इसे [आपके डैशबोर्ड](https://thegraph.com/studio/) से किया जा सकता है: "सबग्राफ बनाएँ"।
-Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
+एक बार जब आपका सबग्राफ बना लिया जाता है, तो आप `graph deploy` CLI कमांड का उपयोग करके अपने सबग्राफ को डिप्लॉय कर सकते हैं:
```sh
-$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph create --node # एक स्थानीय ग्राफ-नोड पर सबग्राफ बनाता है (सबग्राफ Studio पर, यह UI के माध्यम से किया जाता है)
+$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # निर्मित फ़ाइलों को निर्दिष्ट IPFS endpoint पर अपलोड करता है, और फिर manifest IPFS hash के आधार पर निर्दिष्ट ग्राफ-नोड पर सबग्राफ को डिप्लॉय करता है
```
-The node configuration will depend on where the Subgraph is being deployed.
+नोड कॉन्फ़िगरेशन इस बात पर निर्भर करेगा कि सबग्राफ कहाँ तैनात किया जा रहा है।
### Subgraph Studio
@@ -198,13 +198,13 @@ graph auth
graph deploy
```
-### Local Graph Node (based on default configuration)
+### स्थानीय ग्राफ़ नोड (डिफ़ॉल्ट कॉन्फ़िगरेशन पर आधारित)
```sh
graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
```
-Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
+एक बार जब आपका सबग्राफ डिप्लॉय हो जाता है, तो इसे ग्राफ-नोड द्वारा इंडेक्स किया जाएगा। आप खुद सबग्राफ को क्वेरी करके इसकी प्रगति की जांच कर सकते हैं:
```graphql
{
@@ -216,23 +216,23 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. You can
}
```
-### Indexing NEAR with a Local Graph Node
+### एक स्थानीय ग्राफ़ नोड के साथ NEAR को अनुक्रमित करना
-Running a Graph Node that indexes NEAR has the following operational requirements:
+NEAR को अनुक्रमित करने वाले ग्राफ़ नोड को चलाने के लिए निम्नलिखित परिचालन आवश्यकताएँ हैं:
-- NEAR Indexer Framework with Firehose instrumentation
-- NEAR Firehose Component(s)
-- Graph Node with Firehose endpoint configured
+- Firehose इंस्ट्रूमेंटेशन के साथ NEAR इंडेक्सर फ्रेमवर्क
+- NEAR Firehose कंपोनेंट्(स)
+- Firehose एंडपॉइन्ट के साथ ग्राफ़ नोड कॉन्फ़िगर किया गया
-We will provide more information on running the above components soon.
+हम जल्द ही उपरोक्त कंपोनेंट्स को चलाने के बारे में और जानकारी प्रदान करेंगे।
-## Querying a NEAR Subgraph
+## NEAR सबग्राफ को क्वेरी करना
-The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+NEAR Subgraphs के लिए GraphQL एंडपॉइंट स्कीमा परिभाषा द्वारा निर्धारित किया जाता है, जिसमें मौजूदा API इंटरफेस शामिल होता है। अधिक जानकारी के लिए कृपया [GraphQL API](/subgraphs/querying/graphql-api/) दस्तावेज़ देखें।
-## Example Subgraphs
+## सबग्राफ के उदाहरण
-Here are some example Subgraphs for reference:
+यहाँ संदर्भ के लिए कुछ उदाहरण सबग्राफ दिए गए हैं:
[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
@@ -240,21 +240,21 @@ Here are some example Subgraphs for reference:
## FAQ
-### How does the beta work?
+### बीटा कैसे काम करता है?
-NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
+NEAR समर्थन बीटा में है, जिसका अर्थ है कि जैसे-जैसे हम एकीकरण को बेहतर बनाने पर काम कर रहे हैं, API में परिवर्तन हो सकते हैं। कृपया हमें near@thegraph.com पर ईमेल करें ताकि हम आपको NEAR सबग्राफ बनाने में सहायता कर सकें और आपको नवीनतम विकास से अपडेट रख सकें!
-### Can a Subgraph index both NEAR and EVM chains?
+### क्या एक सबग्राफ NEAR और EVM दोनों चेन को इंडेक्स कर सकता है?
-No, a Subgraph can only support data sources from one chain/network.
+नहीं, एक सबग्राफ केवल एक चेन/नेटवर्क से डाटा सोर्स को सपोर्ट कर सकता है।
-### Can Subgraphs react to more specific triggers?
+### क्या सबग्राफ अधिक विशिष्ट ट्रिगर्स पर प्रतिक्रिया कर सकते हैं?
-Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+वर्तमान में, केवल Block और Receipt ट्रिगर समर्थित हैं। हम एक निर्दिष्ट खाते में फ़ंक्शन कॉल के लिए ट्रिगर्स की जांच कर रहे हैं। एक बार जब NEAR को नेटिव ईवेंट समर्थन मिल जाता है, तो हम ईवेंट ट्रिगर्स का समर्थन करने में भी रुचि रखते हैं।
-### Will receipt handlers trigger for accounts and their sub-accounts?
+### क्या रसीद हैंडलर खातों और उनके उप-खातों के लिए ट्रिगर करेंगे?
-If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts:
+यदि कोई `account` निर्दिष्ट किया गया है, तो यह केवल सटीक खाता नाम से मेल खाएगा। उप-खातों से मेल करना संभव है यदि `accounts` फ़ील्ड निर्दिष्ट की गई हो, जिसमें `suffixes` और `prefixes` शामिल हों ताकि खाते और उप-खाते मेल खा सकें। उदाहरण के लिए, निम्नलिखित सभी `mintbase1.near` उप-खातों से मेल खाएगा:
```yaml
accounts:
@@ -262,22 +262,22 @@ accounts:
- mintbase1.near
```
-### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
+### क्या NEAR सबग्राफ मैपिंग्स के दौरान NEAR खातों पर view कॉल कर सकते हैं?
-This is not supported. We are evaluating whether this functionality is required for indexing.
+यह समर्थित नहीं है। हम मूल्यांकन कर रहे हैं कि अनुक्रमण के लिए यह कार्यक्षमता आवश्यक है या नहीं।
-### Can I use data source templates in my NEAR Subgraph?
+### क्या मैं अपने NEAR सबग्राफ में data source templates का उपयोग कर सकता हूँ?
-This is not currently supported. We are evaluating whether this functionality is required for indexing.
+यह वर्तमान में समर्थित नहीं है। हम मूल्यांकन कर रहे हैं कि अनुक्रमण के लिए यह कार्यक्षमता आवश्यक है या नहीं।
-### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
+### Ethereum सबग्राफ "pending" और "current" संस्करणों का समर्थन करते हैं, मैं NEAR सबग्राफ का "pending" संस्करण कैसे तैनात कर सकता हूँ?
-Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
+NEAR सबग्राफ के लिए लंबित कार्यक्षमता अभी तक समर्थित नहीं है। इस बीच, आप एक नए संस्करण को एक अलग "named" सबग्राफ पर तैनात कर सकते हैं, और जब वह चेन हेड के साथ सिंक हो जाता है, तो आप अपने प्राथमिक "named" सबग्राफ पर पुनः तैनाती कर सकते हैं, जो उसी अंतर्निहित deployment ID का उपयोग करेगा, जिससे मुख्य सबग्राफ तुरंत सिंक हो जाएगा।
-### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+### मेरा प्रश्न अभी तक उत्तरित नहीं हुआ है, मुझे NEAR सबग्राफ बनाने में और सहायता कहाँ मिल सकती है?
-If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+यदि यह सबग्राफ विकास से संबंधित एक सामान्य प्रश्न है, तो शेष [Developer documentation](/subgraphs/quick-start/) में बहुत अधिक जानकारी उपलब्ध है। अन्यथा, कृपया [The Graph Protocol Discord](https://discord.gg/graphprotocol) से जुड़ें और #near चैनल में पूछें या near@thegraph.com पर ईमेल करें।
-## References
+## संदर्भ
-- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
+- [NEAR डेवलपर दस्तावेज़](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/hi/subgraphs/guides/polymarket.mdx b/website/src/pages/hi/subgraphs/guides/polymarket.mdx
index 74efe387b0d7..8ee9fad6ff50 100644
--- a/website/src/pages/hi/subgraphs/guides/polymarket.mdx
+++ b/website/src/pages/hi/subgraphs/guides/polymarket.mdx
@@ -1,23 +1,23 @@
---
-title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
-sidebarTitle: Query Polymarket Data
+title: The Graph पर Subgraphs के साथ Polymarket से ब्लॉकचेन डेटा क्वेरी करना
+sidebarTitle: Polymarket डेटा क्वेरी करें
---
-Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+Polymarket के ऑनचेन डेटा को GraphQL के माध्यम से सबग्राफ का उपयोग करके The Graph Network पर क्वेरी करें। सबग्राफ विकेंद्रीकृत API हैं, जिन्हें The Graph द्वारा संचालित किया जाता है, जो ब्लॉकचेन से डेटा को indexing और क्वेरी करने के लिए एक प्रोटोकॉल है।
-## Polymarket Subgraph on Graph Explorer
+## Graph Explorer पर Polymarket सबग्राफ
-You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+आप [The Graph Explorer पर Polymarket Subgraph के पेज](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) पर एक इंटरएक्टिव क्वेरी प्लेग्राउंड देख सकते हैं, जहां आप किसी भी क्वेरी का परीक्षण कर सकते हैं।
![Polymarket Playground](/img/Polymarket-playground.png)
-## How to use the Visual Query Editor
+## Visual Query Editor का उपयोग कैसे करें
-The visual query editor helps you test sample queries from your Subgraph.
+विज़ुअल क्वेरी एडिटर आपको अपने Subgraph से सैंपल क्वेरीज़ का परीक्षण करने में मदद करता है।
-You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want.
+आप जिन फ़ील्ड्स को चाहते हैं उन पर क्लिक करके GraphiQL Explorer की मदद से अपनी GraphQL क्वेरीज़ बना सकते हैं।
-### Example Query: Get the top 5 highest payouts from Polymarket
+### उदाहरण क्वेरी: Polymarket से शीर्ष 5 उच्चतम भुगतान प्राप्त करें
```
{
@@ -30,7 +30,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on
}
```
-### Example output
+### उदाहरण आउटपुट
```
{
@@ -73,39 +73,39 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on
}
```
## Polymarket's GraphQL Schema
-The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+इस Subgraph का schema [Polymarket के GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql) में परिभाषित है।
-### Polymarket Subgraph Endpoint
+### Polymarket सबग्राफ Endpoint
https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp
-The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer).
+Polymarket सबग्राफ एंडपॉइंट [Graph Explorer](https://thegraph.com/explorer) पर उपलब्ध है।
![Polymarket Endpoint](/img/Polymarket-endpoint.png)
-## How to Get your own API Key
+## अपनी स्वयं की API कुंजी कैसे प्राप्त करें
-1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
-2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+1. [https://thegraph.com/studio](http://thegraph.com/studio) पर जाएं और अपना वॉलेट कनेक्ट करें
+2. API कुंजी बनाने के लिए https://thegraph.com/studio/apikeys/ पर जाएं
-You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+आप इस API कुंजी का उपयोग [Graph Explorer](https://thegraph.com/explorer) पर किसी भी Subgraph में कर सकते हैं, और यह केवल Polymarket तक सीमित नहीं है।
-100k queries per month are free which is perfect for your side project!
+100k क्वेरी प्रति माह निःशुल्क हैं, जो आपके साइड प्रोजेक्ट के लिए बिल्कुल सही है!
-## Additional Polymarket Subgraphs
+## अतिरिक्त Polymarket सबग्राफ
- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
-## How to Query with the API
+## API से क्वेरी कैसे करें
-You can pass any GraphQL query to the Polymarket endpoint and receive data in json format.
+आप किसी भी GraphQL क्वेरी को Polymarket एंडपॉइंट पर भेज सकते हैं और JSON प्रारूप में डेटा प्राप्त कर सकते हैं।
-This following code example will return the exact same output as above.
+यह निम्नलिखित कोड उदाहरण उपरोक्त के समान ही सटीक आउटपुट लौटाएगा। -### Sample Code from node.js +### नमूना कोड Node.js से ``` const axios = require('axios'); @@ -127,22 +127,22 @@ const graphQLRequest = { }, }; -// Send the GraphQL query +// GraphQL क्वेरी भेजें axios(graphQLRequest) .then((response) => { - // Handle the response here + // यहां प्रतिक्रिया को संभालें const data = response.data.data console.log(data) }) .catch((error) => { - // Handle any errors + // किसी भी त्रुटि को संभालें console.error(error); }); ``` -### Additional resources +### अन्य संसाधन -For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [यहाँ पढ़ें](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +अपने Subgraph के प्रदर्शन को बेहतर बनाने के लिए इसे ऑप्टिमाइज़ और कस्टमाइज़ करने के सभी तरीकों का पता लगाने के लिए, [यहाँ Subgraph बनाने के बारे में और पढ़ें](/developing/creating-a-subgraph/)। diff --git a/website/src/pages/hi/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/hi/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..8ca11e5f1dfe 100644 --- a/website/src/pages/hi/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/hi/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -1,48 +1,48 @@ --- -title: How to Secure API Keys Using Next.js Server Components +title: कैसे सुरक्षित करें API Keys का उपयोग करके Next.js Server Components --- ## Overview -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. 
To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +हम [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) का उपयोग करके अपने dapp के फ्रंटेंड में हमारे API कुंजी के एक्सपोज़र को सही तरीके से सुरक्षित कर सकते हैं। हमारी API कुंजी की सुरक्षा को और बढ़ाने के लिए, हम [अपनी API कुंजी को कुछ सबग्राफ या सबग्राफ Studio में कुछ डोमेन तक सीमित कर सकते हैं।](/cookbook/upgrading-a-subgraph/#securing-your-api-key) -In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. +इस कुकबुक में, हम यह जानेंगे कि Next.js सर्वर कंपोनेंट कैसे बनाया जाए जो एक सबग्राफ से क्वेरी करता है, साथ ही API कुंजी को फ्रंटएंड से छुपाए रखता है। -### Caveats +### चेतावनी -- Next.js server components do not protect API keys from being drained using denial of service attacks. -- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. -- Next.js server components introduce centralization risks as the server can go down. +- Next.js सर्वर घटक डिनायल ऑफ़ सर्विस अटैक का उपयोग करके API कुंजियों को समाप्त होने से सुरक्षित नहीं कर सकते। +- The Graph Network gateways में सेवा को बाधित करने के हमलों का पता लगाने और उन्हें रोकने की रणनीतियाँ मौजूद हैं, हालांकि server components का उपयोग करने से ये सुरक्षा कमजोर हो सकती है। +- Next.js server components केंद्रीकरण के जोखिम प्रस्तुत करते हैं क्योंकि सर्वर बंद हो सकता है। -### Why It's Needed +### यह क्यों आवश्यक है -In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. 
While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. +एक मानक React एप्लिकेशन में, फ्रंटेंड कोड में शामिल API कुंजियाँ क्लाइंट-साइड पर उजागर हो सकती हैं, जिससे सुरक्षा का जोखिम बढ़ता है। जबकि `.env` फ़ाइलें सामान्यतः उपयोग की जाती हैं, ये कुंजियों की पूरी सुरक्षा नहीं करतीं क्योंकि React का कोड क्लाइंट साइड पर निष्पादित होता है, जो API कुंजी को हेडर में उजागर करता है। Next.js सर्वर घटक संवेदनशील कार्यों को सर्वर-साइड पर संभालकर इस मुद्दे का समाधान करते हैं। -### Using client-side rendering to query a Subgraph +### क्लाइंट-साइड रेंडरिंग का उपयोग करके सबग्राफ से क्वेरी करना ![Client-side rendering](/img/api-key-client-side-rendering.png) -### Prerequisites +### आवश्यक शर्तें -- An API key from [Subgraph Studio](https://thegraph.com/studio) -- Basic knowledge of Next.js and React. -- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). +- [Subgraph Studio](https://thegraph.com/studio) से एक API कुंजी +- Next.js और React का बुनियादी ज्ञान +- एक मौजूदा Next.js प्रोजेक्ट जो [App Router](https://nextjs.org/docs/app) का उपयोग करता है। -## Step-by-Step Cookbook +## स्टेप-बाय-स्टेप कुकबुक -### Step 1: Set Up Environment Variables +### चरण 1: पर्यावरण चर सेट करें -1. In our Next.js project root, create a `.env.local` file. -2. Add our API key: `API_KEY=`. +1. हमारे Next.js प्रोजेक्ट के रूट में, एक `.env.local` फ़ाइल बनाएं। +2. हमारी API कुंजी जोड़ें: `API_KEY=`. -### Step 2: Create a Server Component +### चरण 2: एक सर्वर घटक बनाएं -1. In our `components` directory, create a new file, `ServerComponent.js`. -2. Use the provided example code to set up the server component. +1. हमारी `components` निर्देशिका में, एक नई फ़ाइल `ServerComponent.js` बनाएं। +2.
प्रदान किए गए उदाहरण कोड का उपयोग करके सर्वर घटक सेट करें। -### Step 3: Implement Server-Side API Request +### चरण 3: सर्वर-साइड API अनुरोध को लागू करें -In `ServerComponent.js`, add the following code: +`ServerComponent.js` में, निम्नलिखित कोड जोड़ें: ```javascript const API_KEY = process.env.API_KEY @@ -95,10 +95,10 @@ export default async function ServerComponent() { } ``` -### Step 4: Use the Server Component +### चरण 4: सर्वर घटक का उपयोग करें -1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. -2. Render the component: +1. हमारी पृष्ठ फ़ाइल (जैसे, `pages/index.js`) में `ServerComponent` आयात करें। +2. कंपोनेंट को रेंडर करें: ```javascript import ServerComponent from './components/ServerComponent' @@ -112,12 +112,12 @@ export default function Home() { } ``` -### Step 5: Run and Test Our Dapp +### चरण 5: हमारा Dapp चलाएँ और परीक्षण करें -Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. +अपने Next.js एप्लिकेशन को `npm run dev` का उपयोग करके प्रारंभ करें। सत्यापित करें कि सर्वर कंपोनेंट डेटा प्राप्त कर रहा है बिना API कुंजी को उजागर किए। ![Server-side rendering](/img/api-key-server-side-rendering.png) -### Conclusion +### निष्कर्ष -By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
+Next.js Server Components का उपयोग करके, हमने प्रभावी रूप से API key को क्लाइंट-साइड से छिपा दिया है, जिससे हमारे application की सुरक्षा बढ़ गई है। यह विधि सुनिश्चित करती है कि संवेदनशील संचालन server-side पर संभाले जाएं, जिससे संभावित client-side कमजोरियों से बचाव हो। अंत में, अपनी API कुंजी की सुरक्षा को और बढ़ाने के लिए [other API key security measures](/subgraphs/querying/managing-api-keys/) को अवश्य एक्सप्लोर करें। diff --git a/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..be71b8199574 --- /dev/null +++ b/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: डेटा को एकत्रित करें उपयोग करके Subgraph Composition +sidebarTitle: एक Composable Subgraph बनाएं जिसमें कई Subgraphs शामिल हों +--- + +Subgraph संयोजन का उपयोग करके विकास समय को तेज़ करें। आवश्यक डेटा के साथ एक मूल Subgraph बनाएं, फिर उसके ऊपर अतिरिक्त Subgraph बनाएं। + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### संयोजन के लाभ + +Subgraph संयोजन एक शक्तिशाली विशेषता है जो स्केलिंग के लिए अनुमति देती है: + +- पुनः उपयोग करें, मिलाएं, और मौजूदा डेटा को संयोजित करें +- विकास और क्वेरी को सुव्यवस्थित करें +- एकाधिक डेटा स्रोतों का उपयोग करें (अधिकतम पांच स्रोत Subgraphs तक) +- Subgraph की सिंकिंग स्पीड तेज करें +- त्रुटियों को संभालें और पुनःसिंक को अनुकूलित करें + +## आर्किटेक्चर अवलोकन + +यह उदाहरण दो Subgraphs की स्थापना के साथ जुड़ा हुआ है: + +1. **सोर्स Subgraph**: घटनाओं के डेटा को entities के रूप में ट्रैक करता है. +2. 
**आश्रित Subgraph**: स्रोत Subgraph को डेटा स्रोत के रूप में उपयोग करता है। + +आप इन्हें `source` और `dependent` डायरेक्टरी में पा सकते हैं। + +- **स्रोत Subgraph** एक बेसिक इवेंट-ट्रैकिंग Subgraph है जो संबंधित contract द्वारा एमिट किए गए इवेंट्स को रिकॉर्ड करता है। +- **निर्भर Subgraph** स्रोत Subgraph को एक डेटा स्रोत के रूप में संदर्भित करता है, और स्रोत से entities का उपयोग ट्रिगर के रूप में करता है। + +जबकि **स्रोत Subgraph** एक मानक Subgraph है, आश्रित Subgraph, Subgraph संयोजन सुविधा का उपयोग करता है। + +## आवश्यक शर्तें + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e.
you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## शुरू करिये + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### विशिष्टताएँ + +- इस उदाहरण को सरल रखने के लिए, सभी स्रोत Subgraph केवल ब्लॉक हैंडलर का उपयोग करते हैं। हालांकि, वास्तविक वातावरण में, प्रत्येक स्रोत Subgraph विभिन्न स्मार्ट कॉन्ट्रैक्ट्स से डेटा का उपयोग करेगा। +- ये उदाहरण दिखाते हैं कि किसी अन्य Subgraph की schema को कैसे आयात किया जाए और इसकी कार्यक्षमता को बढ़ाया जाए। +- प्रत्येक स्रोत Subgraph को एक विशिष्ट entity के साथ अनुकूलित किया जाता है। +- सभी कमांड आवश्यक डिपेंडेंसीज़ को इंस्टॉल करती हैं, GraphQL स्कीमा के आधार पर कोड जेनरेट करती हैं, Subgraph को बिल्ड करती हैं, और इसे आपकी लोकल Graph Node इंस्टेंस पर डिप्लॉय करती हैं। + +### चरण 1. Block Time साधन Subgraph को डिप्लॉय करें + +यह पहला स्रोत Subgraph प्रत्येक ब्लॉक के लिए ब्लॉक समय की गणना करता है। + +- यह अन्य Subgraphs से schemas को इम्पोर्ट करता है और प्रत्येक `ब्लॉक` के माइन किए जाने के समय को दर्शाने वाले timestamp फ़ील्ड के साथ एक block entity जोड़ता है। +- यह समय-संबंधित ब्लॉकचेन घटनाओं (जैसे, ब्लॉक टाइमस्टैम्प) को सुनता है और इस डेटा को प्रोसेस करके Subgraph की entities को अपडेट करता है। + +इस Subgraph को लोकल रूप से डिप्लॉय करने के लिए, निम्नलिखित कमांड्स चलाएँ: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### चरण 2. 
Block Cost Source Subgraph को डिप्लॉय करें + +यह दूसरा स्रोत Subgraph प्रत्येक ब्लॉक की लागत को इंडेक्स करता है। + +#### मुख्य कार्य + +- यह अन्य Subgraphs से schemas आयात करता है और लागत-संबंधी फ़ील्ड के साथ एक `block` entity जोड़ता है। +- यह ब्लॉकचेन घटनाओं को सुनता है जो लागत (जैसे गैस शुल्क, लेनदेन लागत) से संबंधित होती हैं और इस डेटा को प्रोसेस करके Subgraph की entities को अपडेट करता है। + +इस Subgraph को लोकल रूप से डिप्लॉय करने के लिए, ऊपर दिए गए वही कमांड्स चलाएँ। + +### चरण 3. स्रोत Subgraph में ब्लॉक साइज़ परिभाषित करें + +यह तीसरा स्रोत Subgraph प्रत्येक ब्लॉक के आकार को इंडेक्स करता है। इस Subgraph को लोकली डिप्लॉय करने के लिए, ऊपर दिए गए वही कमांड्स चलाएँ। + +#### मुख्य कार्य + +- यह मौजूदा schemas को अन्य Subgraphs से आयात करता है और एक `block` entity जोड़ता है, जिसमें प्रत्येक block के आकार को दर्शाने वाला एक `size` फ़ील्ड होता है। +- यह ब्लॉक साइज़ (जैसे, स्टोरेज या वॉल्यूम) से संबंधित ब्लॉकचेन इवेंट्स को सुनता है और इस डेटा को प्रोसेस करके Subgraph की entities को उचित रूप से अपडेट करता है। + +### चरण 4. ब्लॉक स्टैट्स Subgraph में मिलाएँ + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> नोट: +> +> - किसी स्रोत Subgraph में कोई भी परिवर्तन संभवतः एक नया deployment ID उत्पन्न करेगा। +> - Subgraph manifest में डेटा स्रोत पते में नवीनतम परिवर्तनों का लाभ उठाने के लिए डिप्लॉयमेंट ID को अपडेट करना सुनिश्चित करें। +> - संयोजित Subgraph को तैनात करने से पहले सभी स्रोत Subgraphs को तैनात किया जाना चाहिए। + +#### मुख्य कार्य + +- यह एक समेकित डेटा मॉडल प्रदान करता है जो सभी प्रासंगिक ब्लॉक मेट्रिक्स को शामिल करता है। +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
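ऊपर के चरणों में वर्णित संयोजन manifest स्तर पर इस तरह दिखता है: संयोजित Subgraph अपने `subgraph.yaml` में स्रोत Subgraph को `kind: subgraph` डेटा स्रोत के रूप में संदर्भित करता है। नीचे केवल एक उदाहरण-स्केच है — `address` में दी गई deployment ID, नाम और entities काल्पनिक हैं; सटीक फ़ील्ड्स के लिए आधिकारिक दस्तावेज़ और [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) रिपॉज़िटरी देखें:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # ऑनचेन contract के बजाय एक स्रोत Subgraph डेटा स्रोत है
    name: BlockTime # काल्पनिक नाम
    network: mainnet
    source:
      address: 'Qm...' # स्रोत Subgraph की deployment ID (यहाँ प्लेसहोल्डर)
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - BlockStats # संयोजित Subgraph की अपनी entity (काल्पनिक)
      handlers:
        - handler: handleBlockTime # स्रोत entity के ट्रिगर पर चलने वाला handler
          entity: Block # स्रोत Subgraph की वह entity जो handler को ट्रिगर करती है
```

ध्यान दें कि स्रोत Subgraph में कोई भी परिवर्तन एक नई deployment ID उत्पन्न करता है, इसलिए `address` फ़ील्ड को हर बार अपडेट करना आवश्यक है।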
+ +## मुख्य निष्कर्ष + +- यह शक्तिशाली टूल आपके Subgraph डेवलपमेंट को स्केल करेगा और आपको कई Subgraph को एक साथ जोड़ने की अनुमति देगा। +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- यह विशेषता स्केलेबिलिटी को अनलॉक करती है, जिससे विकास और रखरखाव की दक्षता सरल हो जाती है। + +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, और जानने के लिए देखें [Subgraph advanced features.](/developing/creating/advanced/) +- एग्रीगेशन के बारे में अधिक जानने के लिए, [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations) देखें। diff --git a/website/src/pages/hi/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/hi/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..f349dd43b5b4 100644 --- a/website/src/pages/hi/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/hi/subgraphs/guides/subgraph-debug-forking.mdx @@ -1,26 +1,26 @@ --- -title: Quick and Easy Subgraph Debugging Using Forks +title: फोर्क्स का उपयोग करके त्वरित और आसान सबग्राफ डिबगिंग --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! 
+जैसा कि कई प्रणालियों में बड़े पैमाने पर डेटा प्रोसेसिंग के दौरान होता है, The Graph के Indexers (Graph Nodes) को आपके सबग्राफ को लक्षित ब्लॉकचेन के साथ सिंक करने में काफी समय लग सकता है। डिबगिंग के उद्देश्य से त्वरित परिवर्तन करने और Indexing के लिए आवश्यक लंबे इंतजार के बीच का अंतर अत्यधिक प्रतिकूल होता है, और हम इस समस्या से भली-भांति परिचित हैं। इसी कारण हम **सबग्राफ फॉर्किंग** पेश कर रहे हैं, जिसे [LimeChain](https://limechain.tech/) द्वारा विकसित किया गया है, और इस लेख में मैं आपको दिखाऊंगा कि इस फीचर का उपयोग करके Subgraph डिबगिंग को काफी तेज़ कैसे किया जा सकता है! -## Ok, what is it? +## ठीक है वो क्या है? -**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). +**सबग्राफ फॉर्किंग** वह प्रक्रिया है जिसमें आलसी तरीके से किसी दूसरे सबग्राफ के स्टोर (आमतौर पर एक रिमोट स्टोर) से entities को लाया जाता है। -In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. +सबग्राफ फॉर्किंग आपको अपने असफल सबग्राफ को ब्लॉक X पर डिबग करने की अनुमति देता है बिना ब्लॉक X तक सिंक होने का इंतजार किए। -## What?! How? +## क्या?! कैसे? -When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! +जब आप एक सबग्राफ को रिमोट ग्राफ-नोड पर indexing के लिए डिप्लॉय करते हैं और यह ब्लॉक _X_ पर फेल हो जाता है, तो अच्छी खबर यह है कि ग्राफ नोड अभी भी अपने स्टोर का उपयोग करके GraphQL क्वेरीज़ को सर्व करेगा, जो ब्लॉक _X_ तक सिंक है। यह बहुत बढ़िया है!
इसका मतलब है कि हम इस "अप-टू-डेट" स्टोर का लाभ उठा सकते हैं ताकि ब्लॉक _X_ को indexing करते समय उत्पन्न होने वाली बग्स को ठीक किया जा सके। -In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. +हम एक विफल हो रहे सबग्राफ को एक दूरस्थ ग्राफ-नोड से fork करने जा रहे हैं, जो निश्चित रूप से ब्लॉक X तक सबग्राफ को इंडेक्स कर चुका है, ताकि डिबग किए जा रहे स्थानीय रूप से तैनात सबग्राफ को ब्लॉक _X_ पर इंडेक्सिंग स्थिति का अद्यतन दृश्य प्रदान किया जा सके। -## Please, show me some code! +## कृपया मुझे कुछ कोड दिखाओ! -To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +सबग्राफ डिबगिंग पर ध्यान केंद्रित रखने के लिए, चलिए चीजों को सरल रखते हैं और [example-सबग्राफ](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) के साथ चलते हैं, जो Ethereum Gravity स्मार्ट contract को indexing कर रहा है। -Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: +यहां Gravatars को indexing करने के लिए handler परिभाषित किए गए हैं, जिनमें कोई बग नहीं है: ```tsx export function handleNewGravatar(event: NewGravatar): void { @@ -44,43 +44,43 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +अरे, कितनी दुर्भाग्यपूर्ण बात है, जब मैं अपना पूरी तरह से सही दिखने वाला सबग्राफ सबग्राफ Studio पर डिप्लॉय करता हूँ, तो यह _"Gravatar not found!"_ त्रुटि के साथ फेल हो जाता है। -The usual way to attempt a fix is: +फिक्स का प्रयास करने का सामान्य तरीका है: -1.
Make a change in the mappings source, which you believe will solve the issue (while I know it won't). -2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +1. मैपिंग सोर्स में बदलाव करें, जो आपको लगता है कि समस्या का समाधान करेगा (जबकि मुझे पता है कि यह नहीं होगा)। +2. सबग्राफ को [सबग्राफ Studio](https://thegraph.com/studio/) (या किसी अन्य remote ग्राफ-नोड) पर फिर से डिप्लॉय करें। +3. इसके सिंक-अप होने की प्रतीक्षा करें। +4. यदि यह फिर से टूट जाता है तो 1 पर वापस जाएँ, अन्यथा: हुर्रे! -It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ +यह वास्तव में एक सामान्य डिबग प्रक्रिया के समान है, लेकिन इसमें एक कदम है जो प्रक्रिया को बहुत धीमा कर देता है: _3. इसके सिंक-अप होने की प्रतीक्षा करें।_ -Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: +**सबग्राफ फॉर्किंग** का उपयोग करके, हम मूल रूप से इस चरण को समाप्त कर सकते हैं। यह इस प्रकार दिखता है: -0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +0. **_appropriate fork-base_** सेट के साथ एक लोकल ग्राफ-नोड चालू करें। +1. मैपिंग सोर्स में परिवर्तन करें, जिसके बारे में आपको लगता है कि इससे समस्या हल हो जाएगी. +2. स्थानीय ग्राफ-नोड पर डिप्लॉय करें, **_असफल हो रहे सबग्राफ को फोर्क_** करते हुए और **_समस्या वाले ब्लॉक से प्रारंभ_** करते हुए। +3. यदि यह फिर से टूट जाता है, तो 1 पर वापस जाएँ, अन्यथा: हुर्रे! -Now, you may have 2 questions: +अब, आपके 2 प्रश्न हो सकते हैं: -1. fork-base what??? -2. Forking who?! +1. फोर्क-बेस क्या??? +2. फोर्किंग कौन?! -And I answer: +और मैं उत्तर देता हूं: -1.
`fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +1. `fork-base` "मूल" URL है, जिससे जब _subgraph id_ जोड़ी जाती है, तो परिणामी URL (`/`) उस सबग्राफ के स्टोर के लिए एक वैध GraphQL एंडपॉइंट बन जाता है। +2. फोर्किंग आसान है, पसीना बहाने की जरूरत नहीं: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +इसके अलावा, सबग्राफ manifest में `dataSources.source.startBlock` फ़ील्ड को समस्या वाले ब्लॉक की संख्या पर सेट करना न भूलें, ताकि आप गैर-ज़रूरी ब्लॉकों को indexing करने से बच सकें और fork का लाभ उठा सकें! -So, here is what I do: +तो, यहाँ मैं क्या करता हूँ: -1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. मैंने एक लोकल ग्राफ-नोड स्पिन-अप किया [(यहाँ देखें कैसे करें)](https://github.com/graphprotocol/graph-node#running-a-local-graph-node) जिसमें `fork-base` ऑप्शन को सेट किया: `https://api.thegraph.com/subgraphs/id/`, क्योंकि मैं एक सबग्राफ को फोर्क करने जा रहा हूँ, जो कि पहले मैंने [सबग्राफ Studio](https://thegraph.com/studio/) पर डिप्लॉय किया था और उसमें बग्स थे। ``` $ cargo run -p graph-node --release -- \ @@ -90,12 +90,12 @@ $ cargo run -p graph-node --release -- \ --fork-base https://api.thegraph.com/subgraphs/id/ ``` -2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers.
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +2. सावधानी से निरीक्षण करने के बाद, मुझे पता चलता है कि मेरे दो हैंडलरों में `Gravatar` के `id` प्रतिनिधित्व में असंगति है। जबकि `handleNewGravatar` इसे हेक्स (`event.params.id.toHex()`) में बदलता है, handleUpdatedGravatar एक int32 (`event.params.id.toI32()`) का उपयोग करता है, जिससे `handleUpdatedGravatar` "Gravatar not found!" के साथ पैनिक हो जाता है। मैंने दोनों को `id` को हेक्स में बदलने के लिए संशोधित किया है। +3. मैंने बदलाव करने के बाद अपने सबग्राफ को लोकल Graph Node पर डिप्लॉय किया, **_failing सबग्राफ को fork_** करके और `subgraph.yaml` में `dataSources.source.startBlock` को `6190343` पर सेट किया। ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. मैं अपने अब बग-मुक्त सबग्राफ को एक दूरस्थ ग्राफ-नोड पर तैनात करता हूँ और खुशी-खुशी जीवन व्यतीत करता हूँ! 
(no potatoes tho) diff --git a/website/src/pages/hi/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/hi/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..6f6fcb7ace1e 100644 --- a/website/src/pages/hi/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/hi/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,29 +1,29 @@ --- -title: Safe Subgraph Code Generator +title: सुरक्षित सबग्राफ कोड जेनरेटर --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) एक कोड जनरेशन टूल है जो किसी प्रोजेक्ट की GraphQL स्कीमा से हेल्पर फंक्शन्स का एक सेट जेनरेट करता है। यह सुनिश्चित करता है कि आपके सबग्राफ में सभी इंटरैक्शन्स पूरी तरह सुरक्षित और संगत हों। -## Why integrate with Subgraph Uncrashable? +## सबग्राफ अनक्रैशेबल के साथ एकीकृत क्यों करें? -- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. +- **निरंतर अपटाइम**। गलत तरीके से प्रबंधित entities आपके Subgraph को क्रैश कर सकते हैं, जिससे उन प्रोजेक्ट्स में बाधा आ सकती है जो The Graph पर निर्भर हैं। सहायक फ़ंक्शंस सेट करें ताकि आपका Subgraph "अनक्रैशेबल" बना रहे और व्यापार निरंतरता सुनिश्चित हो। -- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. 
+- **पूरी तरह सुरक्षित**। Subgraph विकास में आम समस्याएँ यह होती हैं कि अपरिभाषित entities को लोड करने में समस्या आती है, सभी entities के मूल्यों को सेट या इनिशियलाइज़ नहीं किया जाता, और entities को लोड और सेव करने में race conditions हो सकती हैं। सुनिश्चित करें कि entities के साथ सभी इंटरैक्शन पूरी तरह से परमाणु (atomic) हों। -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. +- **यूज़र कॉन्फ़िगरेबल** डिफ़ॉल्ट मान सेट करें और सुरक्षा जाँच के स्तर को अपनी परियोजना की आवश्यकताओं के अनुसार कॉन्फ़िगर करें। चेतावनी लॉग दर्ज किए जाते हैं, जो यह संकेत देते हैं कि कहाँ पर Subgraph लॉजिक का उल्लंघन हुआ है, जिससे डेटा की सटीकता सुनिश्चित करने के लिए समस्या को ठीक किया जा सके। -**Key Features** +**मुख्य विशेषताएँ** -- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- Code generation टूल **सभी** Subgraph प्रकारों को सपोर्ट करता है और उपयोगकर्ताओं को मूल्यों पर उपयुक्त डिफ़ॉल्ट सेट करने के लिए कॉन्फ़िगर करने योग्य बनाता है। यह कोड जनरेशन इस कॉन्फ़िगरेशन का उपयोग उपयोगकर्ता की विशिष्टताओं के अनुसार हेल्पर फ़ंक्शंस उत्पन्न करने के लिए करेगा। -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. 
+- फ्रेमवर्क में इकाई वैरिएबल के समूहों के लिए कस्टम, लेकिन सुरक्षित, सेटर फ़ंक्शन बनाने का एक तरीका (कॉन्फिग फ़ाइल के माध्यम से) भी शामिल है। इस तरह उपयोगकर्ता के लिए एक पुरानी ग्राफ़ इकाई को लोड/उपयोग करना असंभव है और फ़ंक्शन द्वारा आवश्यक वैरिएबल को सहेजना या सेट करना भूलना भी असंभव है। -- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. +- Warning logs को उन लॉग्स के रूप में रिकॉर्ड किया जाता है जो यह संकेत देते हैं कि Subgraph लॉजिक में कहाँ उल्लंघन हुआ है, ताकि समस्या को ठीक करने में मदद मिल सके और डेटा की सटीकता सुनिश्चित की जा सके। -Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. +सबग्राफ अनक्रैशेबल को ग्राफ़ CLI codegen कमांड का उपयोग करके एक वैकल्पिक फ़्लैग के रूप में चलाया जा सकता है। ```sh graph codegen -u [options] [] ``` -Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. +[Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) पर जाएँ या इस [वीडियो ट्यूटोरियल](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) को देखें ताकि आप अधिक जान सकें और सुरक्षित Subgraphs विकसित करना शुरू कर सकें। diff --git a/website/src/pages/hi/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/hi/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..47f32c5c5739 100644 --- a/website/src/pages/hi/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/hi/subgraphs/guides/transfer-to-the-graph.mdx @@ -1,35 +1,35 @@ --- -title: Transfer to The Graph +title: The Graph पर स्थानांतरण --- -Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+किसी भी प्लेटफ़ॉर्म से जल्दी से अपने सबग्राफ को [The Graph के विकेंद्रीकृत नेटवर्क](https://thegraph.com/networks/) पर अपग्रेड करें। -## Benefits of Switching to The Graph +## The Graph पर स्विच करने के लाभ -- Use the same Subgraph that your apps already use with zero-downtime migration. -- Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. +- आपके ऐप्स पहले से जिस सबग्राफ का उपयोग कर रहे हैं, उसी का उपयोग करें और बिना किसी डाउनटाइम के माइग्रेशन करें। +- 100+ Indexers द्वारा समर्थित एक वैश्विक नेटवर्क से विश्वसनीयता बढ़ाएं। +- सबग्राफ के लिए 24/7 बिजली की तेजी से सहायता प्राप्त करें, एक ऑन-कॉल इंजीनियरिंग टीम के साथ। -## Upgrade Your Subgraph to The Graph in 3 Easy Steps +## अपने Subgraph को The Graph में 3 आसान कदमों में अपग्रेड करें -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [अपने स्टूडियो पर्यावरण को सेट करें](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [अपने सबग्राफ को Studio में डिप्लॉय करें](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [अपने Subgraph को The Graph के विकेंद्रीकृत नेटवर्क पर प्रकाशित करें](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) -## 1. Set Up Your Studio Environment +## 1. अपने स्टूडियो वातावरण को सेट करें -### Create a Subgraph in Subgraph Studio +### Subgraph Studio में सबग्राफ बनाएँ -- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+- [Subgraph Studio](https://thegraph.com/studio/) पर जाएँ और अपने वॉलेट को कनेक्ट करें। +- "Create a सबग्राफ" पर क्लिक करें। यह अनुशंसा की जाती है कि सबग्राफ का नाम टाइटल केस में रखा जाए: "सबग्राफ Name Chain Name"। -> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. +> नोट: प्रकाशित करने के बाद, सबग्राफ का नाम संपादनीय होगा लेकिन प्रत्येक बार ऑनचेन क्रिया की आवश्यकता होगी, इसलिए इसे सही से नाम दें। -### Install the Graph CLI⁠ +### Graph CLI स्थापित करें -You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +आपको [Node.js](https://nodejs.org/) और अपनी पसंद का पैकेज मैनेजर (`npm` या `pnpm`) इंस्टॉल करना होगा ताकि आप Graph CLI का उपयोग कर सकें। [सबसे हालिया](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI संस्करण चेक करें। -On your local machine, run the following command: +अपनी लोकल मशीन पर, निम्नलिखित कमांड चलाएँ: Using [npm](https://www.npmjs.com/):
+यदि आपके पास आपका सोर्स कोड है, तो आप इसे आसानी से Studio पर डिप्लॉय कर सकते हैं। यदि आपके पास यह नहीं है, तो यहाँ आपके सबग्राफ को डिप्लॉय करने का एक त्वरित तरीका दिया गया है। -In The Graph CLI, run the following command: +The Graph CLI में, निम्नलिखित कमांड चलाएँ: ```sh graph deploy --ipfs-hash - ``` -> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **नोट** हर सबग्राफ का एक IPFS हैश (Deployment ID) होता है, जो इस तरह दिखता है: "Qmasdfad...". डिप्लॉय करने के लिए बस इस IPFS हैश का उपयोग करें। आपको एक संस्करण दर्ज करने के लिए कहा जाएगा (जैसे, v0.0.1)। -## 3. Publish Your Subgraph to The Graph Network +## 3. अपने Subgraph को The Graph Network पर प्रकाशित करें -![publish button](/img/publish-sub-transfer.png) +![पब्लिश बटन](/img/publish-sub-transfer.png) -### Query Your Subgraph +### अपने Subgraph को क्वेरी करें -> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> कम से कम 3 Indexers को अपने सबग्राफ की क्वेरी करने के लिए आकर्षित करने के लिए, यह अनुशंसा की जाती है कि आप कम से कम 3,000 GRT क्यूरेट करें। क्यूरेटिंग के बारे में अधिक जानने के लिए, [Curating](/resources/roles/curating/) पर The Graph देखें। -You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. 
+आप किसी भी सबग्राफ की [क्वेरी](/subgraphs/querying/introduction/) शुरू कर सकते हैं: इसके लिए GraphQL क्वेरी को सबग्राफ के क्वेरी URL एंडपॉइंट पर भेजें, जो Subgraph Studio में उसके Explorer पेज के शीर्ष पर स्थित होता है।
-#### Example
+#### उदाहरण
-[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+[CryptoPunks Ethereum सबग्राफ](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
![Query URL](/img/cryptopunks-screenshot-transfer.png)
-The query URL for this Subgraph is:
+इस सबग्राफ के लिए क्वेरी URL है:
```sh
https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK
```
-Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+अब, आपको केवल **अपनी खुद की API Key** भरने की आवश्यकता है ताकि आप इस endpoint पर GraphQL queries भेज सकें।
-### Getting your own API Key
+### अपनी खुद की API Key प्राप्त करना
-You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+आप Subgraph Studio में पृष्ठ के शीर्ष पर “API Keys” मेनू के तहत API Keys बना सकते हैं:
![API keys](/img/Api-keys-screenshot.png)
-### Monitor Subgraph Status
+### सबग्राफ की स्थिति की निगरानी करें
-Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+अपग्रेड करने के बाद, आप अपने सबग्राफ को [Subgraph Studio](https://thegraph.com/studio/) में एक्सेस और प्रबंधित कर सकते हैं और सभी सबग्राफ को [The Graph Explorer](https://thegraph.com/networks/) में एक्सप्लोर कर सकते हैं।
### Additional Resources
-- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +- तेजी से एक नया सबग्राफ बनाने और प्रकाशित करने के लिए, [Quick Start](/subgraphs/quick-start/) देखें। +- अपने सबग्राफ को बेहतर प्रदर्शन के लिए अनुकूलित और कस्टमाइज़ करने के सभी तरीकों का पता लगाने के लिए, [यहाँ और पढ़ें](/developing/creating-a-subgraph/)। diff --git a/website/src/pages/hi/subgraphs/querying/best-practices.mdx b/website/src/pages/hi/subgraphs/querying/best-practices.mdx index 3dd4ad1007d4..aa4caefd2f3d 100644 --- a/website/src/pages/hi/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/hi/subgraphs/querying/best-practices.mdx @@ -1,10 +1,10 @@ --- -title: सर्वोत्तम प्रथाओं को क्वेरी करना +title: Querying Best Practices --- The Graph ब्लॉकचेन से डेटा क्वेरी करने का एक विकेन्द्रीकृत तरीका प्रदान करता है। इसका डेटा एक GraphQL API के माध्यम से एक्सपोज़ किया जाता है, जिससे इसे GraphQL भाषा के साथ क्वेरी करना आसान हो जाता है। -GraphQL भाषा के आवश्यक नियमों और सर्वोत्तम प्रथाओं को सीखें ताकि आप अपने subgraph को अनुकूलित कर सकें। +GraphQL भाषा के आवश्यक नियम और Best Practices सीखें ताकि आप अपने Subgraph को optimize कर सकें। --- @@ -14,7 +14,7 @@ GraphQL भाषा के आवश्यक नियमों और सर REST API के विपरीत, एक रेखांकन API एक स्कीमा पर बनाया गया है जो परिभाषित करता है कि कौन से प्रश्न किए जा सकते हैं। -For example, a query to get a token using the `token` query will look as follows: +उदाहरण के लिए, `token` क्वेरी का उपयोग करके एक टोकन प्राप्त करने के लिए की गई क्वेरी इस प्रकार होगी: ```graphql query GetToken($id: ID!) { @@ -25,7 +25,7 @@ query GetToken($id: ID!) 
{
}
```
-which will return the following predictable JSON response (_when passing the proper `$id` variable value_):
+जो निम्नलिखित पूर्वानुमानित JSON प्रतिक्रिया लौटाएगा (_जब उचित `$id` वेरिएबल वैल्यू पास की जाएगी_):
```json
{
@@ -36,9 +36,9 @@ which will return the following predictable JSON response (_when passing the pro
}
```
-GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/).
+GraphQL क्वेरीज़ GraphQL भाषा का उपयोग करती हैं, जो कि [एक स्पेसिफिकेशन](https://spec.graphql.org/) पर परिभाषित है।
-The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders):
+उपरोक्त `GetToken` क्वेरी भाषा के कई भागों से बनी है (नीचे `[...]` प्लेसहोल्डर के साथ प्रतिस्थापित):
```graphql
query [operationName]([variableName]: [variableType]) {
@@ -52,31 +52,31 @@ query [operationName]([variableName]: [variableType]) {
## GraphQL क्वेरी लिखने के नियम
-- Each `queryName` must only be used once per operation.
-- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`)
-- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
+- प्रत्येक `queryName` को प्रत्येक ऑपरेशन में केवल एक बार ही उपयोग किया जाना चाहिए।
+- प्रत्येक `field` का चयन में केवल एक बार ही उपयोग किया जा सकता है (हम `token` के अंतर्गत `id` को दो बार क्वेरी नहीं कर सकते)।
+- कुछ `field` या क्वेरी (जैसे `tokens`) जटिल प्रकार के परिणाम लौटाते हैं, जिनके लिए उप-फ़ील्ड का चयन आवश्यक होता है। जब अपेक्षित हो तब चयन न देना (या जब अपेक्षित न हो - उदाहरण के लिए, `id` पर चयन देना) एक त्रुटि उत्पन्न करेगा। किसी फ़ील्ड के प्रकार को जानने के लिए, कृपया [Graph Explorer](/subgraphs/explorer/) देखें।
- किसी तर्क को असाइन किया गया कोई भी चर उसके प्रकार से मेल खाना चाहिए।
- चरों की दी गई सूची में, उनमें से प्रत्येक अद्वितीय होना चाहिए।
- सभी परिभाषित चर का उपयोग किया जाना चाहिए।
> ध्यान दें: इन नियमों का पालन न करने पर The Graph API से त्रुटि होगी।
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+नियमों की पूरी सूची और कोड उदाहरणों के लिए, [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/) देखें।
### एक ग्राफ़क्यूएल एपीआई के लिए एक प्रश्न भेजना
GraphQL एक भाषा और प्रथाओं का सेट है जो HTTP के माध्यम से संचालित होता है।
-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
+इसका मतलब है कि आप एक GraphQL API को मानक `fetch` (स्थानीय रूप से या `@whatwg-node/fetch` या `isomorphic-fetch` के माध्यम से) का उपयोग करके क्वेरी कर सकते हैं। -However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: +हालांकि, जैसा कि ["Querying from an Application"](/subgraphs/querying/from-an-application/) में उल्लेख किया गया है, यह अनुशंसित है कि `graph-client` का उपयोग किया जाए, जो निम्नलिखित अद्वितीय विशेषताओं का समर्थन करता है: -- क्रॉस-चेन सबग्राफ हैंडलिंग: एक ही क्वेरी में कई सबग्राफ से पूछताछ -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Cross-chain Subgraph Handling: एक ही query में multiple Subgraphs से data प्राप्त करना +- [स्वचालित ब्लॉक ट्रैकिंग](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [स्वचालित Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - पूरी तरह से टाइप किया गया परिणाम -Here's how to query The Graph with `graph-client`: +The Graph के साथ `graph-client` का उपयोग करके क्वेरी करने का तरीका: ```tsx import { execute } from '../.graphclient' @@ -100,7 +100,7 @@ async function main() { main() ``` -More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/). 
+अधिक GraphQL क्लाइंट विकल्पों को ["Querying from an Application"](/subgraphs/querying/from-an-application/) में कवर किया गया है।
---
@@ -122,12 +122,12 @@ query GetToken {
`
```
-While the above snippet produces a valid GraphQL query, **it has many drawbacks**:
+जबकि उपरोक्त स्निपेट एक मान्य GraphQL क्वेरी उत्पन्न करता है, **इसमें कई कमियाँ हैं**:
-- it makes it **harder to understand** the query as a whole
-- developers are **responsible for safely sanitizing the string interpolation**
-- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side**
-- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools)
+- यह संपूर्ण क्वेरी को **समझना कठिन** बना देता है
+- डेवलपर्स **स्ट्रिंग इंटरपोलेशन को सुरक्षित रूप से सैनिटाइज़ करने के लिए जिम्मेदार होते हैं**
+- रिक्वेस्ट पैरामीटर्स के रूप में वेरिएबल्स के मान न भेजने से **सर्वर-साइड पर संभावित कैशिंग रुक जाती है**
+- यह **टूल्स को क्वेरी का स्टैटिक रूप से विश्लेषण करने से रोकता है** (उदाहरण: Linter या टाइप जेनरेशन टूल्स)
इसी कारण, यह अनुशंसा की जाती है कि हमेशा क्वेरीज़ को स्थिर स्ट्रिंग्स के रूप में लिखा जाए।
@@ -151,18 +151,18 @@ const result = await execute(query, {
})
```
-Doing so brings **many advantages**:
+ऐसा करने से **कई लाभ** होते हैं:
-- **Easy to read and maintain** queries
-- The GraphQL **server handles variables sanitization**
-- **Variables can be cached** at server-level
-- **Queries can be statically analyzed by tools** (more on this in the following sections)
+- **आसानी से पढ़ने और बनाए रखने योग्य** क्वेरीज़
+- GraphQL **सर्वर वेरिएबल्स का सैनिटाइजेशन संभालता है**
+- **वेरिएबल्स को सर्वर-स्तर पर कैश** किया जा सकता है
+- **क्वेरीज़ को टूल्स द्वारा स्टैटिक रूप से विश्लेषित किया जा सकता है** (अधिक जानकारी निम्नलिखित अनुभागों में)
-
### स्टेटिक क्वेरीज़ में फ़ील्ड्स को शर्तानुसार कैसे शामिल करें
-You might want to include the `owner` field only on a particular condition.
+आप `owner` फ़ील्ड को केवल एक विशेष शर्त पर शामिल करना चाह सकते हैं।
-For this, you can leverage the `@include(if:...)` directive as follows:
+इसके लिए, आप `@include(if:...)` निर्देश का उपयोग इस प्रकार कर सकते हैं:
```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -185,7 +185,7 @@ const result = await execute(query, {
})
```
-> Note: The opposite directive is `@skip(if: ...)`.
+> नोट: विपरीत निर्देश `@skip(if: ...)` है।
### आप जो चाहते हैं वह मांगें
@@ -193,10 +193,10 @@ GraphQL अपने “Ask for what you want” टैगलाइन के
इस कारण, GraphQL में सभी उपलब्ध फ़ील्ड्स को बिना उन्हें व्यक्तिगत रूप से सूचीबद्ध किए प्राप्त करने का कोई तरीका नहीं है।
-- When querying GraphQL APIs, always think of querying only the fields that will be actually used.
+- GraphQL APIs को query करते समय, हमेशा केवल उन्हीं fields की query करने की सोचें जो वास्तव में use होंगी।
- सुनिश्चित करें कि क्वेरीज़ केवल उतने ही एंटिटीज़ लाएँ जितनी आपको वास्तव में आवश्यकता है। डिफ़ॉल्ट रूप से, क्वेरीज़ एक संग्रह में 100 एंटिटीज़ लाएँगी, जो आमतौर पर उपयोग में लाई जाने वाली मात्रा से अधिक होती है, जैसे कि उपयोगकर्ता को प्रदर्शित करने के लिए। यह न केवल एक क्वेरी में शीर्ष-स्तरीय संग्रहों पर लागू होता है, बल्कि एंटिटीज़ के नेस्टेड संग्रहों पर भी अधिक लागू होता है।
-For example, in the following query:
+उदाहरण के लिए, निम्नलिखित क्वेरी में:
```graphql
query listTokens {
@@ -211,15 +211,15 @@ query listTokens {
}
```
-The response could contain 100 transactions for each of the 100 tokens.
+प्रतिक्रिया में 100 टोकनों में से प्रत्येक के लिए 100 लेन-देन (transaction) हो सकते हैं।
-If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field.
+यदि application को केवल 10 लेन-देन (transaction) की आवश्यकता है, तो क्वेरी को transactions फ़ील्ड पर स्पष्ट रूप से `first: 10` सेट करना चाहिए।
### एक ही क्वेरी का उपयोग करके कई रिकॉर्ड्स का अनुरोध करें
-By default, subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+डिफ़ॉल्ट रूप से, Subgraphs में एक record के लिए singular entity होती है। कई records प्राप्त करने के लिए, plural entities और filter का उपयोग करें: `where: {id_in:[X,Y,Z]}` या `where: {volume_gt:100000}`
-Example of inefficient querying:
+अप्रभावी क्वेरी करने का उदाहरण:
```graphql
query SingleRecord {
@@ -236,7 +236,7 @@ query SingleRecord {
}
```
-Example of optimized querying:
+इष्टतम क्वेरी करने का उदाहरण:
```graphql
query ManyRecords {
@@ -249,7 +249,7 @@ query ManyRecords {
### एकल अनुरोध में कई क्वेरियों को संयोजित करें।
-Your application might require querying multiple types of data as follows:
+आपके application को निम्नलिखित प्रकार के कई डेटा क्वेरी करने की आवश्यकता हो सकती है:
-
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -279,9 +279,9 @@ const [tokens, counters] = Promise.all(
)
```
-While this implementation is totally valid, it will require two round trips with the GraphQL API.
+हालाँकि यह कार्यान्वयन पूरी तरह से मान्य है, इसके लिए GraphQL API के साथ दो राउंड ट्रिप की आवश्यकता होगी।
-Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows:
+सौभाग्य से, एक ही GraphQL अनुरोध में कई क्वेरी भेजना भी मान्य है, जैसा कि नीचे दिया गया है:
```graphql
import { execute } from "your-favorite-graphql-client"
@@ -302,13 +302,13 @@ query GetTokensandCounters {
const { result: { tokens, counters } } = execute(query)
```
-This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**.
+यह तरीका नेटवर्क पर बिताया गया समय कम करके (API के लिए एक राउंड ट्रिप बचाकर) **कुल प्रदर्शन में सुधार करेगा** और एक **अधिक संक्षिप्त कार्यान्वयन** प्रदान करेगा।
### लीवरेज ग्राफक्यूएल फ़्रैगमेंट
-A helpful feature to write GraphQL queries is GraphQL Fragment.
+GraphQL क्वेरी लिखने में सहायक एक सुविधा है GraphQL Fragment।
-Looking at the following query, you will notice that some fields are repeated across multiple Selection-Sets (`{ ... }`):
+निम्नलिखित क्वेरी को देखने पर, आप देखेंगे कि कुछ फ़ील्ड्स कई चयन-सेट्स (`{ ... }`) में दोहराए गए हैं:
```graphql
query {
@@ -328,12 +328,12 @@ query {
}
```
-Such repeated fields (`id`, `active`, `status`) bring many issues:
+ऐसे दोहराए गए फ़ील्ड (`id`, `active`, `status`) कई समस्याएँ लाते हैं:
- बड़ी क्वेरीज़ पढ़ने में कठिन होती हैं।
-- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
+- जब ऐसे टूल्स का उपयोग किया जाता है जो क्वेरी के आधार पर TypeScript टाइप्स उत्पन्न करते हैं (_इस पर अंतिम अनुभाग में और अधिक_), `newDelegate` और `oldDelegate` दो अलग-अलग इनलाइन इंटरफेस के रूप में परिणत होंगे।
-A refactored version of the query would be the following:
+क्वेरी का एक पुनर्गठित संस्करण निम्नलिखित होगा:
```graphql
query {
@@ -357,15 +357,15 @@ fragment DelegateItem on Transcoder {
}
```
-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+GraphQL `fragment` का उपयोग पठनीयता बढ़ाएगा (विशेष रूप से बड़े स्तर पर) और बेहतर TypeScript टाइप्स जेनरेशन का परिणाम देगा।
-When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
+जब टाइप्स जेनरेशन टूल का उपयोग किया जाता है, तो उपरोक्त क्वेरी एक सही `DelegateItemFragment` टाइप उत्पन्न करेगी (_अंतिम "Tools" अनुभाग देखें_)।
### ग्राफकॉल फ्रैगमेंट क्या करें और क्या न करें
### Fragment base must be a type
-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+एक फ़्रैगमेंट गैर-लागू प्रकार पर आधारित नहीं हो सकता, संक्षेप में, **ऐसे प्रकार पर जिसमें फ़ील्ड नहीं होते हैं**:
```graphql
fragment MyFragment on BigInt {
@@ -373,11 +373,11 @@ fragment MyFragment on BigInt {
}
```
-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
+`BigInt` एक **स्केलर** (मूल "plain" टाइप) है जिसे किसी फ़्रैगमेंट के आधार के रूप में उपयोग नहीं किया जा सकता।
#### How to spread a Fragment
-Fragments are defined on specific types and should be used accordingly in queries.
+फ्रैगमेंट विशिष्ट प्रकारों पर परिभाषित किए जाते हैं और उन्हें क्वेरी में उपयुक्त रूप से उपयोग किया जाना चाहिए।
उदाहरण:
@@ -400,19 +400,19 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+`newDelegate` और `oldDelegate` `Transcoder` प्रकार के हैं।
-It is not possible to spread a fragment of type `Vote` here.
+यहाँ `Vote` प्रकार के फ़्रैगमेंट को स्प्रेड करना संभव नहीं है।
-#### Define Fragment as an atomic business unit of data
+#### Fragment को data की एक atomic business unit के रूप में define करें
-GraphQL `Fragment`s must be defined based on their usage.
+GraphQL `Fragment`s को उनके उपयोग के आधार पर परिभाषित किया जाना चाहिए।
-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+अधिकांश उपयोग मामलों के लिए, प्रति प्रकार एक फ़्रैगमेंट परिभाषित करना (दोहराए गए फ़ील्ड उपयोग या टाइप जेनरेशन के मामले में) पर्याप्त होता है।
-Here is a rule of thumb for using fragments:
+फ्रैगमेंट्स के उपयोग के लिए एक सामान्य नियम यह है:
-- When fields of the same type are repeated in a query, group them in a `Fragment`.
+- जब समान प्रकार के फ़ील्ड किसी क्वेरी में दोहराए जाते हैं, तो उन्हें `Fragment` में समूहित करें।
- जब समान लेकिन भिन्न फ़ील्ड्स को दोहराया जाता है, तो कई फ़्रैगमेंट्स बनाएं, उदाहरण के लिए:
```graphql
@@ -436,35 +436,35 @@ fragment VoteWithPoll on Vote {
---
-## The Essential Tools
+## मूलभूत उपकरण
### ग्राफक्यूएल वेब-आधारित खोजकर्ता
-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+क्वेरीज़ को अपने application में चलाकर बार-बार दोहराना कठिन हो सकता है। इसी कारण, अपनी क्वेरीज़ को अपने application में जोड़ने से पहले उनका परीक्षण करने के लिए बिना किसी संकोच के [Graph Explorer](https://thegraph.com/explorer) का उपयोग करें। Graph Explorer आपको एक पूर्व-कॉन्फ़िगर किया हुआ GraphQL प्लेग्राउंड प्रदान करेगा, जहाँ आप अपनी क्वेरीज़ का परीक्षण कर सकते हैं।
-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+यदि आप अपनी क्वेरीज़ को डिबग/परखने के लिए एक अधिक लचीला तरीका ढूंढ रहे हैं, तो अन्य समान वेब-आधारित टूल उपलब्ध हैं जैसे [Altair](https://altairgraphql.dev/) और [GraphiQL](https://graphiql-online.com/graphiql)।
### ग्राफक्यूएल लाइनिंग
-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+उपरोक्त सर्वोत्तम प्रथाओं और वाक्य रचना नियमों का पालन करने के लिए, निम्नलिखित वर्कफ़्लो और IDE टूल्स का उपयोग करना अत्यधिक अनुशंसित है।
**GraphQL ESLint**
-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) बिना किसी अतिरिक्त प्रयास के GraphQL सर्वोत्तम प्रथाओं का पालन करने में आपकी मदद करेगा।
-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) कॉन्फ़िगरेशन सेटअप करने से आवश्यक नियम लागू होंगे, जैसे:
-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
+- `@graphql-eslint/fields-on-correct-type`: क्या कोई फ़ील्ड सही प्रकार पर उपयोग की गई है?
+- `@graphql-eslint/no-unused variables`: क्या दिया गया चर अनुपयोगी रहना चाहिए?
- और अधिक!
-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
+यह आपको प्लेग्राउंड पर क्वेरी का परीक्षण किए या उन्हें प्रोडक्शन में चलाए **बिना ही त्रुटियों को पकड़ने** की अनुमति देगा!
### आईडीई प्लगइन्स
-**VSCode and GraphQL**
+**VSCode और GraphQL**
-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+[GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) आपके विकास वर्कफ़्लो में एक बेहतरीन जोड़ है जिससे आपको यह प्राप्त होता है:
- सिंटैक्स हाइलाइटिंग
- ऑटो-कंप्लीट सुझाव
@@ -472,15 +472,15 @@ The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemNa
- निबंध
- फ्रैगमेंट्स और इनपुट टाइप्स के लिए परिभाषा पर जाएं।
-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+यदि आप `graphql-eslint` का उपयोग कर रहे हैं, तो [ESLint VSCode एक्सटेंशन](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) आपके कोड में त्रुटियों और चेतावनियों को इनलाइन सही तरीके से देखने के लिए आवश्यक है।
-**WebStorm/Intellij and GraphQL**
+**WebStorm/Intellij और GraphQL**
-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+[JS GraphQL प्लगइन](https://plugins.jetbrains.com/plugin/8097-graphql/) आपके GraphQL के साथ काम करने के अनुभव को काफी बेहतर बनाएगा, जिससे आपको निम्नलिखित सुविधाएँ मिलेंगी:
- सिंटैक्स हाइलाइटिंग
- ऑटो-कंप्लीट सुझाव
- स्कीमा के खिलाफ मान्यता
- निबंध
-For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features.
+इस विषय पर अधिक जानकारी के लिए, [WebStorm लेख](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) देखें, जिसमें इस प्लगइन की सभी प्रमुख विशेषताओं को प्रदर्शित किया गया है।
diff --git a/website/src/pages/hi/subgraphs/querying/distributed-systems.mdx b/website/src/pages/hi/subgraphs/querying/distributed-systems.mdx
index 0f530ebacba4..f6f11ab6ae7b 100644
--- a/website/src/pages/hi/subgraphs/querying/distributed-systems.mdx
+++ b/website/src/pages/hi/subgraphs/querying/distributed-systems.mdx
@@ -29,22 +29,22 @@ title: वितरित प्रणाली
## अद्यतन डेटा के लिए मतदान
-The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block.
If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced.
+The Graph `block: { number_gte: $minBlock }` API प्रदान करता है, जो यह सुनिश्चित करता है कि प्रतिक्रिया एक ही ब्लॉक के लिए होगी जो `$minBlock` के बराबर या उससे अधिक होगा। यदि अनुरोध किसी `graph-node` instance पर किया जाता है और न्यूनतम ब्लॉक अभी तक सिंक नहीं हुआ है, तो `graph-node` एक त्रुटि लौटाएगा। यदि `graph-node` ने न्यूनतम ब्लॉक को सिंक कर लिया है, तो यह नवीनतम ब्लॉक के लिए प्रतिक्रिया चलाएगा। यदि अनुरोध Edge & Node Gateway को किया जाता है, तो Gateway उन Indexers को फ़िल्टर कर देगा जिन्होंने अभी तक न्यूनतम ब्लॉक को सिंक नहीं किया है और उस नवीनतम ब्लॉक के लिए अनुरोध करेगा जिसे Indexer ने सिंक किया है।
-We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example:
+हम `number_gte` का उपयोग यह सुनिश्चित करने के लिए कर सकते हैं कि डेटा को लूप में पोल करते समय समय कभी पीछे न जाए। यहाँ एक उदाहरण है:
-```javascript
-/// Updates the protocol.paused variable to the latest
-/// known value in a loop by fetching it using The Graph.
+````javascript
+/// एक लूप में नवीनतम ज्ञात मान को लाने के लिए The Graph का उपयोग करके
+/// protocol.paused वेरिएबल को अपडेट करता है।
async function updateProtocolPaused() {
-  // It's ok to start with minBlock at 0. The query will be served
-  // using the latest block available. Setting minBlock to 0 is the
-  // same as leaving out that argument.
+  // minBlock को 0 से शुरू करना ठीक है। क्वेरी को
+  // नवीनतम उपलब्ध ब्लॉक का उपयोग करके परोसा जाएगा। minBlock को 0 सेट करना
+  // उसी के समान है जैसे इस आर्गुमेंट को छोड़ देना।
  let minBlock = 0
  for (;;) {
-    // Schedule a promise that will be ready once
-    // the next Ethereum block will likely be available.
+ // एक प्रॉमिस शेड्यूल करें जो तभी तैयार होगी जब + // अगला Ethereum ब्लॉक उपलब्ध होने की संभावना होगी। const nextBlock = new Promise((f) => { setTimeout(f, 14000) }) @@ -65,30 +65,31 @@ async function updateProtocolPaused() { const response = await graphql(query, variables) minBlock = response._meta.block.number - // TODO: Do something with the response data here instead of logging it. + // TODO: यहाँ डेटा के साथ कुछ करें, केवल इसे लॉग करने के बजाय। console.log(response.protocol.paused) - // Sleep to wait for the next block + // अगले ब्लॉक की प्रतीक्षा करने के लिए स्लीप करें await nextBlock } } ``` +```` ## संबंधित वस्तुओं का एक सेट लाया जा रहा है एक अन्य उपयोग-मामला एक बड़े सेट को पुनः प्राप्त कर रहा है, या अधिक सामान्यतः, कई अनुरोधों में संबंधित वस्तुओं को पुनः प्राप्त कर रहा है। मतदान के मामले के विपरीत (जहां वांछित स्थिरता समय में आगे बढ़ने के लिए थी), वांछित स्थिरता समय में एक बिंदु के लिए है। -Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. +यहां हम सभी परिणामों को एक ही ब्लॉक पर पिन करने के लिए `block: { hash: $blockHash }` आर्गुमेंट का उपयोग करेंगे। ```javascript -/// Gets a list of domain names from a single block using pagination +/// पृष्ठांकन का उपयोग करके एकल ब्लॉक से डोमेन नामों की सूची प्राप्त करता है async function getDomainNames() { - // Set a cap on the maximum number of items to pull. + // खींचे जाने वाले अधिकतम आइटम की एक सीमा निर्धारित करें। let pages = 5 const perPage = 1000 - // The first query will get the first page of results and also get the block - // hash so that the remainder of the queries are consistent with the first. + // पहली क्वेरी पहले पृष्ठ के परिणाम प्राप्त करेगी और ब्लॉक हैश भी प्राप्त करेगी + // ताकि शेष क्वेरी पहले के अनुरूप हों। const listDomainsQuery = ` query ListDomains($perPage: Int!) 
{ domains(first: $perPage) { @@ -107,9 +108,9 @@ async function getDomainNames() { let blockHash = data._meta.block.hash let query - // Continue fetching additional pages until either we run into the limit of - // 5 pages total (specified above) or we know we have reached the last page - // because the page has fewer entities than a full page. + // अतिरिक्त पृष्ठों को तब तक प्राप्त करना जारी रखें जब तक कि हम या तो 5 पृष्ठों की सीमा तक न पहुँच जाएँ + // (ऊपर निर्दिष्ट) या हमें यह पता चल जाए कि हम अंतिम पृष्ठ तक पहुँच चुके हैं क्योंकि + // पृष्ठ में पूर्ण पृष्ठ की तुलना में कम इकाइयाँ हैं। while (data.domains.length == perPage && --pages) { let lastID = data.domains[data.domains.length - 1].id query = ` @@ -122,7 +123,7 @@ async function getDomainNames() { data = await graphql(query, { perPage, lastID, blockHash }) - // Accumulate domain names into the result + // परिणाम में डोमेन नामों को संचित करें for (domain of data.domains) { result.push(domain.name) } diff --git a/website/src/pages/hi/subgraphs/querying/from-an-application.mdx b/website/src/pages/hi/subgraphs/querying/from-an-application.mdx index 77b510466231..32d14acb5375 100644 --- a/website/src/pages/hi/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/hi/subgraphs/querying/from-an-application.mdx @@ -1,53 +1,54 @@ --- title: एक एप्लिकेशन से क्वेरी करना +sidebarTitle: App से Query करना --- -Learn how to query The Graph from your application. +अपने application से The Graph को क्वेरी करना सीखें। -## Getting GraphQL Endpoints +## GraphQL एंडपॉइंट प्राप्त करना -During the development process, you will receive a GraphQL API endpoint at two different stages: one for testing in Subgraph Studio, and another for making queries to The Graph Network in production. 
+विकास प्रक्रिया के दौरान, आपको दो अलग-अलग चरणों में एक GraphQL API endpoint प्राप्त होगा: एक परीक्षण के लिए सबग्राफ Studio में, और दूसरा उत्पादन में The Graph Network से क्वेरी करने के लिए।
-### Subgraph Studio Endpoint
+### सबग्राफ Studio Endpoint
-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+अपने Subgraph को [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/) पर deploy करने के बाद, आपको एक endpoint मिलेगा जो इस प्रकार दिखेगा:
```
https://api.studio.thegraph.com/query///
```
-> This endpoint is intended for testing purposes **only** and is rate-limited.
+> यह एंडपॉइंट **केवल** परीक्षण उद्देश्यों के लिए है और इसकी अनुरोध सीमा निर्धारित है।
-### The Graph Network Endpoint
+### The Graph Network एंडपॉइंट
-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+अपने Subgraph को नेटवर्क पर publish करने के बाद, आपको एक endpoint मिलेगा जो इस प्रकार दिखेगा:
```
https://gateway.thegraph.com/api//subgraphs/id/
```
-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> यह endpoint नेटवर्क पर सक्रिय उपयोग के लिए बनाया गया है। यह आपको विभिन्न GraphQL client libraries का उपयोग करके Subgraph से query करने और अपनी application को indexed data से भरने की अनुमति देता है।
-## Using Popular GraphQL Clients
+## लोकप्रिय GraphQL क्लाइंट्स का उपयोग
### Graph Client
-The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as:
+The Graph अपना खुद का GraphQL क्लाइंट, `graph-client`, प्रदान करता है, जो अद्वितीय विशेषताओं का समर्थन करता है, जैसे:
-- क्रॉस-चेन सबग्राफ हैंडलिंग: एक ही क्वेरी में कई सबग्राफ से पूछताछ
-- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
-- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
+- Cross-chain Subgraph Handling: एक ही query में multiple Subgraphs से data प्राप्त करना
+- [स्वचालित ब्लॉक ट्रैकिंग](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
+- [स्वचालित Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- पूरी तरह से टाइप किया गया परिणाम
-> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native. As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph.
+> नोट: `graph-client` अन्य लोकप्रिय GraphQL क्लाइंट जैसे Apollo और URQL के साथ एकीकृत है, जो React, Angular, Node.js और React Native जैसे परिवेशों के अनुकूल हैं। परिणामस्वरूप, `graph-client` का उपयोग करने से The Graph के साथ काम करने के लिए आपको एक उन्नत अनुभव मिलेगा।
-### Fetch Data with Graph Client
+### Graph Client के साथ डेटा प्राप्त करें
-Let's look at how to fetch data from a subgraph with `graph-client`:
+आइए देखें कि `graph-client` का उपयोग करके Subgraph से डेटा कैसे प्राप्त किया जाता है:
#### स्टेप 1
-Install The Graph Client CLI in your project:
+अपने प्रोजेक्ट में The Graph Client CLI इंस्टॉल करें:
```sh
yarn add -D @graphprotocol/client-cli
@@ -57,7 +58,7 @@ npm install --save-dev @graphprotocol/client-cli
#### चरण 2
-Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file):
+अपनी क्वेरी को एक `.graphql` फ़ाइल में परिभाषित करें (या अपनी `.js` या `.ts` फ़ाइल में इनलाइन करें):
```graphql
query ExampleQuery {
@@ -86,7 +87,7 @@ query ExampleQuery {
#### चरण 3
-Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example:
+एक कॉन्फ़िगरेशन फ़ाइल (जिसका नाम `.graphclientrc.yml` हो) बनाएं और इसे The Graph द्वारा प्रदान किए गए अपने GraphQL endpoint की ओर इंगित करें, उदाहरण के लिए:
```yaml
# .graphclientrc.yml
@@ -104,17 +105,17 @@ documents:
- ./src/example-query.graphql
```
-#### Step 4
+#### स्टेप 4
-Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code:
+टाइप किए गए और उपयोग के लिए तैयार JavaScript कोड उत्पन्न करने के लिए निम्नलिखित The Graph Client CLI कमांड चलाएँ:
```sh
graphclient build
```
-#### Step 5
+#### स्टेप 5
-Update your `.ts` file to use the generated typed GraphQL documents:
+अपनी `.ts` फ़ाइल को उत्पन्न किए गए टाइप किए गए GraphQL दस्तावेज़ों का उपयोग करने के लिए अपडेट करें:
```tsx
import React, { useEffect } from 'react'
@@ -152,27 +153,27 @@ function App() {
export default App
```
-> **Important
Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +> **महत्वपूर्ण नोट**: `graph-client` अन्य GraphQL क्लाइंट जैसे Apollo client, URQL, या React Query के साथ पूरी तरह से एकीकृत है; आप [आधिकारिक रिपॉजिटरी में उदाहरण देख सकते हैं](https://github.com/graphprotocol/graph-client/tree/main/examples)। हालाँकि, **यदि आप किसी अन्य क्लाइंट का चयन करते हैं, तो ध्यान रखें कि आप Cross-chain Subgraph Handling या Automatic Pagination का उपयोग नहीं कर पाएंगे, जो The Graph को क्वेरी करने के लिए मुख्य विशेषताएँ हैं।** ### Apollo Client -[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android.
+[Apollo client](https://www.apollographql.com/docs/) एक सामान्य GraphQL क्लाइंट है जो फ्रंट-एंड इकोसिस्टम में उपयोग किया जाता है। यह React, Angular, Vue, Ember, iOS और Android के लिए उपलब्ध है। -Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: +हालाँकि यह सबसे भारी क्लाइंट है, इसमें कई विशेषताएँ हैं जो GraphQL के ऊपर उन्नत UI बनाने के लिए उपलब्ध हैं: -- Advanced error handling +- उन्नत त्रुटि प्रबंधन - पृष्ठ पर अंक लगाना -- Data prefetching -- Optimistic UI -- Local state management +- डेटा प्रीफेचिंग +- आशावादी UI +- लोकल स्टेट प्रबंधन -### Fetch Data with Apollo Client +### Apollo Client के साथ डेटा प्राप्त करें -Let's look at how to fetch data from a subgraph with Apollo client: +आइए देखें कि **Apollo Client** का उपयोग करके Subgraph से डेटा कैसे प्राप्त किया जाता है: #### स्टेप 1 -Install `@apollo/client` and `graphql`: +`@apollo/client` और `graphql` को इंस्टॉल करें: ```sh npm install @apollo/client graphql @@ -180,7 +181,7 @@ npm install @apollo/client graphql #### चरण 2 -Query the API with the following code: +API से निम्नलिखित कोड के साथ क्वेरी करें: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -215,7 +216,7 @@ client #### चरण 3 -To use variables, you can pass in a `variables` argument to the query: +आप वेरिएबल्स का उपयोग करने के लिए, क्वेरी में `variables` आर्गुमेंट पास कर सकते हैं: ```javascript const tokensQuery = ` @@ -246,22 +247,22 @@ client }) ``` -### URQL Overview +### URQL अवलोकन -[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: +[URQL](https://formidable.com/open-source/urql/) Node.js, React/Preact, Vue और Svelte वातावरण के भीतर उपलब्ध है, जिसमें कुछ अधिक उन्नत सुविधाएँ शामिल हैं: - - Flexible cache system - एक्स्टेंसिबल डिज़ाइन (इसके शीर्ष पर नई क्षमताओं को जोड़ना आसान) - लाइटवेट बंडल (अपोलो क्लाइंट की तुलना में ~ 5x हल्का) - फ़ाइल अपलोड और ऑफ़लाइन मोड के 
लिए समर्थन -### Fetch data with URQL +### URQL के साथ डेटा प्राप्त करें -Let's look at how to fetch data from a subgraph with URQL: +आइए देखें कि **URQL** का उपयोग करके Subgraph से डेटा कैसे प्राप्त किया जाता है: #### स्टेप 1 -Install `urql` and `graphql`: +`urql` और `graphql` को इंस्टॉल करें: ```sh npm install urql graphql @@ -269,7 +270,7 @@ npm install urql graphql #### चरण 2 -Query the API with the following code: +API से निम्नलिखित कोड के साथ क्वेरी करें: ```javascript import { createClient } from 'urql' diff --git a/website/src/pages/hi/subgraphs/querying/graph-client/README.md b/website/src/pages/hi/subgraphs/querying/graph-client/README.md index 416cadc13c6f..1844a10f1970 100644 --- a/website/src/pages/hi/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/hi/subgraphs/querying/graph-client/README.md @@ -14,25 +14,25 @@ This library is intended to simplify the network aspect of data consumption for > The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! 
-| Status | Feature | Notes | +| स्थिति | Feature | Notes | | :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | 
[Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## शुरू करना You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -48,7 +48,7 @@ npm install --save-dev @graphprotocol/client-cli > The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app! -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +एक कॉन्फ़िगरेशन फ़ाइल (जिसका नाम `.graphclientrc.yml` हो) बनाएं और इसे आपके GraphQL endpointकी ओर इंगित करें, जो The Graph द्वारा प्रदान किए गए हैं, उदाहरण के लिए: ```yml # .graphclientrc.yml @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### उदाहरण You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/hi/subgraphs/querying/graph-client/live.md b/website/src/pages/hi/subgraphs/querying/graph-client/live.md index e6f726cb4352..624e17162567 100644 --- a/website/src/pages/hi/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/hi/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## शुरू करना Start by adding the following configuration to your `.graphclientrc.yml` file: @@ -12,7 +12,7 @@ plugins: defaultInterval: 1000 ``` -## Usage +## उपयोग Set the default update interval you wish to use, and then you can apply the following GraphQL `@directive` over your GraphQL queries: diff --git a/website/src/pages/hi/subgraphs/querying/graphql-api.mdx b/website/src/pages/hi/subgraphs/querying/graphql-api.mdx index ecfc90819e64..a0e9da503a74 100644 --- a/website/src/pages/hi/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/hi/subgraphs/querying/graphql-api.mdx @@ -6,19 +6,19 @@ The Graph में उपयोग किए जाने वाले GraphQL ## GraphQL क्या है? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. 
+[GraphQL](https://graphql.org/learn/) एक API के लिए क्वेरी भाषा है और मौजूदा डेटा के साथ उन क्वेरियों को निष्पादित करने के लिए एक रनटाइम है। The Graph, GraphQL का उपयोग करके Subgraphs से क्वेरी करता है। -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +GraphQL की व्यापक भूमिका को समझने के लिए, [developing](/subgraphs/developing/introduction/) और [creating a Subgraph](/developing/creating-a-subgraph/) की समीक्षा करें। ## GraphQL के साथ क्वेरीज़ -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +आपकी Subgraph schema में `Entities` नामक प्रकारों को परिभाषित किया जाता है। प्रत्येक `Entity` प्रकार के लिए, शीर्ष-स्तरीय `Query` प्रकार पर `entity` और `entities` फ़ील्ड जेनरेट की जाएंगी। -> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. +> ध्यान दें: The Graph का उपयोग करते समय `query` को `graphql` क्वेरी के शीर्ष पर शामिल करने की आवश्यकता नहीं है। ### उदाहरण -Query for a single `Token` entity defined in your schema: +अपने स्कीमा में परिभाषित एकल `Token` एंटिटी के लिए क्वेरी करें: ```graphql { @@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> Note: When querying for a single entity, the `id` field is required, and it must be written as a string. +> नोट: जब किसी एकल entity के लिए क्वेरी की जा रही हो, तो `id` फ़ील्ड आवश्यक है, और इसे एक स्ट्रिंग के रूप में लिखा जाना चाहिए। -Query all `Token` entities: +सभी `Token` entities को क्वेरी करें: ```graphql { @@ -42,12 +42,12 @@ Query all `Token` entities: } ``` -### Sorting +### छँटाई (Sorting) जब आप एक संग्रह के लिए क्वेरी कर रहे हों, तो आप: -- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. +- `orderBy` पैरामीटर का उपयोग किसी विशिष्ट गुण द्वारा सॉर्ट करने के लिए करें। +- `orderDirection` का उपयोग सॉर्ट दिशा निर्दिष्ट करने के लिए करें, आरोही के लिए `asc` या अवरोही के लिए `desc`। #### उदाहरण @@ -62,7 +62,7 @@ Query all `Token` entities: #### नेस्टेड इकाई छँटाई के लिए उदाहरण -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) से, entities को nested entities के आधार पर क्रमबद्ध किया जा सकता है। निम्नलिखित उदाहरण में टोकन उनके मालिक के नाम के अनुसार क्रमबद्ध किए गए हैं: @@ -77,18 +77,18 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> वर्तमान में, आप `@entity` और `@derivedFrom` फ़ील्ड्स पर एक-स्तरीय गहरे `String` या `ID` प्रकारों द्वारा क्रमबद्ध कर सकते हैं। अफसोस, [इंटरफेस द्वारा एक-स्तरीय गहरे entities पर क्रमबद्ध करना](https://github.com/graphprotocol/graph-node/pull/4058) और ऐसे फ़ील्ड्स द्वारा क्रमबद्ध करना जो एरेज़ और नेस्टेड entities हैं, अभी तक समर्थित नहीं है। ### पृष्ठ पर अंक लगाना जब एक संग्रह के लिए क्वेरी की जाती है, तो यह सबसे अच्छा होता है: -- Use the `first` parameter to paginate from the beginning of the collection. - - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. -- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
-- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. +- संग्रह की शुरुआत से पेजिनेट करने के लिए `first` पैरामीटर का उपयोग करें। + - डिफ़ॉल्ट सॉर्ट आदेश `ID` के अनुसार आरोही अल्फ़ान्यूमेरिक क्रम में होता है, **न** कि निर्माण समय के अनुसार। +- `skip` पैरामीटर का उपयोग entities को स्किप करने और पेजिनेट करने के लिए करें। उदाहरण के लिए, `first:100` पहले 100 entities दिखाता है और `first:100, skip:100` अगले 100 entities दिखाता है। +- `skip` मानों का उपयोग queries में करने से बचें क्योंकि ये सामान्यतः खराब प्रदर्शन करते हैं। एक बड़ी संख्या में आइटम प्राप्त करने के लिए, पिछले उदाहरण में दिखाए गए अनुसार किसी गुण के आधार पर entities के माध्यम से पेज करना सबसे अच्छा होता है। -#### Example using `first` +#### `first` का उपयोग करते हुए उदाहरण पहले 10 टोकन पूछें: @@ -101,11 +101,11 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
+संग्रह के मध्य में स्थित entities के समूहों के लिए queries करने के लिए, `skip` पैरामीटर को `first` पैरामीटर के साथ उपयोग किया जा सकता है, ताकि संग्रह की शुरुआत से निर्धारित संख्या में entities को छोड़ दिया जा सके। -#### Example using `first` and `skip` +#### `first` और `skip` का उपयोग करते हुए उदाहरण -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +कलेक्शन की शुरुआत से 10 स्थानों के बाद 10 `Token` entities को queries करें: ```graphql { @@ -116,7 +116,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example using `first` and `id_ge` +#### `first` और `id_ge` का उपयोग करते हुए उदाहरण यदि एक क्लाइंट को बड़ी संख्या में एंटिटीज़ पुनर्प्राप्त करने की आवश्यकता है, तो एट्रिब्यूट पर आधारित क्वेरी बनाना और उस एट्रिब्यूट द्वारा फ़िल्टर करना अधिक प्रभावशाली है। उदाहरण के लिए, एक क्लाइंट इस क्वेरी का उपयोग करके बड़ी संख्या में टोकन पुनर्प्राप्त कर सकता है: @@ -129,16 +129,16 @@ query manyTokens($lastID: String) { } } -The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +पहली बार, यह queries को `lastID = ""` के साथ भेजेगा, और subsequent requests के लिए यह `lastID` को पिछले अनुरोध की आखिरी entity के `id` attribute पर सेट करेगा। यह तरीका increasing `skip` मानों का उपयोग करने की तुलना में काफी बेहतर प्रदर्शन करेगा। ### छनन -- You can use the `where` parameter in your queries to filter for different properties. -- You can filter on multiple values within the `where` parameter.
+- आप अपनी क्वेरी में विभिन्न गुणों को फ़िल्टर करने के लिए `where` पैरामीटर का उपयोग कर सकते हैं। +- आप `where` पैरामीटर के भीतर कई मानों पर फ़िल्टर कर सकते हैं। -#### Example using `where` +#### `where` का उपयोग करते हुए उदाहरण -Query challenges with `failed` outcome: +`failed` परिणाम वाली चुनौतियों के लिए क्वेरी करें: ```graphql { @@ -152,7 +152,7 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +आप मूल्य तुलना के लिए `_gt`, `_lte` जैसे प्रत्ययों का उपयोग कर सकते हैं: #### श्रेणी फ़िल्टरिंग के लिए उदाहरण @@ -168,9 +168,9 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### ब्लॉक फ़िल्टरिंग के लिए उदाहरण -You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. +आप उन entities को भी फ़िल्टर कर सकते हैं जिन्हें किसी निर्दिष्ट ब्लॉक में या उसके बाद अपडेट किया गया था, `_change_block(number_gte: Int)` के साथ। -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +यह उपयोगी हो सकता है यदि आप केवल उन entities को लाना चाहते हैं जो बदल गई हैं, उदाहरण के लिए, पिछली बार जब आपने पोल किया था तब से। या वैकल्पिक रूप से, यह जांचने या डिबग करने के लिए उपयोगी हो सकता है कि आपकी Subgraph में entities कैसे बदल रही हैं (यदि इसे एक ब्लॉक फ़िल्टर के साथ जोड़ा जाए, तो आप केवल उन्हीं entities को अलग कर सकते हैं जो एक विशिष्ट ब्लॉक में बदली हैं)। ```graphql { @@ -184,7 +184,7 @@ This can be useful if you are looking to fetch only entities which have changed, #### नेस्टेड इकाई फ़िल्टरिंग के लिए उदाहरण -Filtering on the basis of nested entities is possible in the fields with the `_`
+नेस्टेड इकाइयों के आधार पर फ़िल्टरिंग उन फ़ील्ड्स में संभव है जिनके अंत में `_` प्रत्यय होता है। यह उपयोगी हो सकता है यदि आप केवल उन संस्थाओं को लाना चाहते हैं जिनके चाइल्ड-स्तरीय निकाय प्रदान की गई शर्तों को पूरा करते हैं। @@ -202,11 +202,11 @@ Filtering on the basis of nested entities is possible in the fields with the `_` #### लॉजिकल ऑपरेटर्स -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria. +ग्राफ-नोड [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) से, आप एक ही `where` आर्गुमेंट में कई पैरामीटर्स को समूहित कर सकते हैं और `and` या `or` ऑपरेटर्स का उपयोग करके एक से अधिक मानदंडों के आधार पर परिणामों को फ़िल्टर कर सकते हैं। -##### `AND` Operator +##### `AND` ऑपरेटर -The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +निम्नलिखित उदाहरण उन चुनौतियों को फ़िल्टर करता है जिनका `outcome` `succeeded` है और जिनका `number` `100` या उससे अधिक है। ```graphql { @@ -220,7 +220,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num } ``` -> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. +> **सिंटैक्टिक शुगर**: आप `and` ऑपरेटर को हटाकर और कॉमा से अलग किया गया एक उप-व्यंजक (sub-expression) पास करके उपरोक्त query को सरल बना सकते हैं। > > ```graphql > { @@ -234,9 +234,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num > } > ``` -##### `OR` Operator +##### `OR` ऑपरेटर -The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+निम्नलिखित उदाहरण उन चुनौतियों को फ़िल्टर करता है जिनका `outcome` `succeeded` है या जिनका `number` `100` या उससे अधिक है। ```graphql { @@ -250,7 +250,7 @@ The following example filters for challenges with `outcome` `succeeded` or `numb } ``` -> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries. +> **नोट**: queries बनाते समय, `or` ऑपरेटर के उपयोग से होने वाले प्रदर्शन प्रभावों पर विचार करना महत्वपूर्ण है। हालांकि `or` खोज परिणामों को व्यापक बनाने के लिए एक उपयोगी उपकरण हो सकता है, लेकिन इसके कुछ महत्वपूर्ण लागतें भी होती हैं। `or` के साथ मुख्य समस्या यह है कि यह queries को धीमा कर सकता है। इसका कारण यह है कि `or` के उपयोग से डेटाबेस को कई इंडेक्स स्कैन करने पड़ते हैं, जो एक समय-सापेक्ष प्रक्रिया हो सकती है। इन समस्याओं से बचने के लिए, यह अनुशंसा की जाती है कि डेवलपर्स or के बजाय and ऑपरेटर का उपयोग करें जब भी संभव हो। यह अधिक सटीक फ़िल्टरिंग की अनुमति देता है और तेज़, अधिक सटीक queries प्रदान कर सकता है। #### सभी फ़िल्टर @@ -279,19 +279,19 @@ _not_ends_with _not_ends_with_nocase ``` -> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types. 
+> कुछ प्रत्यय केवल विशिष्ट प्रकारों के लिए समर्थित होते हैं। उदाहरण के लिए, `Boolean` केवल `_not`, `_in`, और `_not_in` का समर्थन करता है, लेकिन `_` केवल ऑब्जेक्ट और इंटरफेस प्रकारों के लिए उपलब्ध है। -In addition, the following global filters are available as part of `where` argument: +इसके अलावा, `where` आर्ग्यूमेंट के हिस्से के रूप में निम्नलिखित वैश्विक फ़िल्टर उपलब्ध हैं: ```graphql _change_block(number_gte: Int) ``` -### Time-travel queries +### समय-यात्रा क्वेरी -You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +आप न केवल नवीनतम ब्लॉक के लिए, जो डिफ़ॉल्ट होता है, बल्कि अतीत के किसी भी मनमाने ब्लॉक के लिए भी अपनी entities की स्थिति को queries कर सकते हैं। जिस ब्लॉक पर queries होनी चाहिए, उसे या तो उसके ब्लॉक नंबर या उसके ब्लॉक हैश द्वारा निर्दिष्ट किया जा सकता है, इसके लिए queries के शीर्ष स्तर के फ़ील्ड्स में `block` आर्ग्यूमेंट शामिल किया जाता है। -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+ऐसे queries का परिणाम समय के साथ नहीं बदलेगा, यानी किसी निश्चित पिछले ब्लॉक पर queries करने से हमेशा वही परिणाम मिलेगा, चाहे इसे कभी भी निष्पादित किया जाए। इसका एकमात्र अपवाद यह है कि यदि आप किसी ऐसे ब्लॉक पर queries करते हैं जो chain के हेड के बहुत करीब है, तो परिणाम बदल सकता है यदि वह ब्लॉक मुख्य chain पर **नहीं** होता है और chain का पुनर्गठन हो जाता है। एक बार जब किसी ब्लॉक को अंतिम (final) माना जा सकता है, तो queries का परिणाम नहीं बदलेगा। > Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. @@ -309,7 +309,7 @@ The result of such a query will not change over time, i.e., querying at a certai } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +यह queries `Challenge` entities और उनके संबद्ध `Application` entities को लौटाएगी, जैसा कि वे ब्लॉक संख्या 8,000,000 के प्रोसेस होने के ठीक बाद मौजूद थे। #### उदाहरण @@ -325,26 +325,26 @@ This query will return `Challenge` entities, and their associated `Application`
+यह queries `Challenge` entities और उनसे संबंधित `Application` entities को वापस करेगी, जैसा कि वे दिए गए हैश वाले ब्लॉक को प्रोसेस करने के तुरंत बाद मौजूद थीं। ### पूर्ण पाठ खोज प्रश्न -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields एक अभिव्यक्तिपूर्ण टेक्स्ट खोज API प्रदान करते हैं जिसे Subgraph schema में जोड़ा जा सकता है और अनुकूलित किया जा सकता है। Fulltext search को अपने Subgraph में जोड़ने के लिए [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) देखें। -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +फ़ुलटेक्स्ट सर्च क्वेरीज़ में एक आवश्यक फ़ील्ड होता है, `text`, जिसमें सर्च शब्द प्रदान किए जाते हैं। इस `text` सर्च फ़ील्ड में उपयोग करने के लिए कई विशेष फ़ुलटेक्स्ट ऑपरेटर उपलब्ध हैं। पूर्ण पाठ खोज ऑपरेटर: | प्रतीक | ऑपरेटर | Description | | --- | --- | --- | | `&` | `And` | सभी प्रदान किए गए शब्दों को शामिल करने वाली संस्थाओं के लिए एक से अधिक खोज शब्दों को फ़िल्टर में संयोजित करने के लिए | -| | | `Or` | या ऑपरेटर द्वारा अलग किए गए एकाधिक खोज शब्दों वाली क्वेरी सभी संस्थाओं को प्रदान की गई शर्तों में से किसी से मेल के साथ वापस कर देगी | -| `<->` | `Follow by` | दो शब्दों के बीच की दूरी निर्दिष्ट करें। | -| `:*` | `Prefix` | उन शब्दों को खोजने के लिए उपसर्ग खोज शब्द का उपयोग करें जिनके उपसर्ग मेल खाते हैं (2 वर्ण आवश्यक हैं।) | +| | | `Or` | या ऑपरेटर द्वारा अलग किए गए एकाधिक खोज शब्दों वाली क्वेरी सभी संस्थाओं को प्रदान की गई शर्तों में से किसी से मेल के साथ वापस कर देगी | +| `<->` | `Follow by` | दो शब्दों के बीच की दूरी निर्दिष्ट करें। | +| `:*` | `Prefix` | उन शब्दों को खोजने के लिए उपसर्ग खोज शब्द का उपयोग करें जिनके उपसर्ग मेल खाते हैं (2 वर्ण आवश्यक हैं।) | #### उदाहरण -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. +`or` ऑपरेटर का उपयोग करके, यह क्वेरी उन ब्लॉग एंटिटीज़ को फ़िल्टर करेगी जिनके पूर्ण-पाठ (fulltext) फ़ील्ड में "anarchism" या "crumpet" में से किसी एक के विभिन्न रूप शामिल हैं। ```graphql { @@ -357,7 +357,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +`follow by` ऑपरेटर पूर्ण-पाठ दस्तावेज़ों में विशिष्ट दूरी पर स्थित शब्दों को निर्दिष्ट करता है। निम्नलिखित क्वेरी उन सभी ब्लॉगों को लौटाएगी जिनमें "decentralize" के विभिन्न रूप "philosophy" के बाद आते हैं। ```graphql { @@ -370,7 +370,7 @@ The `follow by` operator specifies a words a specific distance apart in the full } ``` -Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music".
+अधिक जटिल फ़िल्टर बनाने के लिए फ़ुलटेक्स्ट ऑपरेटरों को मिलाएं। इस उदाहरण क्वेरी में prefix खोज ऑपरेटर को follow by के साथ मिलाकर उन सभी ब्लॉग entities का मिलान किया जाएगा जिनमें "lou" से शुरू होने वाले शब्दों के बाद "music" आता है। ```graphql { @@ -391,11 +391,11 @@ Graph Node अपने द्वारा प्राप्त GraphQL क् आपके डेटा स्रोतों का स्कीमा, अर्थात् उपलब्ध प्रश्न करने के लिए संस्थाओं की प्रकार, मान और उनके बीच के संबंध, GraphQL Interface Definition Language (IDL)(https://facebook.github.io/graphql/draft/#sec-Type-System) के माध्यम से परिभाषित किए गए हैं। -GraphQL स्कीमा आम तौर पर queries, subscriptions और mutations के लिए रूट प्रकार परिभाषित करते हैं। The Graph केवल queries का समर्थन करता है। आपके सबग्राफ के लिए रूट Query प्रकार स्वचालित रूप से उस GraphQL स्कीमा से उत्पन्न होता है जो आपके सबग्राफ manifest(/developing/creating-a-subgraph/#components-of-a-subgraph) में शामिल होता है। +GraphQL स्कीमाएँ आमतौर पर queries, subscriptions और mutations के लिए रूट टाइप्स को परिभाषित करती हैं। The Graph केवल queries को सपोर्ट करता है। आपके Subgraph के लिए रूट Query टाइप अपने आप उत्पन्न हो जाता है, जो कि आपके [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph) में शामिल GraphQL स्कीमा से आता है। > ध्यान दें: हमारा एपीआई म्यूटेशन को उजागर नहीं करता है क्योंकि डेवलपर्स से उम्मीद की जाती है कि वे अपने एप्लिकेशन से अंतर्निहित ब्लॉकचेन के खिलाफ सीधे लेन-देन(transaction) जारी करेंगे। -### Entities +### इकाइयां आपके स्कीमा में जिन भी GraphQL प्रकारों में @entity निर्देश होते हैं, उन्हें संस्थाएँ (entities) माना जाएगा और उनमें एक ID फ़ील्ड होना चाहिए। @@ -403,7 +403,7 @@ GraphQL स्कीमा आम तौर पर queries, subscriptions और ### सबग्राफ मेटाडेटा -सभी सबग्राफमें एक स्वचालित रूप से जनरेट किया गया _Meta_ ऑब्जेक्ट होता है, जो Subgraph मेटाडेटा तक पहुँच प्रदान करता है। इसे इस प्रकार क्वेरी किया जा सकता है: +सभी Subgraph में एक स्वचालित रूप से उत्पन्न `_Meta_` ऑब्जेक्ट होता है, जो Subgraph मेटाडाटा तक पहुंच प्रदान करता है। इसे निम्नलिखित तरीके से क्वेरी किया जा सकता है: ```graphQL { @@
-419,7 +419,7 @@ GraphQL स्कीमा आम तौर पर queries, subscriptions और } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +यदि कोई ब्लॉक प्रदान किया जाता है, तो मेटाडेटा उस ब्लॉक के अनुसार होगा, यदि नहीं, तो नवीनतम इंडेक्स किया गया ब्लॉक उपयोग किया जाएगा। यदि प्रदान किया जाता है, तो ब्लॉक को Subgraph के प्रारंभिक ब्लॉक के बाद और सबसे हाल ही में इंडेक्स किए गए ब्लॉक के बराबर या उससे कम होना चाहिए। deployment एक विशिष्ट ID है, जो subgraph.yaml फ़ाइल के IPFS CID के अनुरूप है। @@ -427,6 +427,6 @@ block नवीनतम ब्लॉक के बारे में जान - हैश: ब्लॉक का हैश - नंबर: ब्लॉक नंबर -- टाइमस्टैम्प: ब्लॉक का टाइमस्टैम्प, यदि उपलब्ध हो (यह वर्तमान में केवल ईवीएम नेटवर्क को इंडेक्स करने वाले सबग्राफ के लिए उपलब्ध है) +- टाइमस्टैम्प: यदि उपलब्ध हो, तो ब्लॉक का टाइमस्टैम्प (यह वर्तमान में केवल EVM नेटवर्क को इंडेक्स करने वाले Subgraphs के लिए उपलब्ध है) -hasIndexingErrors एक बूलियन है जो यह पहचानता है कि क्या सबग्राफ ने किसी पिछले ब्लॉक पर इंडेक्सिंग त्रुटियों का सामना किया था। +`hasIndexingErrors` एक boolean है जो यह पहचानता है कि Subgraph को किसी पिछले block पर Indexing errors का सामना करना पड़ा था। diff --git a/website/src/pages/hi/subgraphs/querying/introduction.mdx b/website/src/pages/hi/subgraphs/querying/introduction.mdx index 2b9f3f02ff49..f18dd5c441ad 100644 --- a/website/src/pages/hi/subgraphs/querying/introduction.mdx +++ b/website/src/pages/hi/subgraphs/querying/introduction.mdx @@ -3,30 +3,31 @@ title: ग्राफ़ को क्वेरी करना sidebarTitle: Introduction --- -To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer). 
+तुरंत क्वेरी करना शुरू करने के लिए, [The Graph Explorer](https://thegraph.com/explorer) पर जाएं। -## अवलोकन +## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +जब कोई Subgraph **The Graph Network** पर publish किया जाता है, तो आप **Graph Explorer** में उसके Subgraph details page पर जा सकते हैं और **"Query"** टैब का उपयोग करके प्रत्येक Subgraph के लिए deployed **GraphQL API** को explore कर सकते हैं। ## विशिष्टताएँ -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +The Graph Network पर प्रकाशित प्रत्येक Subgraph का **Graph Explorer** में एक unique query URL होता है, जिससे आप सीधे queries कर सकते हैं। इसे खोजने के लिए, **Subgraph details page** पर जाएं और शीर्ष दाएँ कोने में **"Query"** बटन पर क्लिक करें। -![Query Subgraph Button](/img/query-button-screenshot.png) +![Query सबग्राफ बटन](/img/query-button-screenshot.png) -![Query Subgraph URL](/img/query-url-screenshot.png) +![Query सबग्राफ URL](/img/query-url-screenshot.png) -You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/). +आप देखेंगे कि इस क्वेरी URL के लिए एक अद्वितीय API कुंजी का उपयोग करना आवश्यक है। आप अपनी API कुंजियों को [सबग्राफ Studio](https://thegraph.com/studio) में "API Keys" अनुभाग के अंतर्गत बना और प्रबंधित कर सकते हैं। सबग्राफ Studio का उपयोग करने के तरीके के बारे में अधिक जानें [यहाँ](/deploying/subgraph-studio/)। -Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month.
Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). +सबग्राफ Studio उपयोगकर्ता एक निःशुल्क योजना से शुरू करते हैं, जो उन्हें प्रति माह 100,000 क्वेरी करने की अनुमति देती है। अतिरिक्त क्वेरी Growth Plan पर उपलब्ध हैं, जो अतिरिक्त क्वेरी के लिए उपयोग-आधारित मूल्य निर्धारण प्रदान करता है, जिसे क्रेडिट कार्ड या Arbitrum पर GRT के माध्यम से भुगतान किया जा सकता है। आप बिलिंग के बारे में अधिक जान सकते हैं [यहाँ](/subgraphs/billing/)। -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Subgraph की entities को query करने के लिए पूरी जानकारी के लिए **Query API** देखें:\ > [Query API](/subgraphs/querying/graphql-api/) > -> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. +> Note: यदि आपको Graph Explorer URL पर GET request के साथ 405 errors मिलती हैं, तो कृपया इसके बजाय POST request पर switch करें। ### Additional Resources -- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/). -- To query from an application, click [here](/subgraphs/querying/from-an-application/). -- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main).
+- [GraphQL क्वेरी करने के सर्वोत्तम अभ्यास](/subgraphs/querying/best-practices/)। +- application से क्वेरी करने के लिए, [यहाँ](/subgraphs/querying/from-an-application/) क्लिक करें। +- [querying examples](https://github.com/graphprotocol/query-examples/tree/main) देखें। diff --git a/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx index 4f36f067d89d..257bce21d38c 100644 --- a/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx @@ -1,34 +1,34 @@ --- -title: अपनी एपीआई कुंजियों का प्रबंधन +title: API Keys को प्रबंधित करना --- -## अवलोकन +## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +Subgraphs को query करने के लिए API keys आवश्यक होते हैं। ये यह सुनिश्चित करते हैं कि application services के बीच कनेक्शन वैध और अधिकृत हैं, साथ ही एंड यूज़र और डिवाइस की पहचान को प्रमाणित करते हैं। -### Create and Manage API Keys +### API Keys बनाएं और प्रबंधित करें -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +[Subgraph Studio](https://thegraph.com/studio/) पर जाएं और **API Keys** टैब पर क्लिक करें ताकि आप अपने विशेष Subgraphs के लिए API keys बना और प्रबंधित कर सकें। -The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+"API keys" तालिका मौजूदा API keys को सूचीबद्ध करती है और आपको उन्हें प्रबंधित या हटाने की अनुमति देती है। प्रत्येक कुंजी के लिए, आप इसकी स्थिति, वर्तमान अवधि के लिए लागत, वर्तमान अवधि के लिए खर्च सीमा और कुल क्वेरी संख्या देख सकते हैं। -You can click the "three dots" menu to the right of a given API key to: +आप दिए गए API key के दाईं ओर स्थित "तीन बिंदु" मेनू पर क्लिक करके निम्नलिखित कर सकते हैं: - Rename API key - Regenerate API key - Delete API key -- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +- Manage spending limit: यह USD में दी गई API key के लिए एक optional monthly spending limit है। यह limit per billing period (calendar month) के लिए है। -### API Key Details +### API Keys विवरण -You can click on an individual API key to view the Details page: +Details page देखने के लिए आप individual API key पर click कर सकते हैं: -1. Under the **Overview** section, you can: +1. **अवलोकन** अनुभाग के अंतर्गत, आप: - अपना कुंजी नाम संपादित करें - एपीआई कुंजियों को पुन: उत्पन्न करें - आंकड़ों के साथ एपीआई कुंजी का वर्तमान उपयोग देखें: - प्रश्नों की संख्या - जीआरटी की राशि खर्च की गई -2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: +2.
नीचे **Security** अनुभाग में, आप अपनी पसंद के अनुसार सुरक्षा सेटिंग्स को सक्रिय कर सकते हैं। विशेष रूप से, आप: - अपनी API कुंजी का उपयोग करने के लिए प्राधिकृत डोमेन नाम देखें और प्रबंधित करें - - सबग्राफ असाइन करें जिन्हें आपकी एपीआई कुंजी से पूछा जा सकता है + - अपने API key के साथ जिन Subgraphs को query किया जा सकता है, उन्हें असाइन करें। diff --git a/website/src/pages/hi/subgraphs/querying/python.mdx b/website/src/pages/hi/subgraphs/querying/python.mdx index 22e9b71da321..687a1a693024 100644 --- a/website/src/pages/hi/subgraphs/querying/python.mdx +++ b/website/src/pages/hi/subgraphs/querying/python.mdx @@ -3,9 +3,9 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds एक सहज Python लाइब्रेरी है जो Subgraph को क्वेरी करने के लिए बनाई गई है, जिसे [Playgrounds](https://playgrounds.network/) द्वारा विकसित किया गया है। यह आपको सीधे Python डेटा वातावरण से Subgraph डेटा को कनेक्ट करने की अनुमति देता है, जिससे आप [pandas](https://pandas.pydata.org/) जैसी लाइब्रेरी का उपयोग करके डेटा विश्लेषण कर सकते हैं! -Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. +Subgrounds GraphQL queries के निर्माण के लिए एक सरल Pythonic API प्रदान करता है, pagination जैसे कठिन workflows को स्वचालित करता है, और नियंत्रित schema परिवर्तनों के माध्यम से उन्नत users को सशक्त बनाता है। ## शुरू करना @@ -17,24 +17,25 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query.
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +एक बार इंस्टॉल करने के बाद, आप नीचे दिए गए क्वेरी के साथ subgrounds का परीक्षण कर सकते हैं। नीचे दिया गया उदाहरण Aave v2 प्रोटोकॉल के लिए एक Subgraph प्राप्त करता है और TVL (Total Value Locked) के आधार पर शीर्ष 5 बाजारों को क्रमबद्ध करता है, उनके नाम और उनका TVL (USD में) चुनता है और डेटा को एक pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) के रूप में लौटाता है। ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Subgraph लोड करें aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# Construct the query +# क्वेरी बनाएँ latest_markets = aave_v2.Query.markets( orderBy=aave_v2.Market.totalValueLockedUSD, orderDirection='desc', first=5, ) -# Return query to a dataframe + +# क्वेरी को DataFrame में बदलें sg.query_df([ latest_markets.name, latest_markets.totalValueLockedUSD, @@ -45,10 +46,10 @@ sg.query_df([ Subgrounds is built and maintained by the [Playgrounds](https://playgrounds.network/) team and can be accessed on the [Playgrounds docs](https://docs.playgrounds.network/subgrounds). -Since subgrounds has a large feature set to explore, here are some helpful starting places: +चूंकि subgrounds में तलाशने के लिए एक बड़ी सुविधा मौजूद है, इसलिए यहां कुछ उपयोगी शुरुआती स्थान दिए गए हैं: - [Getting Started with Querying](https://docs.playgrounds.network/subgrounds/getting_started/basics/) - - A good first step for how to build queries with subgrounds. 
+ - Subgrounds के साथ queries कैसे बनाएं, इसके लिए एक अच्छा पहला कदम। - [Building Synthetic Fields](https://docs.playgrounds.network/subgrounds/getting_started/synthetic_fields/) - A gentle introduction to defining synthetic fields that transform data defined from the schema. - [Concurrent Queries](https://docs.playgrounds.network/subgrounds/getting_started/async/) diff --git a/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. 
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. 
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/hi/subgraphs/quick-start.mdx b/website/src/pages/hi/subgraphs/quick-start.mdx index 719252575cc2..cbf3550a3170 100644 --- a/website/src/pages/hi/subgraphs/quick-start.mdx +++ b/website/src/pages/hi/subgraphs/quick-start.mdx @@ -1,25 +1,25 @@ --- -title: जल्दी शुरू +title: Quick Start --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +The Graph पर आसानी से एक [सबग्राफ](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) को बनाना, प्रकाशित करना और क्वेरी करना सीखें। -## Prerequisites +## पूर्वावश्यकताएँ - एक क्रिप्टो वॉलेट -- A smart contract address on a [supported network](/supported-networks/) -- [Node.js](https://nodejs.org/) installed -- A package manager of your choice (`npm`, `yarn` or `pnpm`) +- किसी [supported network](/supported-networks/) पर एक स्मार्ट contract पता +- [Node.js](https://nodejs.org/) इंस्टॉल किया गया +- आपकी पसंद का एक पैकेज मैनेजर (`npm`, `yarn` या `pnpm`) -## How to Build a Subgraph +## सबग्राफ कैसे बनाएं -### 1. Create a subgraph in Subgraph Studio +### 1.
सबग्राफ Studio में एक सबग्राफ बनाएँ - [Subgraph Studio](https://thegraph.com/studio/) पर जाएँ और अपने वॉलेट को कनेक्ट करें। -Subgraph Studio आपको सबग्राफ़ बनाने, प्रबंधित करने, तैनात करने और प्रकाशित करने की सुविधा देता है, साथ ही API कुंजी बनाने और प्रबंधित करने की भी अनुमति देता है। +सबग्राफ Studio आपको Subgraphs बनाने, प्रबंधित करने, तैनात करने और प्रकाशित करने की सुविधा देता है, साथ ही API कुंजी बनाने और प्रबंधित करने की सुविधा भी प्रदान करता है। -"एक सबग्राफ बनाएं" पर क्लिक करें। सबग्राफ का नाम टाइटल केस में रखनाrecommended है: "सबग्राफ नाम चेन नाम"। +"Create a सबग्राफ" पर क्लिक करें। यह अनुशंसा की जाती है कि सबग्राफ का नाम टाइटल केस में रखा जाए: "सबग्राफ Name Chain Name"। ### 2. ग्राफ़ सीएलआई स्थापित करें @@ -37,56 +37,56 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. अपना Subgraph इनिशियलाइज़ करें +### 3. अपने सबग्राफ को प्रारंभ करें -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> आप अपने विशिष्ट Subgraph के लिए कमांड [Subgraph Studio](https://thegraph.com/studio/) के Subgraph पेज पर पा सकते हैं। -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +`graph init` कमांड स्वचालित रूप से आपके contract की घटनाओं के आधार पर एक सबग्राफ का खाका तैयार करेगा। -निम्नलिखित आदेश एक मौजूदा अनुबंध से आपके Subgraph को प्रारंभ करता है: +निम्नलिखित कमांड एक मौजूदा contract से आपका सबग्राफ प्रारंभ करता है: ```sh graph init ``` -If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
+यदि आपका contract उस ब्लॉकस्कैनर पर वेरीफाई किया गया है जहाँ यह डिप्लॉय किया गया है (जैसे [Etherscan](https://etherscan.io/)), तो ABI अपने आप CLI में क्रिएट हो जाएगा। -जब आप अपने subgraph को प्रारंभ करते हैं, CLI आपसे निम्नलिखित जानकारी मांगेगा: +जब आप अपने सबग्राफ को प्रारंभ करते हैं, तो CLI आपसे निम्नलिखित जानकारी मांगेगा: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. -- **Contract address**: Locate the smart contract address you’d like to query data from. -- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. -- **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. -- **Add another contract** (optional): You can add another contract. 
+- **प्रोटोकॉल**: वह प्रोटोकॉल चुनें जिससे आपका सबग्राफ डेटा को indexing करेगा। +- **सबग्राफ slug**: अपने सबग्राफ के लिए एक नाम बनाएं। आपका सबग्राफ slug आपके सबग्राफ के लिए एक पहचानकर्ता है। +- **निर्देशिका**: अपना सबग्राफ बनाने के लिए एक निर्देशिका चुनें। +- **Ethereum नेटवर्क** (वैकल्पिक): आपको यह निर्दिष्ट करने की आवश्यकता हो सकती है कि आपका Subgraph किस EVM-संगत नेटवर्क से डेटा को इंडेक्स करेगा। +- **contract एड्रेस**: उस स्मार्ट contract एड्रेस को खोजें जिससे आप डेटा क्वेरी करना चाहते हैं। +- **ABI**: यदि ABI स्वतः नहीं भरा जाता है, तो आपको इसे JSON फ़ाइल के रूप में मैन्युअल रूप से इनपुट करना होगा। +- **Start Block**: आपको स्टार्ट ब्लॉक इनपुट करना चाहिए ताकि ब्लॉकचेन डेटा की सबग्राफ indexing को ऑप्टिमाइज़ किया जा सके। स्टार्ट ब्लॉक को खोजने के लिए उस ब्लॉक को ढूंढें जहां आपका contract डिप्लॉय किया गया था। +- **contract का नाम**: अपने contract का नाम दर्ज करें। +- **contract इवेंट्स को entities के रूप में इंडेक्स करें**: इसे true पर सेट करने की सलाह दी जाती है, क्योंकि यह हर उत्सर्जित इवेंट के लिए स्वचालित रूप से आपके सबग्राफ में मैपिंग जोड़ देगा। +- **एक और contract जोड़ें** (वैकल्पिक): आप एक और contract जोड़ सकते हैं। -अपने सबग्राफ को इनिशियलाइज़ करते समय क्या अपेक्षा की जाए, इसके उदाहरण के लिए निम्न स्क्रीनशॉट देखें: +अपना सबग्राफ इनिशियलाइज़ करते समय क्या अपेक्षा करें, इसके उदाहरण के लिए निम्नलिखित स्क्रीनशॉट देखें: -![Subgraph command](/img/CLI-Example.png) +![सबग्राफ कमांड](/img/CLI-Example.png) -### 4.
Edit your subgraph +### 4. अपना सबग्राफ संपादित करें -पिछले चरण में `init` कमांड एक स्कैफोल्ड Subgraph बनाता है जिसे आप अपने Subgraph को बनाने के लिए प्रारंभिक बिंदु के रूप में उपयोग कर सकते हैं। +`init` कमांड पिछले चरण में एक प्रारंभिक सबग्राफ बनाता है जिसे आप अपने सबग्राफ को बनाने के लिए एक शुरुआती बिंदु के रूप में उपयोग कर सकते हैं। -जब आप Subgraph में बदलाव करते हैं, तो आप मुख्य रूप से तीन फाइलों के साथ काम करेंगे: +सबग्राफ में परिवर्तन करते समय, आप मुख्य रूप से तीन फ़ाइलों के साथ काम करेंगे: -- Manifest (subgraph.yaml) - मेनिफेस्ट परिभाषित करता है कि आपका Subgraph किस डेटा सोर्स को अनुक्रमित करेगा -- Schema (schema.graphql) - ग्राफक्यूएल स्कीमा परिभाषित करता है कि आप Subgraph से कौन सा डेटा प्राप्त करना चाहते हैं +- मैनिफेस्ट (`subgraph.yaml`) - यह निर्धारित करता है कि आपका सबग्राफ किन डेटा स्रोतों को इंडेक्स करेगा। +- Schema (`schema.graphql`) - यह परिभाषित करता है कि आप सबग्राफ से कौन सा डेटा प्राप्त करना चाहते हैं। - असेंबलीस्क्रिप्ट मैपिंग (mapping.ts) - यह वह कोड है जो स्कीमा में परिभाषित इकाई के लिए आपके डेटा सोर्स से डेटा का अनुवाद करता है। -अपने उपग्राफ को लिखने के लिए विस्तृत विवरण के लिए, [सबग्राफ बनाना](/developing/creating-a-subgraph/) देखें। +आपके सबग्राफ को लिखने के विस्तृत विवरण के लिए, [Creating a सबग्राफ](/developing/creating-a-subgraph/) देखें। -### 5. अपने Subgraph का परीक्षण करें +### 5. अपना सबग्राफ डिप्लॉय करें -> Remember, deploying is not the same as publishing. +> याद रखें, तैनाती करना प्रकाशन के समान नहीं है। -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+जब आप किसी सबग्राफ को तैनात (deploy) करते हैं, तो आप इसे [सबग्राफ Studio](https://thegraph.com/studio/) पर अपलोड करते हैं, जहाँ आप इसका परीक्षण, स्टेजिंग और समीक्षा कर सकते हैं। तैनात किए गए सबग्राफ का Indexing [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/) द्वारा किया जाता है, जो Edge & Node द्वारा संचालित एक एकल Indexer है, न कि The Graph Network में मौजूद कई विकेंद्रीकृत Indexers द्वारा। एक तैनात (deployed) सबग्राफ का उपयोग निःशुल्क है, यह दर-सीमित (rate-limited) होता है, सार्वजनिक रूप से दृश्य (visible) नहीं होता, और इसे मुख्य रूप से विकास (development), स्टेजिंग और परीक्षण (testing) उद्देश्यों के लिए डिज़ाइन किया गया है। -एक बार आपका सबग्राफ लिखे जाने के बाद, निम्नलिखित कमांड चलाएँ: +एक बार जब आपका सबग्राफ लिखा जा चुका हो, तो निम्नलिखित कमांड चलाएँ: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -अपने सबग्राफ को प्रमाणित और तैनात करें। तैनाती key सबग्राफ स्टूडियो में सबग्राफ पेज पर पाई जा सकती है। +अपने सबग्राफ को प्रमाणित करें और तैनात करें। तैनाती कुंजी को सबग्राफ Studio में सबग्राफ के पृष्ठ पर पाया जा सकता है। ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -107,39 +107,39 @@ graph deploy ``` ```` -The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. +CLI एक संस्करण लेबल के लिए पूछेगा। यह दृढ़ता से सिफारिश की जाती है कि [semantic versioning](https://semver.org/) का उपयोग करें, जैसे `0.0.1`। -### 6. अपने Subgraph का परीक्षण करें +### 6.
अपने सबग्राफ की समीक्षा करें -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +यदि आप अपना सबग्राफ प्रकाशित करने से पहले उसका परीक्षण करना चाहते हैं, तो आप [सबग्राफ Studio](https://thegraph.com/studio/) का उपयोग करके निम्नलिखित कर सकते हैं: - एक नमूना क्वेरी चलाएँ। -- अपने Subgraph का विश्लेषण करने के लिए डैशबोर्ड में जानकारी देखें। -- लॉग आपको बताएंगे कि क्या आपके Subgraph में कोई त्रुटि है। एक ऑपरेशनल Subgraph के लॉग इस तरह दिखेंगे: +- अपने डैशबोर्ड में अपने सबग्राफ का विश्लेषण करें ताकि जानकारी की जांच की जा सके। +- डैशबोर्ड पर लॉग्स की जाँच करें ताकि यह देखा जा सके कि आपके सबग्राफ में कोई त्रुटि है या नहीं। एक सक्रिय सबग्राफ के लॉग इस प्रकार दिखेंगे: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. अपने Subgraph को ग्राफ़ के The Graph Network पर प्रकाशित करें +### 7. अपने सबग्राफ को The Graph नेटवर्क पर प्रकाशित करें -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +जब आपका सबग्राफ प्रोडक्शन वातावरण के लिए तैयार हो जाता है, तो आप इसे विकेंद्रीकृत नेटवर्क पर प्रकाशित कर सकते हैं। प्रकाशित करना एक ऑनचेन क्रिया है जो निम्नलिखित करता है: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. 
+- यह आपके सबग्राफ को विकेंद्रीकृत [Indexers](/indexing/overview/) द्वारा The Graph Network पर अनुक्रमित किए जाने के लिए उपलब्ध कराता है। +- यह आपकी दर सीमा को हटा देता है और आपके सबग्राफ को [Graph Explorer](https://thegraph.com/explorer/) में सार्वजनिक रूप से खोजने योग्य और क्वेरी करने योग्य बनाता है। +- यह आपके सबग्राफ को [Curators](/resources/roles/curating/) के लिए उपलब्ध कराता है ताकि वे इसे क्यूरेट कर सकें। -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> आप और अन्य लोग आपके सबग्राफ पर जितनी अधिक GRT क्यूरेट करते हैं, उतने ही अधिक Indexers को आपके सबग्राफ को इंडेक्स करने के लिए प्रोत्साहित किया जाएगा, जिससे सेवा की गुणवत्ता में सुधार होगा, विलंबता (latency) कम होगी, और आपके सबग्राफ के लिए नेटवर्क की पुनरावृत्ति (redundancy) बढ़ेगी। #### Subgraph Studio से प्रकाशित -अपने subgraph को प्रकाशित करने के लिए, डैशबोर्ड में Publish बटन पर क्लिक करें। +अपने सबग्राफ को प्रकाशित करने के लिए, डैशबोर्ड में Publish बटन पर क्लिक करें। -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![सबग्राफ Studio पर एक Subgraph प्रकाशित करें](/img/publish-sub-transfer.png) -उस नेटवर्क का चयन करें जिस पर आप अपना Subgraph प्रकाशित करना चाहते हैं। +उस नेटवर्क का चयन करें जिस पर आप अपना सबग्राफ प्रकाशित करना चाहते हैं। #### Publishing from the CLI -Version 0.73.0 के अनुसार, आप अपने subgraph को graph-cli के साथ भी publish कर सकते हैं। +संस्करण 0.73.0 से, आप अपने सबग्राफ को Graph CLI के साथ भी प्रकाशित कर सकते हैं। `graph-cli` खोलें। @@ -157,32 +157,32 @@ graph publish ``` ```` -3. एक विंडो खुलेगी, जो आपको अपनी वॉलेट कनेक्ट करने, मेटाडेटा जोड़ने, और अपने अंतिम Subgraph को आपकी पसंद के नेटवर्क पर डिप्लॉय करने की अनुमति देगी। +3.
एक विंडो खुलेगी, जिससे आप अपना वॉलेट कनेक्ट कर सकते हैं, मेटाडेटा जोड़ सकते हैं और अपने फ़ाइनलाइज़ किए गए सबग्राफ को अपनी पसंद के नेटवर्क पर डिप्लॉय कर सकते हैं। ![cli-ui](/img/cli-ui.png) -To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +अपने परिनियोजन को अनुकूलित करने के लिए, [Publishing a सबग्राफ](/subgraphs/developing/publishing/publishing-a-subgraph/) देखें। -#### Adding signal to your subgraph +#### अपने सबग्राफ में सिग्नल जोड़ना -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. Indexers को अपने सबग्राफ को क्वेरी करने के लिए आकर्षित करने हेतु, आपको इसमें GRT क्यूरेशन सिग्नल जोड़ना चाहिए। - - यह कार्रवाई सेवा की गुणवत्ता में सुधार करती है, विलंबता को कम करती है, और आपके Subgraph के लिए नेटवर्क की पुनरावृत्ति और उपलब्धता को बढ़ाती है। + - यह कार्रवाई सेवा की गुणवत्ता में सुधार करती है, विलंबता को कम करती है, और आपके सबग्राफ के लिए नेटवर्क की पुनरावृत्ति और उपलब्धता को बढ़ाती है। 2. यदि इंडेक्सिंग पुरस्कारों के लिए योग्य हैं, तो Indexers संकेतित राशि के आधार पर GRT पुरस्कार प्राप्त करते हैं। - - कम से कम 3,000 GRT का चयन करना अनुशंसित है ताकि 3 Indexer को आकर्षित किया जा सके। Subgraph फ़ीचर उपयोग और समर्थित नेटवर्क के आधार पर पुरस्कार पात्रता की जांच करें। + - यह अनुशंसा की जाती है कि कम से कम 3,000 GRT को क्यूरेट किया जाए ताकि 3 Indexers को आकर्षित किया जा सके। सबग्राफ फीचर उपयोग और समर्थित नेटवर्क के आधार पर पुरस्कार पात्रता की जांच करें। -To learn more about curation, read [Curating](/resources/roles/curating/). +Curation के बारे में और जानने के लिए, [Curating](/resources/roles/curating/) पढ़ें।
-गैस लागत को बचाने के लिए, आप इसे प्रकाशित करते समय अपने Subgraph को उसी लेनदेन में क्यूरेट कर सकते हैं, इस विकल्प का चयन करके: +गैस लागत बचाने के लिए, आप अपने सबग्राफ को उसी लेनदेन में प्रकाशित कर सकते हैं जिसमें आप इसे क्यूरेट कर रहे हैं, बस इस विकल्प का चयन करें: -![Subgraph publish](/img/studio-publish-modal.png) +![सबग्राफ प्रकाशित](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. अपने सबग्राफ को क्वेरी करें -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +The Graph Network पर अपने सबग्राफ के साथ अब आपके पास प्रति माह 100,000 निःशुल्क क्वेरी की सुविधा है! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +आप अपने सबग्राफ को उसके Query URL पर GraphQL क्वेरी भेजकर क्वेरी कर सकते हैं, जिसे आप Query बटन पर क्लिक करके प्राप्त कर सकते हैं। -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +आपके सबग्राफ से डेटा क्वेरी करने के बारे में अधिक जानकारी के लिए, [Querying The Graph](/subgraphs/querying/introduction/) पढ़ें। diff --git a/website/src/pages/hi/substreams/_meta-titles.json b/website/src/pages/hi/substreams/_meta-titles.json index 6262ad528c3a..83856f5ffbb5 100644 --- a/website/src/pages/hi/substreams/_meta-titles.json +++ b/website/src/pages/hi/substreams/_meta-titles.json @@ -1,3 +1,3 @@ { - "developing": "Developing" + "developing": "विकसित करना" } diff --git a/website/src/pages/hi/substreams/developing/dev-container.mdx b/website/src/pages/hi/substreams/developing/dev-container.mdx index bd4acf16eec7..1e265f9ad332 100644 --- a/website/src/pages/hi/substreams/developing/dev-container.mdx +++ b/website/src/pages/hi/substreams/developing/dev-container.mdx @@ -9,9 +9,9 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project.
You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. -## Prerequisites +## पूर्व आवश्यकताएँ - Ensure Docker and VS Code are up-to-date. @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). 
## Deployment Options diff --git a/website/src/pages/hi/substreams/developing/sinks.mdx b/website/src/pages/hi/substreams/developing/sinks.mdx index 18a9c557bef2..8978f7af1938 100644 --- a/website/src/pages/hi/substreams/developing/sinks.mdx +++ b/website/src/pages/hi/substreams/developing/sinks.mdx @@ -1,21 +1,21 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. -## अवलोकन +## Overview Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks > Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed. - [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. +- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. - [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. - [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. - [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,7 +26,7 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | +| नाम | समर्थन | Maintainer | Source Code | | --- | --- | --- | --- | | SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | | Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | @@ -40,7 +40,7 @@ Sinks are integrations that allow you to send the extracted data to different de ### Community -| Name | Support | Maintainer | Source Code | +| नाम | समर्थन | Maintainer | Source Code | | --- | --- | --- | --- | | MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | | Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | diff --git a/website/src/pages/hi/substreams/developing/solana/account-changes.mdx b/website/src/pages/hi/substreams/developing/solana/account-changes.mdx index 0fb6b35739bd..4282ec4c49c5 100644 --- a/website/src/pages/hi/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/hi/substreams/developing/solana/account-changes.mdx @@ -11,13 +11,13 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). 
+For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. ## शुरू करना -### Prerequisites +### आवश्यक शर्तें Before you begin, ensure that you have the following: diff --git a/website/src/pages/hi/substreams/developing/solana/transactions.mdx b/website/src/pages/hi/substreams/developing/solana/transactions.mdx index c4e038438bba..c298b89b60fe 100644 --- a/website/src/pages/hi/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/hi/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### सबग्राफ 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2.
Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/hi/substreams/introduction.mdx b/website/src/pages/hi/substreams/introduction.mdx index 627898326c47..0bd1ea21c9f6 100644 --- a/website/src/pages/hi/substreams/introduction.mdx +++ b/website/src/pages/hi/substreams/introduction.mdx @@ -7,13 +7,13 @@ sidebarTitle: Introduction To start coding right away, check out the [Substreams Quick Start](/substreams/quick-start/). -## अवलोकन +## Overview Substreams is a powerful parallel blockchain indexing technology designed to enhance performance and scalability within The Graph Network. ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. 
diff --git a/website/src/pages/hi/substreams/publishing.mdx b/website/src/pages/hi/substreams/publishing.mdx index 5905f69b0f07..41eed47b59d1 100644 --- a/website/src/pages/hi/substreams/publishing.mdx +++ b/website/src/pages/hi/substreams/publishing.mdx @@ -1,19 +1,19 @@ --- title: Publishing a Substreams Package -sidebarTitle: Publishing +sidebarTitle: प्रकाशित करना --- Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). -## अवलोकन +## Overview ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package -### Prerequisites +### आवश्यक शर्तें - You must have the Substreams CLI installed. - You must have a Substreams package (`.spkg`) that you want to publish. @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/hi/substreams/quick-start.mdx b/website/src/pages/hi/substreams/quick-start.mdx index 2a54c6032f1a..c4a0d5be8e23 100644 --- a/website/src/pages/hi/substreams/quick-start.mdx +++ b/website/src/pages/hi/substreams/quick-start.mdx @@ -1,11 +1,11 @@ --- -title: Substreams Quick Start -sidebarTitle: जल्दी शुरू +title: सबस्ट्रीम्स क्विक स्टार्ट +sidebarTitle: Quick Start --- Discover how to utilize ready-to-use substream packages or develop your own. -## अवलोकन +## Overview Integrating Substreams can be quick and easy. 
They are permissionless, and you can [obtain a key here](https://thegraph.market/) without providing personal information to start streaming on-chain data. diff --git a/website/src/pages/hi/supported-networks.mdx b/website/src/pages/hi/supported-networks.mdx index 1f329ca83ce0..9ddc02928dbf 100644 --- a/website/src/pages/hi/supported-networks.mdx +++ b/website/src/pages/hi/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - सबग्राफ स्टूडियो निर्भर करता है अंतर्निहित प्रौद्योगिकियों की स्थिरता और विश्वसनीयता पर, जैसे JSON-RPC, फायरहोस और सबस्ट्रीम्स एंडपॉइंट्स। - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. 
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/hi/token-api/_meta-titles.json b/website/src/pages/hi/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/hi/token-api/_meta-titles.json +++ b/website/src/pages/hi/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/hi/token-api/_meta.js b/website/src/pages/hi/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/hi/token-api/_meta.js +++ b/website/src/pages/hi/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/hi/token-api/faq.mdx b/website/src/pages/hi/token-api/faq.mdx new file mode 100644 index 000000000000..5d8d28b2e970 --- /dev/null +++ b/website/src/pages/hi/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## आम + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? 
+ +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. 
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error.
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/hi/token-api/mcp/claude.mdx b/website/src/pages/hi/token-api/mcp/claude.mdx index 0da8f2be031d..7174103725e8 100644 --- a/website/src/pages/hi/token-api/mcp/claude.mdx +++ b/website/src/pages/hi/token-api/mcp/claude.mdx @@ -3,7 +3,7 @@ title: Using Claude Desktop to Access the Token API via MCP sidebarTitle: Claude Desktop --- -## Prerequisites +## आवश्यक शर्तें - [Claude Desktop](https://claude.ai/download) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## विन्यास Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@
```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/hi/token-api/mcp/cline.mdx b/website/src/pages/hi/token-api/mcp/cline.mdx index ab54c0c8f6f0..39d4715e1186 100644 --- a/website/src/pages/hi/token-api/mcp/cline.mdx +++ b/website/src/pages/hi/token-api/mcp/cline.mdx @@ -3,16 +3,16 @@ title: Using Cline to Access the Token API via MCP sidebarTitle: Cline --- -## Prerequisites +## आवश्यक शर्तें - [Cline](https://cline.bot/) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. - The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## विन्यास Create or edit your `cline_mcp_settings.json` file. 
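The hunk above cuts off before showing the Cline settings body. As a sketch only, assuming Cline uses the same `mcpServers` shape as the Claude Desktop config earlier in this patch (the `token-api` server name, `@pinax/mcp` package, and SSE URL are copied from that hunk, and `<your-access-token>` is a placeholder to verify against Cline's documentation):

```json
{
  "mcpServers": {
    "token-api": {
      "command": "npx",
      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
      "env": {
        "ACCESS_TOKEN": "<your-access-token>"
      }
    }
  }
}
```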
diff --git a/website/src/pages/hi/token-api/mcp/cursor.mdx b/website/src/pages/hi/token-api/mcp/cursor.mdx index 658108d1337b..d8e9a09816fa 100644 --- a/website/src/pages/hi/token-api/mcp/cursor.mdx +++ b/website/src/pages/hi/token-api/mcp/cursor.mdx @@ -3,7 +3,7 @@ title: Using Cursor to Access the Token API via MCP sidebarTitle: Cursor --- -## Prerequisites +## आवश्यक शर्तें - [Cursor](https://www.cursor.com/) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## विन्यास Create or edit your `~/.cursor/mcp.json` file. diff --git a/website/src/pages/hi/token-api/quick-start.mdx b/website/src/pages/hi/token-api/quick-start.mdx index 4653c3d41ac6..a381a3c8565c 100644 --- a/website/src/pages/hi/token-api/quick-start.mdx +++ b/website/src/pages/hi/token-api/quick-start.mdx @@ -11,7 +11,7 @@ The Graph's Token API lets you access blockchain token information via a GET req The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. -## Prerequisites +## आवश्यक शर्तें Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. 
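The Token API pages patched above describe the request shape in prose: a GET to `/balances/evm/{address}` with an `Authorization: Bearer <JWT>` header, optional `network_id`/`limit`/`page` parameters, results wrapped in a top-level `data` array, and amounts encoded as strings. A minimal sketch tying those rules together (the `amount` field name inside each record is an illustrative assumption, and no network call is made here):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://token-api.thegraph.com"

def balances_request(address: str, token: str, network_id: str = "mainnet",
                     limit: int = 50, page: int = 1) -> Request:
    """Build an authenticated GET for /balances/evm/{address} (per the FAQ)."""
    query = urlencode({"network_id": network_id, "limit": limit, "page": page})
    return Request(
        f"{BASE}/balances/evm/{address}?{query}",
        headers={
            # Use the JWT access token generated on The Graph Market,
            # not the raw API key.
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    )

def parse_amounts(payload: str) -> list[int]:
    """Results arrive wrapped in a top-level `data` array; amounts are strings
    to avoid precision loss, so convert with Python's arbitrary-precision int.
    The `amount` field name is assumed for illustration."""
    return [int(row["amount"]) for row in json.loads(payload)["data"]]

# Mock response body, illustrating the `data`-wrapped, string-amount format.
sample = '{"data": [{"amount": "123456789012345678901234567890"}]}'
```

Swap `page=2` into `balances_request` to fetch the next batch once the first page is exhausted.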
diff --git a/website/src/pages/it/about.mdx b/website/src/pages/it/about.mdx index 3060784eac83..62f0bf4d3c61 100644 --- a/website/src/pages/it/about.mdx +++ b/website/src/pages/it/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![Un grafico che spiega come The Graph utilizza Graph Node per servire le query ai consumatori di dati](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Il flusso segue questi passi: 1. Una dapp aggiunge dati a Ethereum attraverso una transazione su uno smart contract. 2. Lo smart contract emette uno o più eventi durante l'elaborazione della transazione. -3. Graph Node scansiona continuamente Ethereum alla ricerca di nuovi blocchi e dei dati del vostro subgraph che possono contenere. -4. Graph Node trova gli eventi Ethereum per il vostro subgraph in questi blocchi ed esegue i gestori di mappatura che avete fornito. La mappatura è un modulo WASM che crea o aggiorna le entità di dati che Graph Node memorizza in risposta agli eventi Ethereum. +3. 
Graph Node continually scans Ethereum for new blocks and the data they may contain for your Subgraph. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. La dapp effettua query del Graph Node per ottenere dati indicizzati dalla blockchain, utilizzando il [ GraphQL endpoint del nodo](https://graphql.org/learn/). Il Graph Node a sua volta traduce le query GraphQL in query per il suo archivio dati sottostante, al fine di recuperare questi dati, sfruttando le capacità di indicizzazione dell'archivio. La dapp visualizza questi dati in una ricca interfaccia utente per gli utenti finali, che li utilizzano per emettere nuove transazioni su Ethereum. Il ciclo si ripete. ## I prossimi passi -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
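The about-page hunk above ends by noting that every deployed Subgraph exposes a GraphQL playground, and step 5 of the flow has the dapp querying Graph Node's GraphQL endpoint. As a sketch of what that query looks like over plain HTTP (the endpoint URL and the `transfers`/`value` entity fields are placeholders, not taken from this patch):

```python
import json
from urllib.request import Request

def graphql_request(endpoint, query, variables=None):
    """Wrap a GraphQL query in the standard {"query": ..., "variables": ...} POST body."""
    body = json.dumps({"query": query, "variables": variables or {}}).encode()
    return Request(endpoint, data=body,
                   headers={"Content-Type": "application/json"}, method="POST")

# Hypothetical entities from a Subgraph's schema.graphql -- purely illustrative.
QUERY = """
{
  transfers(first: 5) {
    id
    value
  }
}
"""
req = graphql_request("https://example.com/subgraphs/id/PLACEHOLDER", QUERY)
```

Graph Node accepts this standard GraphQL-over-HTTP shape; the response mirrors the query structure under a top-level `data` key.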
diff --git a/website/src/pages/it/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/it/archived/arbitrum/arbitrum-faq.mdx index 4b6ef7df03fc..5c4dc7fa3aa3 100644 --- a/website/src/pages/it/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/it/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Sicurezza ereditata da Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. La comunità di The Graph ha deciso di procedere con Arbitrum l'anno scorso dopo l'esito della discussione [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ Per sfruttare l'utilizzo di The Graph su L2, utilizza il selettore a discesa per ![Selettore a discesa per cambiare a Arbitrum](/img/arbitrum-screenshot-toggle.png) -## In quanto sviluppatore di subgraph, consumatore di dati, Indexer, Curator o Delegator, cosa devo fare ora? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? 
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Tutto è stato testato accuratamente e un piano di contingenza è in atto per garantire una transizione sicura e senza intoppi. I dettagli possono essere trovati [qui](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/it/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/it/archived/arbitrum/l2-transfer-tools-faq.mdx index bc5a9ac711c5..0dd870395760 100644 --- a/website/src/pages/it/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/it/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con Gli Strumenti di Trasferimento L2 utilizzano il meccanismo nativo di Arbitrum per inviare messaggi da L1 a L2. Questo meccanismo è chiamato "retryable ticket" e viene utilizzato da tutti i bridge di token nativi, incluso il bridge GRT di Arbitrum. Puoi leggere ulteriori dettagli sui retryable tickets nella [documentazione di Arbitrum](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). 
-Quando trasferisci i tuoi asset (subgraph, stake, delegation o curation) su L2, un messaggio viene inviato tramite il bridge GRT di Arbitrum, che crea un "retryable ticket" su L2. Lo strumento di trasferimento include un valore in ETH nella transazione, che viene utilizzato per 1) pagare la creazione del ticket e 2) coprire il costo del gas per eseguire il ticket su L2. Tuttavia, poiché i prezzi del gas potrebbero variare nel tempo fino a quando il ticket non è pronto per l'esecuzione su L2, è possibile che questo tentativo di auto-esecuzione fallisca. Quando ciò accade, il bridge Arbitrum manterrà il "retryable ticket" attivo per un massimo di 7 giorni, e chiunque può riprovare a "riscattare" il ticket (il che richiede un wallet con un po' di ETH trasferiti su Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Questo è ciò che chiamiamo il passaggio "Conferma" in tutti gli strumenti di trasferimento: in molti casi verrà eseguito automaticamente, poiché l'auto-esecuzione ha spesso successo, ma è importante che tu verifichi che sia andato a buon fine. Se non è andato a buon fine e nessuna riprova ha successo entro 7 giorni, il bridge Arbitrum scarterà il "retryable ticket" e i tuoi asset (subgraph, stake, delegation o curation) andranno persi e non potranno essere recuperati. 
I core devs di The Graph hanno un sistema di monitoraggio per rilevare queste situazioni e cercare di riscattare i ticket prima che sia troppo tardi, ma alla fine è tua responsabilità assicurarti che il trasferimento venga completato in tempo. Se hai difficoltà a confermare la tua transazione, ti preghiamo di contattarci utilizzando [questo modulo](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) e i core devs saranno pronti ad aiutarti. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## Trasferimento del Subgraph -### Come faccio a trasferire un mio subgraph? +### How do I transfer my Subgraph? -Per fare un trasferimento del tuo subgraph, dovrai completare i seguenti passaggi: +To transfer your Subgraph, you will need to complete the following steps: 1. Inizializza il trasferimento su Ethereum mainnet 2.
Aspetta 20 minuti per la conferma -3. Conferma il trasferimento del subgraph su Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Termina la pubblicazione del subgraph su Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Aggiorna l'URL della Query (raccomandato) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Da dove devo inizializzare il mio trasferimento? -Puoi inizializzare il tuo trasferimento da [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) o dalla pagina di dettaglio di qualsiasi subgraph. Clicca sul bottone "Trasferisci Subgraph" sulla pagina di dettaglio del subgraph e inizia il trasferimento. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Quanto devo aspettare per il completamento del trasferimento del mio subgraph +### How long do I need to wait until my Subgraph is transferred Il tempo di trasferimento richiede circa 20 minuti. Il bridge Arbitrum sta lavorando in background per completare automaticamente il trasferimento. 
In alcuni casi, i costi del gas potrebbero aumentare e dovrai confermare nuovamente la transazione. -### I miei subgraph saranno ancora rintracciabili dopo averli trasferiti su L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Il tuo subgraph sarà rintracciabile solo sulla rete su cui è stata pubblicata. Ad esempio, se il tuo subgraph è su Arbitrum One, potrai trovarlo solo su Explorer su Arbitrum One e non sarai in grado di trovarlo su Ethereum. Assicurati di avere selezionato Arbitrum One nel tasto in alto nella pagina per essere sicuro di essere sulla rete corretta. Dopo il transfer, il subgraph su L1 apparirà come deprecato. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Il mio subgraph deve essere pubblicato per poterlo trasferire? +### Does my Subgraph need to be published to transfer it? -Per usufruire dello strumento di trasferimento del subgraph, il tuo subgraph deve già essere pubblicato sulla mainnet di Ethereum e deve possedere alcuni segnali di curation di proprietà del wallet che possiede il subgraph. Se il tuo subgraph non è stato pubblicato, è consigliabile pubblicarlo direttamente su Arbitrum One: le commissioni di gas associate saranno considerevolmente più basse. Se desideri trasferire un subgraph pubblicato ma l'account proprietario non inserito nessun segnale di curation su di esso, puoi segnalare una piccola quantità (ad esempio 1 GRT) da quell'account; assicurati di selezionare il segnale "auto-migrante". 
+To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Cosa succede alla versione del mio subgraph sulla mainnet di Ethereum dopo il trasferimento su Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Dopo aver trasferito il tuo subgraph su Arbitrum, la versione sulla mainnet di Ethereum sarà deprecata. Ti consigliamo di aggiornare l'URL della query entro 48 ore. Tuttavia, è previsto un periodo di tolleranza che mantiene funzionante l'URL sulla mainnet in modo che il supporto per eventuali dApp di terze parti possa essere aggiornato. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Dopo il trasferimento, devo anche pubblicare di nuovo su Arbitrum? @@ -80,21 +80,21 @@ Dopo la finestra di trasferimento di 20 minuti, dovrai confermare il trasferimen ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. 
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Is publishing and versioning the same on L2 as on Ethereum mainnet? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Will my subgraph's curation move with my subgraph? +### Will my Subgraph's curation move with my Subgraph? -If you've chosen auto-migrating signal, 100% of your own curation will move with your subgraph to Arbitrum One. All of the subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 subgraph. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Can I move my subgraph back to Ethereum mainnet after I transfer? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Once transferred, your Ethereum mainnet version of this subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet.
However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Why do I need bridged ETH to complete my transfer? @@ -206,19 +206,19 @@ To transfer your curation, you will need to complete the following steps: \*If necessary - i.e. you are using a contract address. -### How will I know if the subgraph I curated has moved to L2? +### How will I know if the Subgraph I curated has moved to L2? -When viewing the subgraph details page, a banner will notify you that this subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the subgraph details page of any subgraph that has moved. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### What if I do not wish to move my curation to L2? -When a subgraph is deprecated you have the option to withdraw your signal. Similarly, if a subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### How do I know my curation successfully transferred? Signal details will be accessible via Explorer approximately 20 minutes after the L2 transfer tool is initiated. 
-### Can I transfer my curation on more than one subgraph at a time? +### Can I transfer my curation on more than one Subgraph at a time? There is no bulk transfer option at this time. @@ -266,7 +266,7 @@ It will take approximately 20 minutes for the L2 transfer tool to complete trans ### Do I have to index on Arbitrum before I transfer my stake? -You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to subgraphs on L2, index them, and present POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Can Delegators move their delegation before I move my indexing stake? diff --git a/website/src/pages/it/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/it/archived/arbitrum/l2-transfer-tools-guide.mdx index 549618bfd7c3..4a34da9bad0e 100644 --- a/website/src/pages/it/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/it/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph has made it easy to move to L2 on Arbitrum One. For each protocol part Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## How to transfer your subgraph to Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefits of transferring your subgraphs +## Benefits of transferring your Subgraphs The Graph's community and core devs have [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security from Ethereum but provides drastically lower gas fees. 
-When you publish or upgrade your subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your subgraphs to Arbitrum, any future updates to your subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your subgraph, increasing the rewards for Indexers on your subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Understanding what happens with signal, your L1 subgraph and query URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. 
It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -When you choose to transfer the subgraph, this will convert all of the subgraph's curation signal to GRT. This is equivalent to "deprecating" the subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the subgraph, where they will be used to mint signal on your behalf. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. If a subgraph owner does not transfer their subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -As soon as the subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the subgraph. 
However, there will be Indexers that will 1) keep serving transferred subgraphs for 24 hours, and 2) immediately start indexing the subgraph on L2. Since these Indexers already have the subgraph indexed, there should be no need to wait for the subgraph to sync, and it will be possible to query the L2 subgraph almost immediately. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries to the L2 subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choosing your L2 wallet -When you published your subgraph on mainnet, you used a connected wallet to create the subgraph, and this wallet owns the NFT that represents this subgraph and allows you to publish updates. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. 
-When transferring the subgraph to Arbitrum, you can choose a different wallet that will own this subgraph NFT on L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1. -If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the subgraph will be lost and cannot be recovered.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparing for the transfer: bridging some ETH -Transferring the subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. 
However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (e.g. 0.01 ETH) for your transaction to be approved.
-## Finding the subgraph Transfer Tool +## Finding the Subgraph Transfer Tool -You can find the L2 Transfer Tool when you're looking at your subgraph's page on Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -It is also available on Explorer if you're connected with the wallet that owns a subgraph and on that subgraph's page on Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicking on the Transfer to L2 button will open the transfer tool where you can ## Step 1: Starting the transfer -Before starting the transfer, you must decide which address will own the subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Also please note transferring the subgraph requires having a nonzero amount of signal on the subgraph with the same account that owns the subgraph; if you haven't signaled on the subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 subgraph (see "Understanding what happens with signal, your L1 subgraph and query URLs" above for more details on what goes on behind the scenes). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. 
![Start the transfer to L2](/img/startTransferL2.png) -## Step 2: Waiting for the subgraph to get to L2 +## Step 2: Waiting for the Subgraph to get to L2 -After you start the transfer, the message that sends your L1 subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. @@ -80,7 +80,7 @@ Once this wait time is over, Arbitrum will attempt to auto-execute the transfer ## Step 3: Confirming the transfer -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your subgraph to L2 will be pending and require a retry within 7 days. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. 
@@ -88,33 +88,33 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Step 4: Finishing the transfer on L2 -At this point, your subgraph and GRT have been received on Arbitrum, but the subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -This will publish the subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Step 5: Updating the query URL -Your subgraph has been successfully transferred to Arbitrum! To query the subgraph, the new URL will be : +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note that the subgraph ID on Arbitrum will be a different than the one you had on mainnet, but you can always find it on Explorer or Studio.
As mentioned above (see "Understanding what happens with signal, your L1 subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the subgraph has been synced on L2. +Note that the Subgraph ID on Arbitrum will be different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## How to transfer your curation to Arbitrum (L2) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Choosing your L2 wallet @@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2.
You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Withdrawing your curation on L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
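The query-URL template documented in "Step 5: Updating the query URL" above can be sketched in code. This is an illustrative helper, not part of any official Graph SDK; the function name and the API key / Subgraph ID values are hypothetical placeholders.

```python
# Hypothetical helper (not an official SDK): fills in the documented URL
# template for a Subgraph that has been transferred to Arbitrum (L2).
def l2_query_url(api_key: str, l2_subgraph_id: str) -> str:
    """Build the Arbitrum gateway endpoint for a transferred Subgraph."""
    return (
        "https://arbitrum-gateway.thegraph.com/api/"
        f"{api_key}/subgraphs/id/{l2_subgraph_id}"
    )

# Placeholder values for illustration only.
url = l2_query_url("my-api-key", "ExampleL2SubgraphId")
print(url)
```

A GraphQL query is then POSTed to this URL as a JSON body, exactly as with the old L1 endpoint; only the host and Subgraph ID change.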
diff --git a/website/src/pages/it/archived/sunrise.mdx b/website/src/pages/it/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/it/archived/sunrise.mdx +++ b/website/src/pages/it/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? 
+### Why were Subgraphs published to Arbitrum? Did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. 
As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
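The retirement rules in the FAQ above (serve a Subgraph only until at least three other Indexers serve it reliably, and drop Subgraphs unqueried for 30 days) can be condensed into a small sketch. This is an illustrative reading of the FAQ, not code from the upgrade Indexer itself; the function and parameter names are made up.

```python
# Illustrative only: a boolean reading of the FAQ's two retirement rules for
# the upgrade Indexer. Not taken from any real indexer codebase.
def upgrade_indexer_serves(other_reliable_indexers: int,
                           days_since_last_query: int) -> bool:
    # Rule 2: stop supporting Subgraphs not queried in the last 30 days.
    if days_since_last_query > 30:
        return False
    # Rule 1: act only as a fallback until three other Indexers serve it
    # successfully and consistently.
    return other_reliable_indexers < 3

print(upgrade_indexer_serves(1, 5))   # sparse coverage, recent queries
print(upgrade_indexer_serves(3, 5))   # three other Indexers already serve it
```

As the FAQ notes, this design pushes the upgrade Indexer's query volume toward zero once the network provides sufficient coverage.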
diff --git a/website/src/pages/it/global.json b/website/src/pages/it/global.json index f0bd80d9715b..c69d5fd49d85 100644 --- a/website/src/pages/it/global.json +++ b/website/src/pages/it/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Descrizione", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Descrizione", + "liveResponse": "Live Response", + "example": "Esempio" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/it/index.json b/website/src/pages/it/index.json index c2d9d0bed1be..f243894b47b5 100644 --- a/website/src/pages/it/index.json +++ b/website/src/pages/it/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -39,12 +39,12 @@ "title": "Supported Networks", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "Tipo", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Documentazione", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." 
+ "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/it/indexing/chain-integration-overview.mdx b/website/src/pages/it/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/it/indexing/chain-integration-overview.mdx +++ b/website/src/pages/it/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/it/indexing/new-chain-integration.mdx b/website/src/pages/it/indexing/new-chain-integration.mdx index e45c4b411010..670e06c752c3 100644 --- a/website/src/pages/it/indexing/new-chain-integration.mdx +++ b/website/src/pages/it/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. 
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. 
(It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable building [Substreams-powered Subgraphs](/substreams/sps/introduction/).
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/it/indexing/overview.mdx b/website/src/pages/it/indexing/overview.mdx index 7a4a5525e2d0..371d0f48cf9a 100644 --- a/website/src/pages/it/indexing/overview.mdx +++ b/website/src/pages/it/indexing/overview.mdx @@ -7,7 +7,7 @@ Gli Indexer sono operatori di nodi di The Graph Network che fanno staking di Gra Il GRT che viene fatto staking nel protocollo è soggetto a un periodo di scongelamento e può essere ridotto se gli Indexer sono malintenzionati e servono dati errati alle applicazioni o se indicizzano in modo errato. Gli Indexer guadagnano anche ricompense per le stake delegate dai Delegator, per contribuire alla rete. -Gli Indexer selezionano i subgraph da indicizzare in base al segnale di curation del subgraph, dove i Curator fanno staking di GRT per indicare quali subgraph sono di alta qualità e dovrebbero essere prioritari. I consumatori (ad esempio, le applicazioni) possono anche impostare i parametri per cui gli Indexer elaborano le query per i loro subgraph e stabilire le preferenze per le tariffe di query. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. 
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. 
A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. 
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple clients. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations.
- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, and shares important decision-making information with clients like the gateways.
- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.
@@ -525,7 +525,7 @@ graph indexer status
#### Indexer management using Indexer CLI
-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
#### Usage
@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar
- `graph indexer rules set [options] ...` - Set one or more indexing rules.
-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed.
- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.
@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported
#### Indexing rules
-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address.
-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing non-deterministically.
diff --git a/website/src/pages/it/indexing/supported-network-requirements.mdx b/website/src/pages/it/indexing/supported-network-requirements.mdx
index 7eed955d1013..87e2db6d20b2 100644
--- a/website/src/pages/it/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/it/indexing/supported-network-requirements.mdx
@@ -6,7 +6,7 @@ title: Supported Network Requirements
| --- | --- | --- | :-: |
| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/it/indexing/tap.mdx b/website/src/pages/it/indexing/tap.mdx index 8604a92b41e7..384ed571abd5 100644 --- a/website/src/pages/it/indexing/tap.mdx +++ b/website/src/pages/it/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Panoramica -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/it/indexing/tooling/graph-node.mdx b/website/src/pages/it/indexing/tooling/graph-node.mdx index b77c651c0bd2..32015adbd8fd 100644 --- a/website/src/pages/it/indexing/tooling/graph-node.mdx +++ b/website/src/pages/it/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node è il componente che indica i subgraph e rende i dati risultanti disponibili per l'interrogazione tramite API GraphQL. È quindi centrale per lo stack degli indexer, ed inoltre il corretto funzionamento di Graph Node è cruciale per il buon funzionamento di un indexer di successo. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### Database PostgreSQL -È l'archivio principale del Graph Node, in cui vengono memorizzati i dati dei subgraph, i metadati sui subgraph e i dati di rete che non dipendono dal subgraph, come la cache dei blocchi e la cache eth_call. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Clienti della rete Per indicizzare una rete, Graph Node deve accedere a un cliente di rete tramite un'API JSON-RPC compatibile con EVM. Questo RPC può connettersi a un singolo cliente o può essere una configurazione più complessa che bilancia il carico su più clienti. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### Nodi IPFS -I metadati di distribuzione del subgraph sono memorizzati sulla rete IPFS. The Graph Node accede principalmente al nodo IPFS durante la distribuzione del subgraph per recuperare il manifest del subgraph e tutti i file collegati. Gli indexer di rete non devono ospitare un proprio nodo IPFS. Un nodo IPFS per la rete è ospitato su https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Server di metriche Prometheus @@ -79,8 +79,8 @@ Quando è in funzione, Graph Node espone le seguenti porte: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ Quando è in funzione, Graph Node espone le seguenti porte: ## Configurazione avanzata del Graph Node -Nella sua forma più semplice, Graph Node può essere utilizzato con una singola istanza di Graph Node, un singolo database PostgreSQL, un nodo IPFS e i client di rete richiesti dai subgraph da indicizzare. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,39 +114,39 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Graph Node multipli -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Si noti che più Graph Node possono essere configurati per utilizzare lo stesso database, che può essere scalato orizzontalmente tramite sharding. #### Regole di distribuzione -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Esempio di configurazione della regola di distribuzione: ```toml [deployment] [[deployment.rule]] -match = { name = "(vip|importante)/.*" } +match = { name = "(vip|important)/.*" } shard = "vip" indexers = [ "index_node_vip_0", "index_node_vip_1" ] [[deployment.rule]] match = { network = "kovan" } -# Nessun shard, quindi usiamo lo shard predefinito chiamato "primario". 
-indicizzatori = [ "index_node_kovan_0" ] +# No shard, so we use the default shard called 'primary' +indexers = [ "index_node_kovan_0" ] [[deployment.rule]] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# Non c'è nessun "match", quindi qualsiasi sottografo corrisponde -shard = [ "sharda", "shardb" ] -indicizzatori = [ +# There's no 'match', so any Subgraph matches +shards = [ "sharda", "shardb" ] +indexers = [ "index_node_community_0", "index_node_community_1", "index_node_community_2", "index_node_community_3", "index_node_community_4", - "indice_nodo_comunità_5" + "index_node_community_5" ] ``` @@ -167,11 +167,11 @@ Ogni nodo il cui --node-id corrisponde all'espressione regolare sarà impostato Per la maggior parte dei casi d'uso, un singolo database Postgres è sufficiente per supportare un'istanza del graph-node. Quando un'istanza del graph-node supera un singolo database Postgres, è possibile suddividere l'archiviazione dei dati del graph-node su più database Postgres. Tutti i database insieme formano lo store dell'istanza del graph-node. Ogni singolo database è chiamato shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. 
Lo sharding diventa utile quando il database esistente non riesce a reggere il carico che Graph Node gli impone e quando non è più possibile aumentare le dimensioni del database. -> In genere è meglio creare un singolo database il più grande possibile, prima di iniziare con gli shard. Un'eccezione è rappresentata dai casi in cui il traffico di query è suddiviso in modo molto disomogeneo tra i subgraph; in queste situazioni può essere di grande aiuto tenere i subgraph ad alto volume in uno shard e tutto il resto in un altro, perché questa configurazione rende più probabile che i dati per i subgraph ad alto volume rimangano nella cache interna del database e non vengano sostituiti da dati non necessari per i subgraph a basso volume. +> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another, because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. Per quanto riguarda la configurazione delle connessioni, iniziare con max_connections in postgresql.conf impostato a 400 (o forse anche a 200) e osservare le metriche di Prometheus store_connection_wait_time_ms e store_connection_checkout_count. Tempi di attesa notevoli (qualsiasi cosa superiore a 5 ms) indicano che le connessioni disponibili sono troppo poche; tempi di attesa elevati possono anche essere causati da un database molto occupato (come un elevato carico della CPU). Tuttavia, se il database sembra altrimenti stabile, tempi di attesa elevati indicano la necessità di aumentare il numero di connessioni.
Nella configurazione, il numero di connessioni che ogni istanza del graph-node può utilizzare è un limite massimo e Graph Node non manterrà aperte le connessioni se non ne ha bisogno. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporto di più reti -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Reti multiple - Fornitori multipli per rete (questo può consentire di suddividere il carico tra i fornitori e di configurare nodi completi e nodi di archivio, con Graph Node che preferisce i fornitori più economici se un determinato carico di lavoro lo consente). @@ -225,11 +225,11 @@ Gli utenti che gestiscono una configurazione di indicizzazione scalare con una c ### Gestione del Graph Node -Dato un Graph Node (o più Graph Nodes!) in funzione, la sfida consiste nel gestire i subgraph distribuiti tra i nodi. Graph Node offre una serie di strumenti che aiutano a gestire i subgraph. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. 
Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Lavorare con i subgraph +### Working with Subgraphs #### Stato dell'indicizzazione API -Disponibile sulla porta 8030/graphql per impostazione predefinita, l'API dello stato di indicizzazione espone una serie di metodi per verificare lo stato di indicizzazione di diversi subgraph, controllare le prove di indicizzazione, ispezionare le caratteristiche dei subgraph e altro ancora. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ Il processo di indicizzazione si articola in tre parti distinte: - Elaborare gli eventi in ordine con i gestori appropriati (questo può comportare la chiamata alla chain per lo stato e il recupero dei dati dall'archivio) - Scrivere i dati risultanti nell'archivio -Questi stadi sono collegati tra loro (cioè possono essere eseguiti in parallelo), ma dipendono l'uno dall'altro. Se i subgraph sono lenti da indicizzare, la causa dipende dal subgraph specifico. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. 
Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Cause comuni di lentezza dell'indicizzazione: @@ -276,24 +276,24 @@ Cause comuni di lentezza dell'indicizzazione: - Il fornitore stesso è in ritardo rispetto alla testa della chain - Lentezza nell'acquisizione di nuove ricevute dal fornitore alla testa della chain -Le metriche di indicizzazione dei subgraph possono aiutare a diagnosticare la causa principale della lentezza dell'indicizzazione. In alcuni casi, il problema risiede nel subgraph stesso, ma in altri, il miglioramento dei provider di rete, la riduzione della contesa del database e altri miglioramenti della configurazione possono migliorare notevolmente le prestazioni dell'indicizzazione. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### I subgraph falliti +#### Failed Subgraphs -Durante l'indicizzazione, i subgraph possono fallire se incontrano dati inaspettati, se qualche componente non funziona come previsto o se c'è un bug nei gestori di eventi o nella configurazione. Esistono due tipi generali di errore: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Guasti deterministici: si tratta di guasti che non possono essere risolti con tentativi di risposta - Fallimenti non deterministici: potrebbero essere dovuti a problemi con il provider o a qualche errore imprevisto di Graph Node. Quando si verifica un errore non deterministico, Graph Node riprova i gestori che non hanno funzionato, riducendo il tempo a disposizione. 
-In alcuni casi, un errore può essere risolto dall'indexer (ad esempio, se l'errore è dovuto alla mancanza del tipo di provider giusto, l'aggiunta del provider richiesto consentirà di continuare l'indicizzazione). In altri casi, invece, è necessario modificare il codice del subgraph. +In some cases, a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Cache dei blocchi e delle chiamate -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph.
-However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. Se si sospetta un'incongruenza nella cache a blocchi, come ad esempio un evento di ricezione tx mancante: @@ -304,7 +304,7 @@ Se si sospetta un'incongruenza nella cache a blocchi, come ad esempio un evento #### Problemi ed errori di query -Una volta che un subgraph è stato indicizzato, gli indexer possono aspettarsi di servire le query attraverso l'endpoint di query dedicato al subgraph. Se l'indexer spera di servire un volume significativo di query, è consigliabile un nodo di query dedicato; in caso di volumi di query molto elevati, gli indexer potrebbero voler configurare shard di replica in modo che le query non abbiano un impatto sul processo di indicizzazione. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. Tuttavia, anche con un nodo di query dedicato e le repliche, alcune query possono richiedere molto tempo per essere eseguite e, in alcuni casi, aumentare l'utilizzo della memoria e avere un impatto negativo sul tempo di query per gli altri utenti. 
@@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analisi delle query -Le query problematiche emergono spesso in due modi. In alcuni casi, sono gli stessi utenti a segnalare la lentezza di una determinata query. In questo caso, la sfida consiste nel diagnosticare la ragione della lentezza, sia che si tratti di un problema generale, sia che si tratti di un problema specifico di quel subgraph o di quella query. E poi, naturalmente, risolverlo, se possibile. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In altri casi, il fattore scatenante potrebbe essere l'elevato utilizzo della memoria su un nodo di query, nel qual caso la sfida consiste nell'identificare la query che causa il problema. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Rimozione dei subgraph +#### Removing Subgraphs > Si tratta di una nuova funzionalità, che sarà disponibile in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
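The sharding, deployment-rule, and block-ingestor options this file documents all live in the same `config.toml`. A minimal combined sketch follows, assuming a two-shard setup; shard names, connection strings, pool sizes, and node IDs are illustrative only, not prescriptive:

```toml
[store]
[store.primary]
connection = "postgresql://graph:password@primary-db/graph"
pool_size = 10

[store.vip]
# High-volume Subgraphs kept in their own shard, per the guidance above
connection = "postgresql://graph:password@vip-db/graph"
pool_size = 10

[deployment]
[[deployment.rule]]
match = { name = "(vip|important)/.*" }
shard = "vip"
indexers = [ "index_node_vip_0" ]

[[deployment.rule]]
# Catch-all rule: no 'match', so any remaining Subgraph lands here
indexers = [ "index_node_community_0" ]

[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
provider = [ { label = "mainnet-archive", url = "http://localhost:8545", features = [ "archive" ] } ]
```

Each `graph-node` instance pointed at this file is started with `--config config.toml` and a `--node-id` matching one of the names used in the rules above.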
diff --git a/website/src/pages/it/indexing/tooling/graphcast.mdx b/website/src/pages/it/indexing/tooling/graphcast.mdx index 6d0cd00b7784..366d38044fd6 100644 --- a/website/src/pages/it/indexing/tooling/graphcast.mdx +++ b/website/src/pages/it/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Attualmente, il costo per trasmettere informazioni ad altri partecipanti alla re L'SDK (Software Development Kit) di Graphcast consente agli sviluppatori di creare radio, che sono applicazioni alimentate da gossip che gli indexer possono eseguire per servire un determinato scopo. Intendiamo inoltre creare alcune radio (o fornire supporto ad altri sviluppatori/team che desiderano creare radio) per i seguenti casi d'uso: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conduzione di aste e coordinamento per la sincronizzazione warp di subgraph, substream e dati Firehose da altri indexer. -- Autodichiarazione sulle analisi delle query attive, compresi i volumi delle richieste di subgraph, i volumi delle commissioni, ecc. -- Autodichiarazione sull'analisi dell'indicizzazione, compresi i tempi di indicizzazione dei subgraph, i costi del gas per i gestori, gli errori di indicizzazione riscontrati, ecc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Autodichiarazione delle informazioni sullo stack, tra cui la versione del graph-node, la versione di Postgres, la versione del client Ethereum, ecc. 
### Scopri di più diff --git a/website/src/pages/it/resources/benefits.mdx b/website/src/pages/it/resources/benefits.mdx index 01393da864a1..e3da622b854c 100644 --- a/website/src/pages/it/resources/benefits.mdx +++ b/website/src/pages/it/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -La curation del segnale su un subgraph è opzionale, una tantum, a costo zero (ad esempio, $1.000 in segnale possono essere curati su un subgraph e successivamente ritirati, con un potenziale di guadagno nel processo). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/it/resources/glossary.mdx b/website/src/pages/it/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/it/resources/glossary.mdx +++ b/website/src/pages/it/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/it/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/it/resources/migration-guides/assemblyscript-migration-guide.mdx index fd2e5c45f39d..8065456f5617 100644 --- a/website/src/pages/it/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/it/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: Guida alla migrazione di AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Ciò consentirà agli sviluppatori di subgraph di utilizzare le nuove caratteristiche del linguaggio AS e della libreria standard. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Caratteristiche @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## Come aggiornare? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -Se non si è sicuri di quale scegliere, si consiglia di utilizzare sempre la versione sicura. Se il valore non esiste, si potrebbe fare una dichiarazione if anticipata con un ritorno nel gestore del subgraph. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Shadowing della variabile @@ -132,7 +132,7 @@ in assembly/index.ts(4,3) ### Confronti nulli -Eseguendo l'aggiornamento sul subgraph, a volte si possono ottenere errori come questi: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // non dà errori in fase di compilazione come dovrebbe ```
+We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check before them. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Verrà compilato ma si interromperà in fase di esecuzione, perché il valore non è stato inizializzato, quindi assicuratevi che il vostro subgraph abbia inizializzato i suoi valori, in questo modo: +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/it/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/it/resources/migration-guides/graphql-validations-migration-guide.mdx index 067bf445e437..cfc30766450e 100644 --- a/website/src/pages/it/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/it/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Guida alla migrazione delle validazione GraphQL +title: GraphQL Validations Migration Guide --- Presto `graph-node` supporterà il 100% di copertura delle specifiche [Specifiche delle validation GraphQL] (https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ Per essere conformi a tali validation, seguire la guida alla migrazione. È possibile utilizzare lo strumento di migrazione CLI per trovare eventuali problemi nelle operazioni GraphQL e risolverli. In alternativa, è possibile aggiornare l'endpoint del client GraphQL per utilizzare l'endpoint `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Testare le query con questo endpoint vi aiuterà a trovare i problemi nelle vostre query.
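The null-check and initialization advice in the AssemblyScript hunks above can be sketched in plain TypeScript. This is only an illustration of the pattern: `Entity`, `store`, `load`, and `handle` are hypothetical stand-ins, not `graph-ts` APIs.

```typescript
// Illustrative stand-ins for the pattern described above (not graph-ts APIs).
type Entity = { id: string; value: number };

const store = new Map<string, Entity>();
store.set("a", { id: "a", value: 42 });

// "Safe" version of load: returns the entity or null, never throws.
function load(id: string): Entity | null {
  return store.get(id) ?? null;
}

// Handler-style function: an early return replaces the `!` non-null
// assertion, so a missing entity cannot crash at runtime.
function handle(id: string): number {
  const entity = load(id);
  if (entity === null) {
    return 0; // early exit in the handler, as the guide suggests
  }
  return entity.value;
}
```

With these stand-ins, `handle("a")` yields the stored value while `handle("missing")` takes the early-return path instead of throwing.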
-> Non è necessario migrare tutti i subgraph; se si utilizza [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) o [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), questi garantiscono già la validità delle query. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Strumento CLI di migrazione diff --git a/website/src/pages/it/resources/roles/curating.mdx b/website/src/pages/it/resources/roles/curating.mdx index 330a80715730..a449b5b9fcc0 100644 --- a/website/src/pages/it/resources/roles/curating.mdx +++ b/website/src/pages/it/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed.
This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. 
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Come segnalare -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Un curator può scegliere di segnalare su una versione specifica del subgraph, oppure può scegliere di far migrare automaticamente il proprio segnale alla versione di produzione più recente di quel subgraph. Entrambe le strategie sono valide e hanno i loro pro e contro. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. 
Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. La migrazione automatica del segnale alla più recente versione di produzione può essere utile per garantire l'accumulo di tariffe di query. Ogni volta che si effettua una curation, si paga una tassa di curation del 1%. Si pagherà anche una tassa di curation del 0,5% per ogni migrazione. Gli sviluppatori di subgraph sono scoraggiati dal pubblicare frequentemente nuove versioni: devono pagare una tassa di curation del 0,5% su tutte le quote di curation auto-migrate. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
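As a rough worked example of the curation taxes quoted in the hunks above (1% on initial signal, 0.5% on auto-migration), the arithmetic can be sketched as follows. The GRT amount is made up for illustration; this is not protocol output.

```typescript
// Back-of-the-envelope curation-tax arithmetic, using basis points
// so the figures stay exact. All amounts are illustrative.
const BPS = 10_000;
const INITIAL_TAX_BPS = 100; // 1% tax, burned on the first signal
const MIGRATE_TAX_BPS = 50;  // 0.5% tax on each auto-migration

const signaled = 10_000; // example GRT amount a curator signals

const initialTax = (signaled * INITIAL_TAX_BPS) / BPS;  // GRT burned up front
const effective = signaled - initialTax;                // GRT left signaled
const migrateTax = (effective * MIGRATE_TAX_BPS) / BPS; // cost per auto-migration
```

On these example figures, the curator burns 100 GRT up front and would pay 49.5 GRT each time their shares auto-migrate to a new version.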
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Rischi 1. Il mercato delle query è intrinsecamente giovane per The Graph e c'è il rischio che la vostra %APY possa essere inferiore a quella prevista a causa delle dinamiche di mercato nascenti. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Un subgraph può fallire a causa di un bug. Un subgraph fallito non matura commissioni della query. Di conseguenza, si dovrà attendere che lo sviluppatore risolva il bug e distribuisca una nuova versione. - - Se siete iscritti alla versione più recente di un subgraph, le vostre quote di partecipazione migreranno automaticamente a quella nuova versione. Questo comporta una tassa di curation di 0,5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. 
(Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## FAQ sulla curation ### 1. Quale % delle tariffe di query guadagnano i curator? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Come si fa a decidere quali subgraph sono di alta qualità da segnalare? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. 
A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Qual è il costo dell'aggiornamento di un subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 
0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. Con quale frequenza posso aggiornare il mio subgraph? +### 4. How often can I update my Subgraph? -Si suggerisce di non aggiornare i subgraph troppo frequentemente. Si veda la domanda precedente per maggiori dettagli. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Posso vendere le mie quote di curation? diff --git a/website/src/pages/it/resources/subgraph-studio-faq.mdx b/website/src/pages/it/resources/subgraph-studio-faq.mdx index 66453e221c08..3aaffa3bd2b9 100644 --- a/website/src/pages/it/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/it/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: FAQ di Subgraph Studio ## 1. Che cos'è Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Come si crea una chiave API? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th Dopo aver creato una chiave API, nella sezione Sicurezza è possibile definire i domini che possono eseguire query di una specifica chiave API. -## 5. Posso trasferire il mio subgraph a un altro proprietario? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig.
You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Si noti che non sarà più possibile vedere o modificare il subgraph nel Studio una volta trasferito. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Come posso trovare gli URL di query per i subgraph se non sono lo sviluppatore del subgraph che voglio usare? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Si ricorda che è possibile creare una chiave API ed eseguire query del qualsiasi subgraph pubblicato sulla rete, anche se si costruisce un subgraph da soli. Queste query tramite la nuova chiave API sono a pagamento, come tutte le altre sulla rete. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, like any other on the network.
diff --git a/website/src/pages/it/resources/tokenomics.mdx b/website/src/pages/it/resources/tokenomics.mdx index c342b803f911..c869fcb1a9da 100644 --- a/website/src/pages/it/resources/tokenomics.mdx +++ b/website/src/pages/it/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Panoramica -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curator - Trovare i migliori subgraph per gli Indexer +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexer - Struttura portante dei dati della blockchain @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. 
This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creare un subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Eseguire query di un subgraph esistente +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. 
These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. 
This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/it/sps/introduction.mdx b/website/src/pages/it/sps/introduction.mdx index 62359b0a7ab0..0e5be69aa0c3 100644 --- a/website/src/pages/it/sps/introduction.mdx +++ b/website/src/pages/it/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduzione --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Panoramica -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/it/sps/sps-faq.mdx b/website/src/pages/it/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/it/sps/sps-faq.mdx +++ b/website/src/pages/it/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs?
+## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. 
-## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. 
A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/it/sps/triggers.mdx b/website/src/pages/it/sps/triggers.mdx index 072d7ba9d194..711dcaa6423a 100644 --- a/website/src/pages/it/sps/triggers.mdx +++ b/website/src/pages/it/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. 
## Panoramica -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). 
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Additional Resources diff --git a/website/src/pages/it/sps/tutorial.mdx b/website/src/pages/it/sps/tutorial.mdx index fb9c4e1c7b5c..98708410813b 100644 --- a/website/src/pages/it/sps/tutorial.mdx +++ b/website/src/pages/it/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Iniziare @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/it/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/it/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/it/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/it/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events.
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared `eth_calls` can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
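The in-memory caching behaviour described for declared `eth_calls` — where the handler's call is served from a cache rather than triggering a fresh RPC round-trip — can be pictured with a small memoization sketch. This is plain TypeScript for illustration only; graph-node's actual cache is internal, and `fakeEthCall` merely stands in for a slow external call:

```typescript
// Illustrative memoization sketch: the second lookup for the same key is
// served from the cache, so only one underlying "RPC" is made.
let rpcCallCount = 0;

function fakeEthCall(contract: string, method: string): string {
  rpcCallCount++; // stands in for a slow external call to an Ethereum node
  return `${contract}.${method}:result`;
}

const cache = new Map<string, string>();

function cachedEthCall(contract: string, method: string): string {
  const key = `${contract}:${method}`;
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const result = fakeEthCall(contract, method);
  cache.set(key, result);
  return result;
}

cachedEthCall("0xPool", "getPoolInfo");
cachedEthCall("0xPool", "getPoolInfo"); // served from cache
console.log(rpcCallCount); // 1
```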
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/it/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/it/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/it/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/it/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can significantly slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, use the `@derivedFrom` directive with arrays, as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity.
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
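The one-to-many pattern behind `@derivedFrom` — storing the relation only on the `Comment` side and deriving the `Post` side by reverse lookup — can be sketched in plain TypeScript. This is illustrative only (graph-node performs the equivalent lookup at the database layer); the entity shapes mirror the `Post`/`Comment` schema above:

```typescript
// Illustrative sketch: each comment stores only its post's id; the post's
// comment list is derived by lookup rather than stored as a growing array.
interface Comment {
  id: string;
  postId: string;
  body: string;
}

const comments: Comment[] = [
  { id: "c1", postId: "p1", body: "first" },
  { id: "c2", postId: "p1", body: "second" },
  { id: "c3", postId: "p2", body: "other" },
];

// Reverse lookup, analogous to the derived `comments` field on `Post`.
function commentsForPost(postId: string): Comment[] {
  return comments.filter((c) => c.postId === postId);
}

console.log(commentsForPost("p1").map((c) => c.id)); // [ 'c1', 'c2' ]
```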
diff --git a/website/src/pages/it/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/it/subgraphs/best-practices/grafting-hotfix.mdx index 62edf8926555..ab6bd38a1247 100644 --- a/website/src/pages/it/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/it/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Panoramica -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/it/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/it/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/it/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/it/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
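The `concatI32()` pattern can be illustrated outside of graph-ts with a plain TypeScript stand-in. This is a hedged sketch: the exact byte order used by `Bytes.concatI32()` in `@graphprotocol/graph-ts` is an assumption here, and in a real mapping you would call the library method rather than this helper.

```typescript
// Hypothetical stand-in for graph-ts's Bytes.concatI32(): append a
// 4-byte integer to a byte array to form a compact, collision-free ID.
function concatI32(bytes: Uint8Array, n: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  // Write the integer into the trailing 4 bytes (big-endian here;
  // the actual graph-ts byte order may differ).
  new DataView(out.buffer).setInt32(bytes.length, n, false);
  return out;
}

// A 32-byte transaction hash combined with a log index yields a 36-byte
// Bytes ID, replacing the slow string form
// `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`.
const txHash = new Uint8Array(32).fill(0xab);
const id = concatI32(txHash, 7);
console.log(id.length); // 36
```

The resulting fixed-width binary ID compares and indexes faster than a hex string with a separator.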
diff --git a/website/src/pages/it/subgraphs/best-practices/pruning.mdx b/website/src/pages/it/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/it/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/it/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/it/subgraphs/best-practices/timeseries.mdx b/website/src/pages/it/subgraphs/best-practices/timeseries.mdx index 112e062e6187..1586f8edb6ff 100644 --- a/website/src/pages/it/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/it/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Panoramica @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/it/subgraphs/billing.mdx b/website/src/pages/it/subgraphs/billing.mdx index c9f380bb022c..ec654ca63f55 100644 --- a/website/src/pages/it/subgraphs/billing.mdx +++ b/website/src/pages/it/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/it/subgraphs/developing/creating/advanced.mdx b/website/src/pages/it/subgraphs/developing/creating/advanced.mdx index 94c7d1f0d42d..741d77c979d9 100644 --- a/website/src/pages/it/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Panoramica -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Errori non fatali -Gli errori di indicizzazione su subgraph già sincronizzati causano, per impostazione predefinita, il fallimento del subgraph e l'interruzione della sincronizzazione. In alternativa, i subgraph possono essere configurati per continuare la sincronizzazione in presenza di errori, ignorando le modifiche apportate dal gestore che ha provocato l'errore. In questo modo gli autori dei subgraph hanno il tempo di correggere i loro subgraph mentre le query continuano a essere servite rispetto al blocco più recente, anche se i risultati potrebbero essere incoerenti a causa del bug che ha causato l'errore. Si noti che alcuni errori sono sempre fatali. Per essere non fatale, l'errore deve essere noto come deterministico. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Per abilitare gli errori non fatali è necessario impostare il seguente flag di caratteristica nel manifesto del subgraph: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -I data source file sono una nuova funzionalità del subgraph per accedere ai dati fuori chain durante l'indicizzazione in modo robusto ed estendibile. I data source file supportano il recupero di file da IPFS e da Arweave. 
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > Questo pone anche le basi per l'indicizzazione deterministica dei dati fuori chain e per la potenziale introduzione di dati arbitrari provenienti da HTTP. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ Questo creerà una nuova data source file, che interrogherà l'endpoint IPFS o A This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulazioni, state usando i data source file! -#### Distribuire i subgraph +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. 
#### Limitazioni -I gestori e le entità di data source file sono isolati dalle altre entità del subgraph, assicurando che siano deterministici quando vengono eseguiti e garantendo che non ci sia contaminazione di data source basate sulla chain. Per essere precisi: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Le entità create di Data Source file sono immutabili e non possono essere aggiornate - I gestori di Data Source file non possono accedere alle entità di altre data source file - Le entità associate al Data Source file non sono accessibili ai gestori alla chain -> Sebbene questo vincolo non dovrebbe essere problematico per la maggior parte dei casi d'uso, potrebbe introdurre complessità per alcuni. Contattate via Discord se avete problemi a modellare i vostri dati basati su file in un subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Inoltre, non è possibile creare data source da una data source file, sia essa una data source onchain o un'altra data source file. Questa restrizione potrebbe essere eliminata in futuro. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. 
-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. 
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. @@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
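The sequential-versus-parallel timing comparison earlier in this section can be checked with a tiny sketch (the three durations are the ones from the example; nothing below is part of the actual Graph tooling):

```typescript
// Durations (in seconds) of the three eth_calls from the example:
// transactions, balance, and token holdings.
const callDurations: number[] = [3, 2, 4]

// Sequential execution waits for each call in turn.
const sequentialTotal = callDurations.reduce((sum, d) => sum + d, 0) // 3 + 2 + 4 = 9

// Declared eth_calls run in parallel, so the slowest call dominates.
const parallelTotal = Math.max(...callDurations) // max(3, 2, 4) = 4
```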
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params`: ```yaml calls: @@ -535,22 +535,22 @@ > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Poiché l'innesto copia piuttosto che indicizzare i dati di base, è molto più veloce portare il subgraph al blocco desiderato rispetto all'indicizzazione da zero, anche se la copia iniziale dei dati può richiedere diverse ore per subgraph molto grandi. Mentre il subgraph innestato viene inizializzato, il Graph Node registra le informazioni sui tipi di entità già copiati. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/it/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/it/subgraphs/developing/creating/assemblyscript-mappings.mdx index 23271ae9c85c..8154b3d9555c 100644 --- a/website/src/pages/it/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Generazione del codice -Per rendere semplice e sicuro il lavoro con gli smart contract, gli eventi e le entità, la Graph CLI può generare tipi AssemblyScript dallo schema GraphQL del subgraph e dagli ABI dei contratti inclusi nelle data source. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. Questo viene fatto con @@ -80,7 +80,7 @@ Questo viene fatto con graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with: ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with: ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`.
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx index 1d6fa48848b3..06fd431e7048 100644 --- a/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: API AssemblyScript --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ La libreria `@graphprotocol/graph-ts` fornisce le seguenti API: ### Versioni -La `apiVersion` nel manifest del subgraph specifica la versione dell'API di mappatura che viene eseguita da the Graph Node per un dato subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Versione | Note di rilascio | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' L'API `store` consente di caricare, salvare e rimuovere entità da e verso il Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creazione di entità @@ -282,8 +282,8 @@ A partire da `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 e `@graphpr The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ L'API di Ethereum fornisce l'accesso agli smart contract, alle variabili di stat #### Supporto per i tipi di Ethereum -Come per le entità, `graph codegen` genera classi per tutti gli smart contract e gli eventi utilizzati in un subgraph. Per questo, gli ABI dei contratti devono far parte dell'origine dati nel manifest del subgraph. 
In genere, i file ABI sono memorizzati in una cartella `abis/`. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -Con le classi generate, le conversioni tra i tipi di Ethereum e i [tipi incorporati](#built-in-types) avvengono dietro le quinte, in modo che gli autori dei subgraph non debbano preoccuparsene. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -L'esempio seguente lo illustra. Dato uno schema di subgraph come +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Accesso allo stato dello smart contract -Il codice generato da `graph codegen` include anche classi per gli smart contract utilizzati nel subgraph. Queste possono essere utilizzate per accedere alle variabili di stato pubbliche e per chiamare le funzioni del contratto nel blocco corrente. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. Un modello comune è quello di accedere al contratto da cui proviene un evento. Questo si ottiene con il seguente codice: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { Finché il `ERC20Contract` su Ethereum ha una funzione pubblica di sola lettura chiamata `symbol`, questa può essere chiamata con `.symbol()`. Per le variabili di stato pubbliche viene creato automaticamente un metodo con lo stesso nome. -Qualsiasi altro contratto che faccia parte del subgraph può essere importato dal codice generato e può essere legato a un indirizzo valido. 
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Gestione delle chiamate annullate @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. L'API `log` include le seguenti funzioni: @@ -590,7 +590,7 @@ L'API `log` include le seguenti funzioni: - `log.info(fmt: string, args: Array): void` - registra un messaggio informativo. - `log.warning(fmt: string, args: Array): void` - registra un avviso. - `log.error(fmt: string, args: Array): void` - registra un messaggio di errore. -- `log.critical(fmt: string, args: Array): void` - registra un messaggio critico _and_ termina il subgraph. +- `log.critical(fmt: string, args: Array): void` - logs a critical message _and_ terminates the Subgraph. L'API `log` accetta una stringa di formato e un array di valori stringa. Quindi sostituisce i segnaposto con i valori stringa dell'array. Il primo segnaposto `{}` viene sostituito dal primo valore dell'array, il secondo segnaposto `{}` viene sostituito dal secondo valore e così via. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) L'unico flag attualmente supportato è `json`, che deve essere passato a `ipfs.map`. Con il flag `json`, il file IPFS deve essere costituito da una serie di valori JSON, un valore per riga.
La chiamata a `ipfs.map` leggerà ogni riga del file, la deserializzerà in un `JSONValue` e chiamerà il callback per ognuno di essi. Il callback può quindi utilizzare le operazioni sulle entità per memorizzare i dati dal `JSONValue`. Le modifiche alle entità vengono memorizzate solo quando il gestore che ha chiamato `ipfs.map` termina con successo; nel frattempo, vengono mantenute in memoria e la dimensione del file che `ipfs.map` può elaborare è quindi limitata. -In caso di successo, `ipfs.map` restituisce `void`. Se una qualsiasi invocazione del callback causa un errore, il gestore che ha invocato `ipfs.map` viene interrotto e il subgraph viene contrassegnato come fallito. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ La classe base `Entity` e la classe figlia `DataSourceContext` hanno degli helpe ### DataSourceContext nel manifesto -La sezione `contesto` all'interno di `dataSources` consente di definire coppie chiave-valore accessibili nelle mappature dei subgraph. I tipi disponibili sono `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List` e `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Ecco un esempio YAML che illustra l'uso di vari tipi nella sezione `context`: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifica un elenco di elementi. Ogni elemento deve specificare il suo tipo e i suoi dati. - `BigInt`: Specifica un valore intero di grandi dimensioni. Deve essere quotato a causa delle sue grandi dimensioni. -Questo contesto è quindi accessibile nei file di mappatura dei subgraph, consentendo di ottenere subgraph più dinamici e configurabili. 
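As a sketch of the shape those types take in a manifest, each `context` entry pairs a `type` with a `data` value (the keys `chainName`, `startEpoch`, and `isEnabled` below are illustrative, not from the original docs):

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    context:
      chainName:
        type: String
        data: 'mainnet'
      startEpoch:
        type: Int
        data: 120
      isEnabled:
        type: Bool
        data: true
```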
+This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/it/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/it/subgraphs/developing/creating/graph-ts/common-issues.mdx index 8d714dad8499..7c21ab8fc43b 100644 --- a/website/src/pages/it/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Problemi comuni di AssemblyScript --- -Ci sono alcuni problemi [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) in cui è comune imbattersi durante lo sviluppo di subgraph. La loro difficoltà di debug è variabile, ma conoscerli può essere d'aiuto. Quello che segue è un elenco non esaustivo di questi problemi: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - L'ambito non è ereditato nelle [closure functions](https://www.assemblyscript.org/status.html#on-closures), cioè le variabili dichiarate al di fuori delle closure functions non possono essere utilizzate. Spiegazione in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
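The closure limitation in that last bullet is easiest to see with a sketch. In AssemblyScript the usual workaround is to hoist the state a callback needs to module scope instead of capturing a local variable (shown here in plain TypeScript syntax; the names are illustrative):

```typescript
// Module-scope state: visible to the callback without closure capture.
// This is the pattern that also compiles under AssemblyScript, where a
// callback capturing a local variable would be rejected.
let total = 0

function accumulate(value: number): void {
  total += value
}

const values = [1, 2, 3]
// The callback only calls into module scope, so nothing is captured.
values.forEach((v: number): void => accumulate(v))
```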
diff --git a/website/src/pages/it/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/it/subgraphs/developing/creating/install-the-cli.mdx index 4f4afcee006a..20770b2e37b7 100644 --- a/website/src/pages/it/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Installare the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Panoramica -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Per cominciare @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Create a Subgraph ### Da un contratto esistente -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### Da un subgraph di esempio -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is I file ABI devono corrispondere al vostro contratto. Esistono diversi modi per ottenere i file ABI: - Se state costruendo il vostro progetto, probabilmente avrete accesso alle ABI più recenti. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Versione | Note di rilascio | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/it/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/it/subgraphs/developing/creating/ql-schema.mdx index d3c22e25f97d..0ff55e8a7234 100644 --- a/website/src/pages/it/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Panoramica -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -Per le relazioni uno-a-molti, la relazione deve sempre essere memorizzata sul lato "uno" e il lato "molti" deve sempre essere derivato. Memorizzare la relazione in questo modo, piuttosto che memorizzare un array di entità sul lato "molti", migliorerà notevolmente le prestazioni sia per l'indicizzazione che per l'interrogazione del subgraph. In generale, la memorizzazione di array di entità dovrebbe essere evitata per quanto possibile. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
#### Esempio @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Questo modo più elaborato di memorizzare le relazioni molti-a-molti si traduce in una minore quantità di dati memorizzati per il subgraph e quindi in un subgraph che spesso è molto più veloce da indicizzare e da effettuare query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Aggiungere commenti allo schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Lingue supportate diff --git a/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx index 6b6247b0ce50..49090d6b963f 100644 --- a/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Panoramica -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| Versione | Note di rilascio | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/it/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/it/subgraphs/developing/creating/subgraph-manifest.mdx index d8b9c415b293..e2ca99ea6043 100644 --- a/website/src/pages/it/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Panoramica -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
Le voci importanti da aggiornare per il manifesto sono: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Gestori di chiamate -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. I gestori di chiamate si attivano solo in uno dei due casi: quando la funzione specificata viene chiamata da un conto diverso dal contratto stesso o quando è contrassegnata come esterna in Solidity e chiamata come parte di un'altra funzione nello stesso contratto. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network.
### Definire un gestore di chiamate @@ -162,31 +162,31 @@ To define a call handler in your manifest, simply add a `callHandlers` array und ```yaml dataSources: - kind: ethereum/contract - name: Factory + name: Gravity network: mainnet source: - address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' - abi: Factory + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript - file: ./src/mappings/factory.ts entities: - - Directory + - Gravatar + - Transaction abis: - - name: Factory - file: ./abis/factory.json - eventHandlers: - - event: NewExchange(address,address) - handler: handleNewExchange + - name: Gravity + file: ./abis/Gravity.json + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar ``` The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. ### Funzione di mappatura -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. 
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Gestori di blocchi -Oltre a sottoscrivere eventi di contratto o chiamate di funzione, un subgraph può voler aggiornare i propri dati quando nuovi blocchi vengono aggiunti alla chain. A tale scopo, un subgraph può eseguire una funzione dopo ogni blocco o dopo i blocchi che corrispondono a un filtro predefinito. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Filtri supportati @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. L'assenza di un filtro per un gestore di blocchi garantisce che il gestore venga chiamato a ogni blocco. Una data source può contenere un solo gestore di blocchi per ogni tipo di filtro.
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Filtro once @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Il gestore definito con il filtro once sarà chiamato una sola volta prima dell'esecuzione di tutti gli altri gestori. Questa configurazione consente al subgraph di utilizzare il gestore come gestore di inizializzazione, eseguendo compiti specifici all'inizio dell'indicizzazione. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Funzione di mappatura -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. 
```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Blocchi di partenza -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Versione | Note di rilascio | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/it/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/it/subgraphs/developing/creating/unit-testing-framework.mdx index 77496e8eb092..c3e791437e9f 100644 --- a/website/src/pages/it/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Per cominciare @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 
👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx index 0bcbe1eddc43..f8b9f74c6479 100644 --- a/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-## Distribuzione del subgraph su più reti +## Deploying the Subgraph to multiple networks -In alcuni casi, si desidera distribuire lo stesso subgraph su più reti senza duplicare tutto il suo codice. Il problema principale è che gli indirizzi dei contratti su queste reti sono diversi. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file should now look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are generated from templates as well.
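As a rough sketch of the templating approach described above, the manifest becomes a `subgraph.template.yaml` with Mustache placeholders that a per-network JSON config fills in. The data-source name, file names, and the command shown in the comment are illustrative assumptions, not taken from this repo:

```yaml
# subgraph.template.yaml (fragment) — {{network}} and {{address}} are
# substituted from a per-network config, e.g.:
#   mustache config.mainnet.json subgraph.template.yaml > subgraph.yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: '{{network}}'
    source:
      address: '{{address}}'
      abi: Gravity
```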
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Politica di archiviazione dei subgraph di Subgraph Studio +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Ogni subgraph colpito da questa politica ha un'opzione per recuperare la versione in questione. +Every Subgraph affected with this policy has an option to bring the version in question back. 
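The `chainHeadBlock`, `latestBlock`, `synced`, and `health` fields described above come from Graph Node's index-node status API. A hedged sketch of such a status query follows; the subgraph name is a placeholder:

```graphql
{
  indexingStatusForCurrentVersion(subgraphName: "org/subgraph-name") {
    synced
    health
    fatalError {
      message
      block {
        number
      }
      handler
    }
    chains {
      network
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}
```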
-## Verifica dello stato di salute del subgraph +## Checking Subgraph health -Se un subgraph si sincronizza con successo, è un buon segno che continuerà a funzionare bene per sempre. Tuttavia, nuovi trigger sulla rete potrebbero far sì che il subgraph si trovi in una condizione di errore non testata o che inizi a rimanere indietro a causa di problemi di prestazioni o di problemi con gli operatori dei nodi. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. 
`health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx index 6d7e019d9d6f..3a07d7d50b24 100644 --- a/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Creare e gestire le chiavi API per specifici subgraph +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### Come creare un subgraph nel Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Compatibilità del subgraph con The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Non deve utilizzare nessuna delle seguenti funzioni: - - ipfs.cat & ipfs.map - - Errori non fatali - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Archiviazione automatica delle versioni del subgraph -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/it/subgraphs/developing/developer-faq.mdx b/website/src/pages/it/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/it/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/it/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? 
-A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. 
However, this is not recommended, as performance will be significantly slower. +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. 
How do I call a contract function or access a public state variable from my Subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. 
Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under which it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/it/subgraphs/developing/introduction.mdx b/website/src/pages/it/subgraphs/developing/introduction.mdx index 53060bdd4de4..70610ef84065 100644 --- a/website/src/pages/it/subgraphs/developing/introduction.mdx +++ b/website/src/pages/it/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp.
Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/it/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/it/subgraphs/developing/managing/deleting-a-subgraph.mdx index 90a2eb4b7d33..b8c2330ca49d 100644 --- a/website/src/pages/it/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/it/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. 
+ - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- I Curator non potranno più segnalare il subgraph. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/it/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/it/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/it/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/it/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. 
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-address ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2.
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 8706691669d1..1672a6619d13 100644 --- a/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Pubblicare un subgraph nella rete decentralizzata +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Aggiornamento dei metadati per un subgraph pubblicato +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/it/subgraphs/developing/subgraphs.mdx b/website/src/pages/it/subgraphs/developing/subgraphs.mdx index a5d5fa16fd8e..7e6c212622d1 100644 --- a/website/src/pages/it/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/it/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Ciclo di vita del subgraph -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/it/subgraphs/explorer.mdx b/website/src/pages/it/subgraphs/explorer.mdx index ef26a5b18543..5db7212c1fb0 100644 --- a/website/src/pages/it/subgraphs/explorer.mdx +++ b/website/src/pages/it/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Panoramica -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Segnala/non segnala i subgraph +- Signal/Un-signal on Subgraphs - Visualizza ulteriori dettagli, come grafici, ID di distribuzione corrente e altri metadati -- Cambia versione per esplorare le iterazioni passate del subgraph -- Consulta i subgraph tramite GraphQL -- Test dei subgraph nel playground -- Visualizza gli Indexer che stanno indicizzando su un determinato subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Statistiche del subgraph (allocazione, Curator, ecc.) -- Visualizza l'entità che ha pubblicato il subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - l'importo massimo di stake delegato che l'Indexer può accettare in modo produttivo. Uno stake delegato in eccesso non può essere utilizzato per l'allocazione o per il calcolo dei premi. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curator -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Scheda di subgraph -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Scheda di indicizzazione -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Questa sezione include anche i dettagli sui compensi netti degli Indexer e sulle tariffe nette di query. 
Verranno visualizzate le seguenti metriche: @@ -223,13 +223,13 @@ Tenete presente che questo grafico è scorrevole orizzontalmente, quindi se scor ### Scheda di curation -Nella scheda di Curation si trovano tutti i subgraph sui quali si sta effettuando una segnalazione (che consente di ricevere commissioni della query). La segnalazione consente ai curator di evidenziare agli Indexer quali subgraph sono di valore e affidabili, segnalando così la necessità di indicizzarli. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. All'interno di questa scheda è presente una panoramica di: -- Tutti i subgraph su cui si effettua la curation con i dettagli del segnale -- Totali delle quote per subgraph -- Ricompense della query per subgraph +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Aggiornamento attuale dei dettagli ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/it/subgraphs/guides/_meta.js b/website/src/pages/it/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/it/subgraphs/guides/_meta.js +++ b/website/src/pages/it/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/it/subgraphs/guides/arweave.mdx b/website/src/pages/it/subgraphs/guides/arweave.mdx index 08e6c4257268..e59abffa383f 100644 --- a/website/src/pages/it/subgraphs/guides/arweave.mdx +++ b/website/src/pages/it/subgraphs/guides/arweave.mdx @@ -92,9 +92,9 @@ Arweave data sources support two types of handlers: - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. 
Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` > The source.owner can be the owner's address, or their Public Key. - +> > Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. ## Schema Definition diff --git a/website/src/pages/it/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/it/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..42d80c795662 100644 --- a/website/src/pages/it/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/it/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Panoramica -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. 
Install Cana CLI
+
+Use npm to install it globally:

```bash
npm install -g contract-analyzer
```

-Set up a blockchain for analysis:
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:

```bash
cana setup
```

-Provide the required block explorer API and block explorer endpoint URL details when prompted.
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.

-Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.

-## 🍳 Usage
+### Steps: Using Cana CLI for Smart Contract Analysis

-### 🔹 Chain Selection
+#### 1. Select a Chain

-Cana supports multiple EVM-compatible chains.
+Cana CLI supports multiple EVM-compatible chains.

-List chains added with:
+To list the chains you've added, run this command:

```bash
cana chains
```

-Then select a chain with:
+Then select a chain with this command:

```bash
cana chains --switch
```

-Once a chain is selected, all subsequent contract analases will continue on that chain.
+Once a chain is selected, all subsequent contract analyses will continue on that chain.

-### 🔹 Basic Contract Analysis
+#### 2. Basic Contract Analysis

-Analyze a contract with:
+Run the following command to analyze a contract:

```bash
cana analyze 0xContractAddress
```

-or
+oppure

```bash
cana -a 0xContractAddress
```

-This command displays essential contract information in the terminal using a clear, organized format.
+This command fetches and displays essential contract information in the terminal using a clear, organized format.

-### 🔹 Understanding Output
+#### 3.
Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/it/subgraphs/guides/near.mdx b/website/src/pages/it/subgraphs/guides/near.mdx index e78a69eb7fa2..baa5bcc79157 100644 --- a/website/src/pages/it/subgraphs/guides/near.mdx +++ b/website/src/pages/it/subgraphs/guides/near.mdx @@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. In the interim, y If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). 
Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. -## References +## Riferimenti - [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/it/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/it/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..b247912c90e6 100644 --- a/website/src/pages/it/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/it/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## Panoramica We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..51d882cda5e9 --- /dev/null +++ b/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Introduzione + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. 
Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. 
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but the entities composed on top of them cannot use additional aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t use normal event handlers, call handlers, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Iniziare
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
diff --git a/website/src/pages/it/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/it/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..ca66ccfd91f8 100644 --- a/website/src/pages/it/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/it/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -74,7 +74,7 @@ graph deploy --ipfs-hash You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. 
-#### Example +#### Esempio [CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: diff --git a/website/src/pages/it/subgraphs/querying/best-practices.mdx b/website/src/pages/it/subgraphs/querying/best-practices.mdx index c797e432ac0b..d4bb8b226105 100644 --- a/website/src/pages/it/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/it/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Gestione dei subgraph a cross-chain: effettuare query di più subgraph in un'unica query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Risultato completamente tipizzato @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`

Example of inefficient querying:

diff --git a/website/src/pages/it/subgraphs/querying/from-an-application.mdx b/website/src/pages/it/subgraphs/querying/from-an-application.mdx
index d2ac36f09846..d5b632cd6f90 100644
--- a/website/src/pages/it/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/it/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
---
title: Eseguire una query da un'applicazione
+sidebarTitle: Querying from an App
---

Learn how to query The Graph from your application.

@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d

### Subgraph Studio Endpoint

-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:

```
https://api.studio.thegraph.com/query///
```

@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///

### The Graph Network Endpoint

-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:

```
https://gateway.thegraph.com/api//subgraphs/id/
```

-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Gestione dei subgraph a cross-chain: effettuare query di più subgraph in un'unica query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Risultato completamente tipizzato @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Step 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Step 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Step 1 diff --git a/website/src/pages/it/subgraphs/querying/graph-client/README.md b/website/src/pages/it/subgraphs/querying/graph-client/README.md index 416cadc13c6f..bcbf74973703 100644 --- a/website/src/pages/it/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/it/subgraphs/querying/graph-client/README.md @@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## Per cominciare You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### Esempi You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/it/subgraphs/querying/graph-client/live.md b/website/src/pages/it/subgraphs/querying/graph-client/live.md index e6f726cb4352..1a899ac2dcbf 100644 --- a/website/src/pages/it/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/it/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## Per cominciare Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/it/subgraphs/querying/graphql-api.mdx b/website/src/pages/it/subgraphs/querying/graphql-api.mdx index 45100b8f6d68..6449bb254449 100644 --- a/website/src/pages/it/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/it/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). 
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -Questo può essere utile se si vuole recuperare solo le entità che sono cambiate, ad esempio dall'ultima volta che è stato effettuato il polling. In alternativa, può essere utile per indagare o fare il debug di come le entità stanno cambiando nel subgraph (se combinato con un filtro di blocco, è possibile isolare solo le entità che sono cambiate in un blocco specifico). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Query di ricerca fulltext -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. 
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Metadati del Subgraph -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. 
This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -Se viene fornito un blocco, i metadati si riferiscono a quel blocco, altrimenti viene utilizzato il blocco indicizzato più recente. Se fornito, il blocco deve essere successivo al blocco iniziale del subgraph e inferiore o uguale al blocco indicizzato più recente. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ Se viene fornito un blocco, i metadati si riferiscono a quel blocco, altrimenti - hash: l'hash del blocco - numero: il numero del blocco -- timestamp: il timestamp del blocco, se disponibile (attualmente è disponibile solo per i subgraph che indicizzano le reti EVM) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/it/subgraphs/querying/introduction.mdx b/website/src/pages/it/subgraphs/querying/introduction.mdx index c2ebb666bfce..26330f644563 100644 --- a/website/src/pages/it/subgraphs/querying/introduction.mdx +++ b/website/src/pages/it/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Panoramica -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. 
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx index ea42572de442..fc4ebe1f3daf 100644 --- a/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: Gestione delle chiavi API +title: Managing API keys --- ## Panoramica -API keys are needed to query subgraphs. 
They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Importo di GRT speso 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Visualizzare e gestire i nomi di dominio autorizzati a utilizzare la chiave API - - Assegnare i subgraph che possono essere interrogati con la chiave API + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/it/subgraphs/querying/python.mdx b/website/src/pages/it/subgraphs/querying/python.mdx index 55cae50be8a9..c289ab7ea6b0 100644 --- a/website/src/pages/it/subgraphs/querying/python.mdx +++ b/website/src/pages/it/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds è una libreria Python intuitiva per query dei subgraph, realizzata da [Playgrounds](https://playgrounds.network/). 
Permette di collegare direttamente i dati dei subgraph a un ambiente dati Python, consentendo di utilizzare librerie come [pandas](https://pandas.pydata.org/) per eseguire analisi dei dati! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offre una semplice API Pythonic per la creazione di query GraphQL, automatizza i flussi di lavoro più noiosi come la paginazione, e dà agli utenti avanzati la possibilità di effettuare trasformazioni controllate dello schema. @@ -17,24 +17,24 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Una volta installato, è possibile testare subgrounds con la seguente query. L'esempio seguente prende un subgraph per il protocollo Aave v2 e effettua query dei primi 5 mercati ordinati per TVL (Total Value Locked), seleziona il loro nome e il loro TVL (in USD) e restituisce i dati come pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). 
```python -da subgrounds import Subgrounds +from subgrounds import Subgrounds sg = Subgrounds() -# Caricare il subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# Costruire la query +# Construct the query latest_markets = aave_v2.Query.markets( orderBy=aave_v2.Market.totalValueLockedUSD, orderDirection='desc', first=5, ) -# Restituire la query a un dataframe +# Return query to a dataframe sg.query_df([ latest_markets.name, latest_markets.totalValueLockedUSD, diff --git a/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. 
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. 
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/it/subgraphs/quick-start.mdx b/website/src/pages/it/subgraphs/quick-start.mdx index dc280ec699d3..a803ac8695fa 100644 --- a/website/src/pages/it/subgraphs/quick-start.mdx +++ b/website/src/pages/it/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Quick Start --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". 
It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. 
+- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -See the following screenshot for an example for what to expect when initializing your subgraph: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. 
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. 
-Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. 
Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. 
A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! 
-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/it/substreams/developing/dev-container.mdx b/website/src/pages/it/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/it/substreams/developing/dev-container.mdx +++ b/website/src/pages/it/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. 
For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/it/substreams/developing/sinks.mdx b/website/src/pages/it/substreams/developing/sinks.mdx index 4689f71ab6a2..5b96274b08b7 100644 --- a/website/src/pages/it/substreams/developing/sinks.mdx +++ b/website/src/pages/it/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. 
## Sinks diff --git a/website/src/pages/it/substreams/developing/solana/account-changes.mdx b/website/src/pages/it/substreams/developing/solana/account-changes.mdx index 6f19b0c346e3..8a1bdb86a7b4 100644 --- a/website/src/pages/it/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/it/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. 
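The account-change semantics in the hunk above (only the latest update per account is recorded, and a `deleted == True` payload marks removal) can be sketched in plain Python. This is an illustration only: the payload dicts below are made up for the example and are not the actual `sf.solana.type.v1` Protobuf types referenced in the docs.

```python
# Made-up decoded account-update payloads; real Substreams payloads follow
# the sf.solana.type.v1 Protobuf schema linked in the section above.
updates = [
    {"account": "A", "slot": 10, "deleted": False, "data": b"v1"},
    {"account": "A", "slot": 12, "deleted": False, "data": b"v2"},
    {"account": "B", "slot": 11, "deleted": True, "data": b""},
]

# Keep only the latest update seen for each account within the block...
latest = {}
for update in sorted(updates, key=lambda u: u["slot"]):
    latest[update["account"]] = update

# ...and drop accounts whose final payload in the block marks a deletion.
state = {acct: u for acct, u in latest.items() if not u["deleted"]}
```

After this pass, `state` holds one entry per surviving account, carrying only its most recent data, which mirrors the "latest update per account" rule the documentation describes.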
diff --git a/website/src/pages/it/substreams/developing/solana/transactions.mdx b/website/src/pages/it/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/it/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/it/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/it/substreams/introduction.mdx b/website/src/pages/it/substreams/introduction.mdx index 9cda1108f1a6..a4c2a11de271 100644 --- a/website/src/pages/it/substreams/introduction.mdx +++ b/website/src/pages/it/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/it/substreams/publishing.mdx b/website/src/pages/it/substreams/publishing.mdx index d8904a49d38d..31a4461815a5 100644 --- a/website/src/pages/it/substreams/publishing.mdx +++ b/website/src/pages/it/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/it/supported-networks.mdx b/website/src/pages/it/supported-networks.mdx index 7ae7ff45350a..ef2c28393033 100644 --- a/website/src/pages/it/supported-networks.mdx +++ b/website/src/pages/it/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. 
Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/it/token-api/_meta-titles.json b/website/src/pages/it/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/it/token-api/_meta-titles.json +++ b/website/src/pages/it/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/it/token-api/_meta.js b/website/src/pages/it/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/it/token-api/_meta.js +++ b/website/src/pages/it/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/it/token-api/faq.mdx b/website/src/pages/it/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/it/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. 
JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. 
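A minimal sketch of a correctly formed request, using only the Python standard library. The JWT value is a placeholder, and the `/balances/evm/{address}` path follows the example given elsewhere in this FAQ; verify both against your own credentials and the API reference:

```python
import urllib.request

TOKEN_API = "https://token-api.thegraph.com"
# Placeholder: use the Access Token generated on The Graph Market, NOT the API key itself.
access_token = "eyJhbGciOi..."

def build_request(path: str) -> urllib.request.Request:
    """Attach the Bearer token exactly as the API expects it."""
    if not path.startswith("/"):
        raise ValueError("path should start with '/'")
    return urllib.request.Request(
        TOKEN_API + path,
        headers={
            # The "Bearer " prefix is required; omitting it is a common cause of 401/403.
            "Authorization": f"Bearer {access_token}",
            # Optional but recommended; the API returns JSON by default.
            "Accept": "application/json",
        },
    )

req = build_request("/balances/evm/0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045")
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) then returns the JSON body described in the answers below.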
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error.
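The `data` wrapper and string-encoded amounts described in the surrounding entries can be handled defensively like this. Field names such as `amount` are illustrative placeholders, not a documented schema, so adapt them to the actual endpoint you call:

```python
from decimal import Decimal

def parse_balances(response: dict) -> list[dict]:
    """Index into the top-level 'data' array; an empty list means no records, not an error."""
    rows = response.get("data", [])
    parsed = []
    for row in rows:
        # Amounts come back as strings to avoid precision loss
        # (they often exceed the float-safe integer range); convert explicitly.
        raw = int(row["amount"])
        human = Decimal(raw) / Decimal(10) ** row["decimals"]
        parsed.append({**row, "amount_human": human})
    return parsed

# Illustrative payload shapes: one 18-decimal balance, and an empty (but valid) result.
sample = {"data": [{"amount": "12345678901234567890", "decimals": 18}]}
empty = {"data": []}
```

Treating `{"data": []}` as "no records" rather than an error matches the behavior described above; a malformed address would instead surface as a 4xx response.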
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
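The pagination, history, and address-format rules covered in the entries above can be combined when building request URLs. This sketch assumes a `/transfers/evm/{address}` path patterned on the `/balances/evm/{address}` example; check the exact path and parameter names against the API reference before relying on it:

```python
from urllib.parse import urlencode

BASE = "https://token-api.thegraph.com"

def transfers_url(address: str, *, network_id: str = "mainnet",
                  limit: int = 10, page: int = 1, age: int = 30) -> str:
    """Build a Transfers query URL; defaults mirror the FAQ (10 items, 30 days, mainnet)."""
    if not 1 <= limit <= 500:
        raise ValueError("limit must be 1-500")
    if not 1 <= age <= 180:
        raise ValueError("age must be 1-180 days")
    # The endpoint is case-insensitive and accepts addresses with or without the 0x prefix;
    # normalize so length checks are simple.
    addr = address.lower().removeprefix("0x")
    if len(addr) != 40:
        raise ValueError("expected a 40-hex-digit (20-byte) address")
    query = urlencode({"network_id": network_id, "limit": limit, "page": page, "age": age})
    return f"{BASE}/transfers/evm/0x{addr}?{query}"

# Second 50-item page of six months of history:
url = transfers_url("0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045", limit=50, page=2, age=180)
```

Incrementing `page` while holding `limit` fixed walks through older batches, and `age=180` is the documented maximum window per call.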
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/it/token-api/mcp/claude.mdx b/website/src/pages/it/token-api/mcp/claude.mdx index 0da8f2be031d..12a036b6fc24 100644 --- a/website/src/pages/it/token-api/mcp/claude.mdx +++ b/website/src/pages/it/token-api/mcp/claude.mdx @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/it/token-api/mcp/cline.mdx b/website/src/pages/it/token-api/mcp/cline.mdx index ab54c0c8f6f0..ef98e45939fe 100644 --- a/website/src/pages/it/token-api/mcp/cline.mdx +++ b/website/src/pages/it/token-api/mcp/cline.mdx @@ -10,7 +10,7 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) ## Configuration diff --git a/website/src/pages/ja/about.mdx b/website/src/pages/ja/about.mdx index c867800369a3..b4462cd3c1c8 100644 --- a/website/src/pages/ja/about.mdx +++ b/website/src/pages/ja/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. 
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![グラフがグラフ ノードを使用してデータ コンシューマーにクエリを提供する方法を説明する図](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte 1. Dapp は、スマート コントラクトのトランザクションを通じて Ethereum にデータを追加します。 2.
スマートコントラクトは、トランザクションの処理中に 1 つまたは複数のイベントを発行します。 -3. Graph Node は、Ethereum の新しいブロックと、それに含まれる自分のサブグラフのデータを継続的にスキャンします。 -4. Graph Node は、これらのブロックの中からあなたのサブグラフの Ethereum イベントを見つけ出し、あなたが提供したマッピングハンドラーを実行します。 マッピングとは、イーサリアムのイベントに対応して Graph Node が保存するデータエンティティを作成または更新する WASM モジュールのことです。 +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Dapp は、ノードの [GraphQL エンドポイント](https://graphql.org/learn/) を使用して、ブロックチェーンからインデックス付けされたデータをグラフ ノードに照会します。グラフ ノードは、ストアのインデックス作成機能を利用して、このデータを取得するために、GraphQL クエリを基盤となるデータ ストアのクエリに変換します。 dapp は、このデータをエンドユーザー向けの豊富な UI に表示し、エンドユーザーはそれを使用して Ethereum で新しいトランザクションを発行します。サイクルが繰り返されます。 ## 次のステップ -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. 
diff --git a/website/src/pages/ja/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ja/archived/arbitrum/arbitrum-faq.mdx index 3ab2bdbbf83b..cc0c098f0af1 100644 --- a/website/src/pages/ja/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/ja/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - イーサリアムから継承したセキュリティ -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Graph コミュニティは、[GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) の議論の結果を受けて、昨年 Arbitrum を進めることを決定しました。 @@ -39,7 +39,7 @@ L2でのThe Graphの活用には、このドロップダウンスイッチャー ![Arbitrum を切り替えるドロップダウン スイッチャー](/img/arbitrum-screenshot-toggle.png) -## サブグラフ開発者、データ消費者、インデクサー、キュレーター、デリゲーターは何をする必要がありますか? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. 
Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto すべてが徹底的にテストされており、安全かつシームレスな移行を保証するための緊急時対応計画が整備されています。詳細は[here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20)をご覧ください。 -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-faq.mdx index 70999970ca9a..32be44b363b9 100644 --- a/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ EthereumやArbitrumのようなEVMブロックチェーン上のウォレット L2転送ツールは、アービトラムのネイティブメカニズムを使用してL1からL2にメッセージを送信します。このメカニズムは「再試行可能チケット」と呼ばれ、Arbitrum GRTブリッジを含むすべてのネイティブトークンブリッジで使用されます。再試行可能なチケットの詳細については、[アービトラムドキュメント](https://docs.arbitrum.io/arbos/l1 からl2へのメッセージング)を参照してください。 -資産(サブグラフ、ステーク、委任、またはキュレーション)をL2に転送する際、Arbitrum GRTブリッジを介してメッセージが送信され、L2でretryable ticketが作成されます。転送ツールにはトランザクションに一部のETHが含まれており、これは1)チケットの作成に支払われ、2)L2でのチケットの実行に必要なガスに使用されます。ただし、チケットがL2で実行可能になるまでの時間でガス料金が変動する可能性があるため、この自動実行試行が失敗することがあります。その場合、Arbitrumブリッジはretryable ticketを最大7日間保持し、誰でもそのチケットを「償還」しようと再試行できます(これにはArbitrumにブリッジされた一部のETHを持つウォレットが必要です)。 +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which 
creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, which is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -これは、すべての転送ツールで「確認」ステップと呼んでいるものです。ほとんどの場合、自動実行は成功するため、自動的に実行されますが、確認が完了したことを確認するために戻ってチェックすることが重要です。成功せず、7日間で成功した再試行がない場合、Arbitrumブリッジはそのチケットを破棄し、あなたの資産(サブグラフ、ステーク、委任、またはキュレーション)は失われ、回復できません。The Graphのコア開発者は、これらの状況を検出し、遅すぎる前にチケットを償還しようとする監視システムを設置していますが、最終的には転送が時間内に完了することを確認する責任があなたにあります。トランザクションの確認に問題がある場合は、[this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) を使用して連絡し、コア開発者が助けてくれるでしょう。 +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### 委任/ステーク/キュレーション転送を開始しましたが、L2 まで転送されたかどうかわかりません。正しく転送されたことを確認するにはどうすればよいですか?
@@ -36,43 +36,43 @@ L1トランザクションのハッシュを持っている場合(これはウ ## 部分グラフの転送 -### サブグラフを転送するにはどうすればよいですか? +### How do I transfer my Subgraph? -サブグラフを転送するには、次の手順を完了する必要があります。 +To transfer your Subgraph, you will need to complete the following steps: 1. イーサリアムメインネットで転送を開始する 2. 確認を待つために20分お待ちください。 -3. Arbitrum でサブグラフ転送を確認します\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Arbitrum でサブグラフの公開を完了する +4. Finish publishing Subgraph on Arbitrum 5. クエリ URL を更新 (推奨) -\*注意:7日以内に転送を確認する必要があります。それ以外の場合、サブグラフが失われる可能性があります。ほとんどの場合、このステップは自動的に実行されますが、Arbitrumでガス価格が急上昇した場合には手動で確認する必要があるかもしれません。このプロセス中に問題が発生した場合、サポートを受けるためのリソースが用意されています:support@thegraph.com に連絡するか、[Discord](https://discord.gg/graphprotocol)でお問い合わせください\。 +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### どこから転送を開始すればよいですか? -トランスファーを開始するには、[Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer)またはサブグラフの詳細ページからトランスファーを開始できます。サブグラフの詳細ページで「サブグラフを転送」ボタンをクリックしてトランスファーを開始してください。 +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### サブグラフが転送されるまでどれくらい待つ必要がありますか +### How long do I need to wait until my Subgraph is transferred トランスファーには約20分かかります。Arbitrumブリッジはバックグラウンドでブリッジトランスファーを自動的に完了します。一部の場合、ガス料金が急上昇する可能性があり、トランザクションを再度確認する必要があるかもしれません。 -### 私のサブグラフは L2 に転送した後も検出可能ですか? +### Will my Subgraph still be discoverable after I transfer it to L2? 
-あなたのサブグラフは、それが公開されたネットワーク上でのみ発見できます。たとえば、あなたのサブグラフがArbitrum Oneにある場合、それはArbitrum OneのExplorerでのみ見つけることができ、Ethereum上では見つけることはできません。正しいネットワークにいることを確認するために、ページの上部にあるネットワーク切り替えツールでArbitrum Oneを選択していることを確認してください。トランスファー後、L1サブグラフは非推奨として表示されます。 +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### 私のサブグラフを転送するには公開する必要がありますか? +### Does my Subgraph need to be published to transfer it? -サブグラフ転送ツールを活用するには、サブグラフがすでにEthereumメインネットに公開され、そのサブグラフを所有するウォレットが所有するキュレーション信号を持っている必要があります。サブグラフが公開されていない場合、Arbitrum Oneに直接公開することをお勧めします。関連するガス料金はかなり低くなります。公開されたサブグラフを転送したいが、所有者のアカウントがそれに対してキュレーション信号を出していない場合、そのアカウントから少額(たとえば1 GRT)の信号を送ることができます。必ず「auto-migrating(自動移行)」信号を選択してください。 +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Arbitrumへの転送後、Ethereumメインネットバージョンの私のサブグラフはどうなりますか? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -サブグラフをArbitrumに転送した後、Ethereumメインネットワークのバージョンは非推奨とされます。おすすめでは、48時間以内にクエリURLを更新することをお勧めしています。ただし、サードパーティのDAppサポートが更新されるために、メインネットワークのURLが機能し続ける猶予期間も設けられています。 +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. 
We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### 転送後、Arbitrum上で再公開する必要がありますか? @@ -80,21 +80,21 @@ L1トランザクションのハッシュを持っている場合(これはウ ### 再公開中にエンドポイントでダウンタイムが発生しますか? -短期間のダウンタイムを経験する可能性は低いですが、L1でサブグラフをサポートしているインデクサーと、サブグラフが完全にL2でサポートされるまでインデクシングを続けるかどうかに依存することがあります。 +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### L2上での公開とバージョニングは、Ethereumメインネットと同じですか? -はい、Subgraph Studioで公開する際には、公開ネットワークとしてArbitrum Oneを選択してください。Studioでは、最新のエンドポイントが利用可能で、最新の更新されたサブグラフバージョンを指します。 +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### 私のサブグラフのキュレーションは、サブグラフと一緒に移動しますか? +### Will my Subgraph's curation move with my Subgraph? -自動移行信号を選択した場合、あなたのキュレーションの100%はサブグラフと一緒にArbitrum Oneに移行します。サブグラフのすべてのキュレーション信号は、転送時にGRTに変換され、あなたのキュレーション信号に対応するGRTがL2サブグラフ上で信号を発行するために使用されます。 +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -他のキュレーターは、自分の一部のGRTを引き出すか、それをL2に転送して同じサブグラフで信号を発行するかを選択できます。 +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### 転送後にサブグラフをEthereumメインネットに戻すことはできますか? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? 
-一度転送されると、Ethereumメインネットワークのサブグラフバージョンは非推奨とされます。メインネットワークに戻りたい場合、再デプロイしてメインネットワークに再度公開する必要があります。ただし、Ethereumメインネットワークに戻すことは強く勧められていません。なぜなら、将来的にはインデクシングリワードが完全にArbitrum Oneで分配されるためです。 +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### なぜ転送を完了するためにブリッジされたETHが必要なのですか? @@ -206,19 +206,19 @@ Indexerに連絡できる場合、彼らにL2トランスファーツールを \*必要な場合 - つまり、契約アドレスを使用している場合。 -### 私がキュレーションしたサブグラフが L2 に移動したかどうかはどうすればわかりますか? +### How will I know if the Subgraph I curated has moved to L2? -サブグラフの詳細ページを表示すると、このサブグラフが転送されたことを通知するバナーが表示されます。バナーに従ってキュレーションを転送できます。また、移動したサブグラフの詳細ページでもこの情報を見つけることができます。 +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### 自分のキュレーションを L2 に移動したくない場合はどうすればよいですか? -サブグラフが非推奨になった場合、信号を引き出すオプションがあります。同様に、サブグラフがL2に移動した場合、Ethereumメインネットワークで信号を引き出すか、L2に送信することを選択できます。 +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### 私のキュレーションが正常に転送されたことを確認するにはどうすればよいですか? L2トランスファーツールを開始してから約20分後、Explorerを介して信号の詳細にアクセスできるようになります。 -### 一度に複数のサブグラフへキュレーションを転送することはできますか? +### Can I transfer my curation on more than one Subgraph at a time? 現時点では一括転送オプションは提供されていません。 @@ -266,7 +266,7 @@ L2トランスファーツールがステークの転送を完了するのに約 ### 株式を譲渡する前に、Arbitrum でインデックスを作成する必要がありますか? 
-インデクシングのセットアップよりも先にステークを効果的に転送できますが、L2でのサブグラフへの割り当て、それらのサブグラフのインデクシング、およびPOIの提出を行うまで、L2での報酬を請求することはできません。 +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### 委任者は、インデックス作成の賭け金を移動する前に委任を移動できますか? diff --git a/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-guide.mdx index b77261989131..bc10b94ac149 100644 --- a/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ title: L2 転送ツールガイド Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## サブグラフをアービトラムに転送する方法 (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## サブグラフを転送する利点 +## Benefits of transferring your Subgraphs グラフのコミュニティとコア開発者は、過去1年間、Arbitrumに移行する準備をしてきました(https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)。レイヤー2または「L2」ブロックチェーンであるアービトラムは、イーサリアムからセキュリティを継承しますが、ガス料金を大幅に削減します。 -サブグラフをThe Graph Networkに公開またはアップグレードする際には、プロトコル上のスマートコントラクトとやり取りするため、ETHを使用してガスを支払う必要があります。サブグラフをArbitrumに移動することで、将来のサブグラフのアップデートにかかるガス料金が大幅に削減されます。低い手数料と、L2のキュレーションボンディングカーブがフラットであるという点も、他のキュレーターがあなたのサブグラフをキュレーションしやすくし、サブグラフのインデクサーへの報酬を増加させます。この低コストな環境は、インデクサーがサブグラフをインデックス化して提供するコストも削減します。アービトラム上のインデックス報酬は今後数か月間で増加し、Ethereumメインネット上では減少する予定です。そのため、ますます多くのインデクサーがステークを転送し、L2での運用を設定していくことになるでしょう。 +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. 
The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## シグナル、L1サブグラフ、クエリURLで何が起こるかを理解する +## Understanding what happens with signal, your L1 Subgraph and query URLs -サブグラフをアービトラムに転送するには、アービトラムGRTブリッジが使用され、アービトラムGRTブリッジはネイティブアービトラムブリッジを使用してサブグラフをL2に送信します。「転送」はメインネット上のサブグラフを非推奨にし、ブリッジを使用してL2上のサブグラフを再作成するための情報を送信します。また、サブグラフ所有者のシグナル GRT も含まれ、ブリッジが転送を受け入れるには 0 より大きくなければなりません。 +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -サブグラフの転送を選択すると、サブグラフのすべてのキュレーション信号がGRTに変換されます。これは、メインネットのサブグラフを「非推奨」にすることと同じです。キュレーションに対応するGRTはサブグラフとともにL2に送信され、そこであなたに代わってシグナルを作成するために使用されます。 +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -他のキュレーターは、GRTの分数を引き出すか、同じサブグラフでシグナルをミントするためにL2に転送するかを選択できます。サブグラフの所有者がサブグラフをL2に転送せず、コントラクトコールを介して手動で非推奨にした場合、キュレーターに通知され、キュレーションを取り消すことができます。 +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. 
If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -サブグラフが転送されるとすぐに、すべてのキュレーションがGRTに変換されるため、インデクサーはサブグラフのインデックス作成に対する報酬を受け取らなくなります。ただし、1) 転送されたサブグラフを24時間提供し続け、2) L2でサブグラフのインデックス作成をすぐに開始するインデクサーがあります。これらのインデクサーには既にサブグラフのインデックスが作成されているため、サブグラフが同期するのを待つ必要はなく、ほぼ即座にL2サブグラフを照会できます。 +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -L2 サブグラフへのクエリは別の URL (「arbitrum-gateway.thegraph.com」) に対して実行する必要がありますが、L1 URL は少なくとも 48 時間は機能し続けます。その後、L1ゲートウェイはクエリをL2ゲートウェイに転送しますが(しばらくの間)、これにより遅延が増えるため、できるだけ早くすべてのクエリを新しいURLに切り替えることをお勧めします。 +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## L2ウォレットの選択 -メインネットでサブグラフを公開したときに、接続されたウォレットを使用してサブグラフを作成し、このウォレットはこのサブグラフを表すNFTを所有し、更新を公開できます。 +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -サブグラフをアービトラムに転送する場合、L2でこのサブグラフNFTを所有する別のウォレットを選択できます。 +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. 
MetaMaskのような "通常の" ウォレット(外部所有アカウントまたはEOA、つまりスマートコントラクトではないウォレット)を使用している場合、これはオプションであり、L1と同じ所有者アドレスを保持することをお勧めします。 -マルチシグ(Safeなど)などのスマートコントラクトウォレットを使用している場合、このアカウントはメインネットにのみ存在し、このウォレットを使用してアービトラムで取引を行うことができない可能性が高いため、別のL2ウォレットアドレスを選択する必要があります。スマートコントラクトウォレットまたはマルチシグを使い続けたい場合は、Arbitrumで新しいウォレットを作成し、そのアドレスをサブグラフのL2所有者として使用します。 +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -\*\*あなたが管理し、アービトラムで取引を行うことができるウォレットアドレスを使用することは非常に重要です。そうしないと、サブグラフが失われ、復元できません。 +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## 転送の準備: 一部のETHのブリッジング -サブグラフを転送するには、ブリッジを介してトランザクションを送信し、その後アービトラム上で別のトランザクションを実行する必要があります。最初のトランザクションでは、メインネット上のETHを使用し、L2でメッセージが受信される際にガスを支払うためにいくらかのETHが含まれています。ただし、このガスが不足している場合、トランザクションを再試行し、L2で直接ガスを支払う必要があります(これが下記の「ステップ3:転送の確認」です)。このステップは、転送を開始してから7日以内に実行する必要があります。さらに、2つ目のトランザクション(「ステップ4:L2での転送の完了」)は、直接アービトラム上で行われます。これらの理由から、アービトラムウォレットに一定のETHが必要です。マルチシグまたはスマートコントラクトアカウントを使用している場合、ETHはトランザクションを実行するために使用している通常の個人のウォレット(EOAウォレット)にある必要があり、マルチシグウォレットそのものにはないことに注意してください +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. 
Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. 一部の取引所でETHを購入してアービトラムに直接引き出すか、アービトラムブリッジを使用してメインネットウォレットからL2にETHを送信することができます:[bridge.arbitrum.io](http://bridge.arbitrum.io)。アービトラムのガス料金は安いので、必要なのは少量だけです。トランザクションが承認されるには、低いしきい値(0.01 ETHなど)から始めることをお勧めします。 -## サブグラフ転送ツールの検索 +## Finding the Subgraph Transfer Tool -L2転送ツールは、サブグラフスタジオでサブグラフのページを見ているときに見つけることができます。 +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -サブグラフを所有するウォレットに接続している場合は、エクスプローラーとエクスプローラーのそのサブグラフのページでも入手できます。 +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ ## ステップ1: 転送を開始する -転送を開始する前に、どのアドレスがL2のサブグラフを所有するかを決定する必要があり(上記の「L2ウォレットの選択」を参照)、ガス用のETHをアービトラムにすでにブリッジすることを強くお勧めします(上記の「転送の準備: ETHのブリッジング」を参照)。 +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -また、サブグラフを転送するには、サブグラフを所有するのと同じアカウントを持つサブグラフにゼロ以外の量のシグナルが必要であることに注意してください。サブグラフでシグナルを出していない場合は、少しキュレーションを追加する必要があります(1 GRTのような少量を追加するだけで十分です)。 +Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-「Transfer Tool」を開いた後、L2ウォレットアドレスを「受信ウォレットアドレス」フィールドに入力できるようになります。ここで正しいアドレスを入力していることを確認してください。「Transfer Subgraph」をクリックすると、ウォレット上でトランザクションを実行するよう求められます(注意:L2ガスの支払いに十分なETHの価値が含まれています)。これにより、トランスファーが開始され、L1サブグラフが廃止されます(詳細については、「背後で何が起こるか:シグナル、L1サブグラフ、およびクエリURLの理解」を参照してください)。 +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -このステップを実行する場合は、\*\*7日以内にステップ3を完了するまで続行してください。そうしないと、サブグラフとシグナルGRTが失われます。 これは、L1-L2メッセージングがアービトラムでどのように機能するかによるものです: ブリッジを介して送信されるメッセージは、7日以内に実行する必要がある「再試行可能なチケット」であり、アービトラムのガス価格に急上昇がある場合は、最初の実行で再試行が必要になる場合があります。 +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## ステップ2: サブグラフがL2に到達するのを待つ +## Step 2: Waiting for the Subgraph to get to L2 -転送を開始した後、L1サブグラフをL2に送信するメッセージは、アービトラムブリッジを介して伝播する必要があります。これには約20分かかります(ブリッジは、トランザクションを含むメインネットブロックが潜在的なチェーン再編成から「安全」になるまで待機します)。 +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). 
この待機時間が終了すると、アービトラムはL2契約の転送の自動実行を試みます。 @@ -80,7 +80,7 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ ## ステップ3: 転送の確認 -ほとんどの場合、ステップ1に含まれるL2ガスは、アービトラム契約のサブグラフを受け取るトランザクションを実行するのに十分であるため、このステップは自動実行されます。ただし、場合によっては、アービトラムのガス価格の急騰により、この自動実行が失敗する可能性があります。この場合、サブグラフをL2に送信する「チケット」は保留中であり、7日以内に再試行する必要があります。 +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. この場合、アービトラムにETHがあるL2ウォレットを使用して接続し、ウォレットネットワークをアービトラムに切り替え、[転送の確認] をクリックしてトランザクションを再試行する必要があります。 @@ -88,33 +88,33 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ ## ステップ4: L2での転送の完了 -この時点で、サブグラフとGRTはアービトラムで受信されましたが、サブグラフはまだ公開されていません。受信ウォレットとして選択したL2ウォレットを使用して接続し、ウォレットネットワークをArbitrumに切り替えて、[サブグラフの公開] をクリックする必要があります。 +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -これにより、アービトラムで動作しているインデクサーがサブグラフの提供を開始できるように、サブグラフが公開されます。また、L1から転送されたGRTを使用してキュレーションシグナルをミントします。 +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## ステップ 5: クエリ URL の更新 -サブグラフがアービトラムに正常に転送されました! サブグラフを照会するには、新しい URL は次のようになります: +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -アービトラム上のサブグラフIDは、メインネット上でのものとは異なることに注意してください。ただし、エクスプローラやスタジオ上で常にそのIDを見つけることができます(詳細は「シグナル、L1サブグラフ、およびクエリURLの動作理解」を参照)。前述のように、古いL1 URLはしばらくの間サポートされますが、サブグラフがL2上で同期されたらすぐに新しいアドレスにクエリを切り替える必要があります。 +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## キュレーションをアービトラム(L2) に転送する方法 -## L2へのサブグラフ転送のキュレーションに何が起こるかを理解する +## Understanding what happens to curation on Subgraph transfers to L2 -サブグラフの所有者がサブグラフをアービトラムに転送すると、サブグラフのすべての信号が同時にGRTに変換されます。これは、「自動移行」シグナル、つまりサブグラフのバージョンまたはデプロイに固有ではないが、サブグラフの最新バージョンに従うシグナルに適用されます。 +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -このシグナルからGRTへの変換は、サブグラフのオーナーがL1でサブグラフを非推奨にした場合と同じです。サブグラフが非推奨化または移管されると、すべてのキュレーションシグナルは同時に(キュレーションボンディングカーブを使用して)「燃やされ」、その結果得られるGRTはGNSスマートコントラクトに保持されます(これはサブグラフのアップグレードと自動移行されるシグナルを処理するコントラクトです)。そのため、そのサブグラフの各キュレーターは、所持していたシェアの量に比例したGRTの請求権を持っています。 +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph on L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal).
Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -サブグラフの所有者に対応するこれらの GRT の一部は、サブグラフとともに L2 に送信されます。 +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -この時点では、キュレートされたGRTはこれ以上のクエリ手数料を蓄積しません。したがって、キュレーターは自分のGRTを引き出すか、それをL2上の同じサブグラフに移動して新しいキュレーションシグナルを作成するために使用することができます。いつ行うかに関わらず、GRTは無期限に保持でき、すべての人が自分のシェアに比例した額を受け取ることができるため、急ぐ必要はありません。 +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## L2ウォレットの選択 @@ -130,9 +130,9 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ 転送を開始する前に、L2上でキュレーションを所有するアドレスを決定する必要があります(上記の「L2ウォレットの選択」を参照)。また、L2でメッセージの実行を再試行する必要がある場合に備えて、ガスのためにすでにArbitrumにブリッジされたいくらかのETHを持つことをお勧めします。ETHをいくつかの取引所で購入し、それを直接Arbitrumに引き出すことができます。または、Arbitrumブリッジを使用して、メインネットのウォレットからL2にETHを送信することもできます: [bridge.arbitrum.io](http://bridge.arbitrum.io)。Arbitrumのガス料金が非常に低いため、0.01 ETHなどの少額で十分です。 -もしキュレーションしているサブグラフがL2に移行された場合、エクスプローラ上でそのサブグラフが移行されたことを示すメッセージが表示されます。 +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -サブグラフのページを表示する際に、キュレーションを引き出すか、移行するかを選択できます。"Transfer Signal to Arbitrum" をクリックすると、移行ツールが開きます。 +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.
![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ ## L1 でキュレーションを取り消す -GRT を L2 に送信したくない場合、または GRT を手動でブリッジしたい場合は、L1 でキュレーションされた GRT を取り消すことができます。サブグラフページのバナーで、「シグナルの引き出し」を選択し、トランザクションを確認します。GRTはあなたのキュレーターアドレスに送信されます。 +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/ja/archived/sunrise.mdx b/website/src/pages/ja/archived/sunrise.mdx index eac51559a724..e53b28b20016 100644 --- a/website/src/pages/ja/archived/sunrise.mdx +++ b/website/src/pages/ja/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. 
Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? 
-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### なぜEdge & Nodeはアップグレード・インデクサーを実行しているのか? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. 
As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### これはデリゲーターにとって何を意味するのか? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. 
Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
diff --git a/website/src/pages/ja/contracts.json b/website/src/pages/ja/contracts.json index 7222da23adc6..be2eb06ea51f 100644 --- a/website/src/pages/ja/contracts.json +++ b/website/src/pages/ja/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "コントラクト", "address": "住所" } diff --git a/website/src/pages/ja/global.json b/website/src/pages/ja/global.json index 6326992e205b..c14d6185adb2 100644 --- a/website/src/pages/ja/global.json +++ b/website/src/pages/ja/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "メインナビゲーション", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "ナビゲーションを表示する", + "hide": "ナビゲーションを隠す", "subgraphs": "サブグラフ", "substreams": "サブストリーム", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "sps": "サブストリームを用いたサブグラフ", + "tokenApi": "Token API", + "indexing": "インデクシング", + "resources": "リソース", + "archived": "アーカイブ" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "最終更新", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "所要時間", + "minutes": "分" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "前のページ", + "next": "次のページ", + "edit": "GitHubで編集する", + "onThisPage": "このページでは", + "tableOfContents": "目次", + "linkToThisSection": "このセクションへのリンク" }, "content": { - "note": "Note", - "video": "Video" + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, + "video": "ビデオ" + }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "説明書き", + "value": "Value", + "required": "Required", + 
"deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "ステータス", + "description": "説明書き", + "liveResponse": "Live Response", + "example": "例" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "おっと!このページは宇宙で失われた...", + "subtitle": "正しいURLを使用しているかどうかを確認するか、以下のリンクをクリックして当社のウェブサイトを探索してください。", + "back": "ホームへ" } } diff --git a/website/src/pages/ja/index.json b/website/src/pages/ja/index.json index adc03e4b959a..2034192e0089 100644 --- a/website/src/pages/ja/index.json +++ b/website/src/pages/ja/index.json @@ -1,50 +1,50 @@ { "title": "Home", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "The Graphのドキュメント", + "description": "ブロックチェーンデータを抽出、変換、読み込み可能なツールを用いて、あなたのWeb3プロジェクトを開始しましょう。", + "cta1": "The Graphの仕組み", + "cta2": "最初のサブグラフを作る" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "ニーズに合ったソリューションを選択し、ブロックチェーンデータを活用してみましょう。", "subgraphs": { "title": "サブグラフ", - "description": "Extract, process, and query 
blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "オープンAPIでブロックチェーンデータを抽出、処理、照会しましょう。", + "cta": "サブグラフを作成する" }, "substreams": { "title": "サブストリーム", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "並列実行でブロックチェーンのデータを取得し、使用できます。", + "cta": "サブストリームを使用する" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "サブストリームを用いたサブグラフ", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", + "cta": "サブストリームを用いたサブグラフの設定を行う" }, "graphNode": { "title": "グラフノード", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "description": "ブロックチェーンのデータをインデックスし、GraphQLクエリで提供します。", + "cta": "ローカルでのGraph Nodeのセットアップを行う" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "ブロックチェーンデータをフラットファイルに抽出し、同期時間とストリーミング機能を向上させます。", + "cta": "Firehoseを使う" } }, "supportedNetworks": { "title": "サポートされているネットワーク", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "タイプ", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "ドキュメント", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -54,9 +54,9 @@ "infoText": "Boost your developer experience by enabling The Graph's indexing network.", "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. 
To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graphは{0}をサポートしています。新しいネットワークを追加するには{1}。", + "networks": "ネットワーク", + "completeThisForm": "フォームを記入する" }, "emptySearch": { "title": "No networks found", @@ -65,10 +65,10 @@ "showTestnets": "Show testnets" }, "tableHeaders": { - "name": "Name", + "name": "名称", "id": "ID", - "subgraphs": "Subgraphs", - "substreams": "Substreams", + "subgraphs": "サブグラフ", + "substreams": "サブストリーム", "firehose": "Firehose", "tokenapi": "Token API" } @@ -80,7 +80,7 @@ "description": "Kickstart your journey into subgraph development." }, "substreams": { - "title": "Substreams", + "title": "サブストリーム", "description": "Stream high-speed data for real-time indexing." }, "timeseries": { @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "請求書", "description": "Optimize costs and manage billing efficiently." } }, @@ -120,56 +120,56 @@ } }, "guides": { - "title": "Guides", + "title": "ガイド", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "グラフエクスプローラでデータを検索", + "description": "既存のブロックチェーンデータの何百ものパブリックサブグラフを活用。" }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "サブグラフを公開する", + "description": "サブグラフを分散型ネットワークに追加する。" }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "サブストリームの公開", + "description": "サブストリームパッケージをサブストリームレジストリに公開する。" }, "queryingBestPractices": { "title": "クエリのベストプラクティス", - "description": "Optimize your subgraph queries for faster, better results." 
+ "description": "より速く、より良い結果を得るために、サブグラフのクエリを最適化します。" }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "最適化された時系列と集計", + "description": "効率化のためにサブグラフをスリム化する。" }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "APIキー管理", + "description": "サブグラフのAPIキーを簡単に作成、管理、保護できます。" }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "The Graphに移行する", + "description": "どのプラットフォームからでもシームレスにサブグラフをアップグレードできます。" } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "ビデオ・チュートリアル", + "watchOnYouTube": "YouTubeで見る", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "1分でわかるThe Graph", + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "title": "委任(デリゲーション)とは何か?", + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." 
+ "title": "サブストリームを用いたサブグラフでSolanaをインデックスする方法", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { - "reading": "Reading time", - "duration": "Duration", - "minutes": "min" + "reading": "所要時間", + "duration": "期間", + "minutes": "分" } } diff --git a/website/src/pages/ja/indexing/_meta-titles.json b/website/src/pages/ja/indexing/_meta-titles.json index 42f4de188fd4..a258ebae5ba6 100644 --- a/website/src/pages/ja/indexing/_meta-titles.json +++ b/website/src/pages/ja/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "インデクサーツール" } diff --git a/website/src/pages/ja/indexing/chain-integration-overview.mdx b/website/src/pages/ja/indexing/chain-integration-overview.mdx index c9349b7a24e5..4b996d3ddfb4 100644 --- a/website/src/pages/ja/indexing/chain-integration-overview.mdx +++ b/website/src/pages/ja/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ The Graph Network の未来を形作る準備はできていますか? [Start yo ### 2. ネットワークがメインネットでサポートされた後に Firehose とサブストリームのサポートが追加された場合はどうなりますか? -これは、サブストリームで動作するサブグラフに対するインデックスリワードのプロトコルサポートに影響を与えるものです。新しいFirehoseの実装は、このGIPのステージ2に概説されている方法論に従って、テストネットでテストされる必要があります。同様に、実装がパフォーマンスが良く信頼性があると仮定して、[Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md)へのPR(「Substreamsデータソース」サブグラフ機能)が必要です。また、インデックスリワードのプロトコルサポートに関する新しいGIPも必要です。誰でもPRとGIPを作成できますが、Foundationは評議会の承認をサポートします。 +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/ja/indexing/new-chain-integration.mdx b/website/src/pages/ja/indexing/new-chain-integration.mdx index decdf0266d65..f6fa2b643fc3 100644 --- a/website/src/pages/ja/indexing/new-chain-integration.mdx +++ b/website/src/pages/ja/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. 
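Both integration strategies above start from the chain's data APIs. As a rough illustration (not part of this patch) of what the EVM JSON-RPC path has to serve to `graph-node`, the sketch below builds two typical JSON-RPC 2.0 request payloads; the specific method choices are assumptions for illustration, not an exhaustive list of what an integration must support:

```python
import json

# Illustrative only: the shape of JSON-RPC 2.0 requests an EVM endpoint
# receives during indexing (block headers and event logs).
def rpc_payload(method: str, params: list, request_id: int = 1) -> str:
    """Serialize a single JSON-RPC 2.0 request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

# Fetch the latest block header (False = transaction hashes only, no bodies).
block_req = rpc_payload("eth_getBlockByNumber", ["latest", False])

# Fetch the logs emitted in a block range (here a single-block range).
logs_req = rpc_payload("eth_getLogs", [{"fromBlock": "0x0", "toBlock": "0x0"}])

print(block_req)
print(logs_req)
```

A Firehose integration replaces this request/response pattern with a single push-based stream, which is where the 90% reduction in RPC calls mentioned below comes from.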
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. 
(It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph Node の設定 -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). 
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/ja/indexing/overview.mdx b/website/src/pages/ja/indexing/overview.mdx index f952fafb882b..25b94c36ca88 100644 --- a/website/src/pages/ja/indexing/overview.mdx +++ b/website/src/pages/ja/indexing/overview.mdx @@ -7,7 +7,7 @@ sidebarTitle: 概要 プロトコルにステークされた GRT は解凍期間が設けられており、インデクサーが悪意を持ってアプリケーションに不正なデータを提供したり、不正なインデックスを作成した場合には、スラッシュされる可能性があります。 また、インデクサーはデリゲーターからステークによる委任を受けて、ネットワークに貢献することができます。 -インデクサ − は、サブグラフのキュレーション・シグナルに基づいてインデックスを作成するサブグラフを選択し、キュレーターは、どのサブグラフが高品質で優先されるべきかを示すために GRT をステークします。 消費者(アプリケーションなど)は、インデクサーが自分のサブグラフに対するクエリを処理するパラメータを設定したり、クエリフィーの設定を行うこともできます。 +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. 
They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. 
A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. 
+- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. 
+- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. 
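As a hedged sketch of how these components are wired together, a `graph-node` TOML config along the following lines declares the PostgreSQL store and an RPC provider with the client capabilities mentioned above. All connection strings, labels, and node names below are placeholders, not values from this patch:

```toml
# Hypothetical graph-node config sketch — adjust every value to your environment.
[store]
[store.primary]
connection = "postgresql://graph:password@localhost:5432/graph-node"
pool_size = 10

[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
# "archive" and "traces" advertise the capabilities certain Subgraphs require.
provider = [
  { label = "mainnet-rpc", url = "http://localhost:8545", features = ["archive", "traces"] }
]

[deployment]
[[deployment.rule]]
shard = "primary"
indexers = ["index_node_0"]
```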
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### グラフノード -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. 
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
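The threshold comparison described above can be sketched as follows. This is a hypothetical Python illustration of the "any non-null threshold" logic only; the field names mirror the rule fields from the text, and it is not the agent's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexingRule:
    # Non-null thresholds are compared against values fetched from the network.
    min_stake: Optional[float] = None               # minStake, in GRT
    min_signal: Optional[float] = None              # minSignal, in GRT
    max_signal: Optional[float] = None              # maxSignal, in GRT
    min_average_query_fees: Optional[float] = None  # minAverageQueryFees, in GRT

def should_index(rule: IndexingRule, stake: float, signal: float, avg_fees: float) -> bool:
    """Choose the deployment if any non-null threshold is satisfied."""
    checks = []
    if rule.min_stake is not None:
        checks.append(stake > rule.min_stake)
    if rule.min_signal is not None:
        checks.append(signal > rule.min_signal)
    if rule.max_signal is not None:
        checks.append(signal < rule.max_signal)
    if rule.min_average_query_fees is not None:
        checks.append(avg_fees > rule.min_average_query_fees)
    return any(checks)

# The example from the text: a global rule with a minStake of 5 GRT chooses any
# deployment with more than 5 GRT of stake allocated to it.
global_rule = IndexingRule(min_stake=5)
```

A deployment with 6 GRT allocated would be chosen under this rule; one with exactly 5 GRT would not, since the text requires "more than" the threshold.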
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing non-deterministically. diff --git a/website/src/pages/ja/indexing/supported-network-requirements.mdx b/website/src/pages/ja/indexing/supported-network-requirements.mdx index 6aa0c0caa16f..aa62090ef8aa 100644 --- a/website/src/pages/ja/indexing/supported-network-requirements.mdx +++ b/website/src/pages/ja/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/ja/indexing/tap.mdx b/website/src/pages/ja/indexing/tap.mdx index b1d43a4e628c..61a0f77343c3 100644 --- a/website/src/pages/ja/indexing/tap.mdx +++ b/website/src/pages/ja/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## 概要 -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### 要件 +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/ja/indexing/tooling/graph-node.mdx b/website/src/pages/ja/indexing/tooling/graph-node.mdx index 604095157886..dfbb9aeea657 100644 --- a/website/src/pages/ja/indexing/tooling/graph-node.mdx +++ b/website/src/pages/ja/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: グラフノード --- -グラフノードはサブグラフのインデックスを作成し、得られたデータをGraphQL API経由でクエリできるようにするコンポーネントです。そのため、インデクサースタックの中心的存在であり、グラフノードの正しい動作はインデクサーを成功させるために非常に重要です。 +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## グラフノード -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). 
### PostgreSQLデータベース -グラフノードのメインストアで、サブグラフデータ、サブグラフに関するメタデータ、ブロックキャッシュやeth_callキャッシュなどのサブグラフに依存しないネットワークデータが格納されます。 +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### ネットワーククライアント ネットワークにインデックスを付けるために、グラフ ノードは EVM 互換の JSON-RPC API を介してネットワーク クライアントにアクセスする必要があります。この RPC は単一のクライアントに接続する場合もあれば、複数のクライアントに負荷を分散するより複雑なセットアップになる場合もあります。 -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). **Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). 
### IPFSノード -IPFS ノード(バージョン 未満) - サブグラフのデプロイメタデータは IPFS ネットワーク上に保存されます。 グラフノードは、サブグラフのデプロイ時に主に IPFS ノードにアクセスし、サブグラフマニフェストと全てのリンクファイルを取得します。 ネットワーク・インデクサーは独自の IPFS ノードをホストする必要はありません。 ネットワーク用の IPFS ノードは、https://ipfs.network.thegraph.com でホストされています。 +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus メトリクスサーバー @@ -79,8 +79,8 @@ A complete Kubernetes example configuration can be found in the [indexer reposit | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ A complete Kubernetes example configuration can be found in the [indexer reposit ## グラフノードの高度な設定 -最も単純な場合、Graph Node は、Graph Node の単一のインスタンス、単一の PostgreSQL データベース、IPFS ノード、およびサブグラフのインデックス作成に必要なネットワーク クライアントで操作できます。 +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### 複数のグラフノード -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > なお、複数のGraph Nodeはすべて同じデータベースを使用するように設定することができ、Shardingによって水平方向に拡張することができます。 #### デプロイメントルール -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. 
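The rule matching described above can be sketched as follows. This is a hypothetical Python illustration assuming rules are tried in the order they appear in `config.toml` and the first match wins; a rule with no `match` block matches any Subgraph (the real logic lives in graph-node):

```python
import re
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DeploymentRule:
    name_pattern: Optional[str] = None    # regex matched against the Subgraph name
    networks: Optional[List[str]] = None  # networks the rule applies to
    shards: List[str] = field(default_factory=lambda: ["primary"])
    indexers: List[str] = field(default_factory=list)

    def matches(self, name: str, network: str) -> bool:
        # A rule with no match criteria matches any Subgraph.
        if self.name_pattern is not None and not re.search(self.name_pattern, name):
            return False
        if self.networks is not None and network not in self.networks:
            return False
        return True

def place(rules: List[DeploymentRule], name: str, network: str) -> DeploymentRule:
    """Return the first rule that matches the Subgraph name and network."""
    for rule in rules:
        if rule.matches(name, network):
            return rule
    raise ValueError("no deployment rule matched")

# Mirrors the shape of the config example: a network-specific rule,
# then a catch-all with no match criteria.
rules = [
    DeploymentRule(networks=["xdai", "poa-core"], indexers=["index_node_other_0"]),
    DeploymentRule(shards=["sharda", "shardb"], indexers=["index_node_community_0"]),
]
```

Ordering matters under this first-match assumption: the catch-all rule must come last, or it would absorb every deployment.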
デプロイメントルールの設定例: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ query = "" ほとんどの場合、1つのPostgresデータベースでグラフノードインスタンスをサポートするのに十分です。グラフノードインスタンスが1つのPostgresデータベースを使い切った場合、グラフノードデータを複数のPostgresデータベースに分割して保存することが可能です。全てのデータベースが一緒になってグラフノードインスタンスのストアを形成します。個々のデータベースはシャード(shard)と呼ばれます。 -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. グラフノードの負荷に既存のデータベースが追いつかず、これ以上データベースサイズを大きくすることができない場合に、シャーディングが有効になります。 -> 一般的には、シャードを作成する前に、単一のデータベースを可能な限り大きくすることをお勧めします。例外は、クエリのトラフィックがサブグラフ間で非常に不均一に分割される場合です。このような状況では、ボリュームの大きいサブグラフを1つのシャードに、それ以外を別のシャードに保存すると劇的に効果があります。この設定により、ボリュームの大きいサブグラフのデータがdb内部キャッシュに残り、ボリュームの小さいサブグラフからそれほど必要とされていないデータに置き換えられる可能性が少なくなるためです。 +It is generally better to make a single database as big as possible before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. 接続の設定に関しては、まずpostgresql.confのmax_connectionsを400(あるいは200)に設定し、store_connection_wait_time_msとstore_connection_checkout_count Prometheusメトリクスを見てみてください。顕著な待ち時間(5ms以上)は、利用可能な接続が少なすぎることを示しています。高い待ち時間は、データベースが非常に忙しいこと(CPU負荷が高いなど)によっても引き起こされます。しかし、データベースが安定しているようであれば、待ち時間が長いのは接続数を増やす必要があることを示しています。設定上、各グラフノードインスタンスが使用できるコネクション数は上限であり、グラフノードは必要ないコネクションはオープンにしておきません。 @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### 複数のネットワークに対応 -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - 複数のネットワーク - ネットワークごとに複数のプロバイダ(プロバイダ間で負荷を分割することができ、また、フルノードとアーカイブノードを構成することができ、作業負荷が許す限り、Graph Nodeはより安価なプロバイダを優先することができます)。 @@ -225,11 +225,11 @@ Graph Node supports a range of environment variables which can enable features, ### グラフノードの管理 -グラフノードが動作している場合、それらのノードに展開されたサブグラフを管理することが課題となります。グラフノードは、サブグラフを管理するための様々なツールを提供します。 +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. 
#### ロギング -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### サブグラフの操作 +### Working with Subgraphs #### インデックスステータスAPI -デフォルトではポート8030/graphqlで利用可能なindexing status APIは、異なるサブグラフのindexing statusのチェック、indexing proofのチェック、サブグラフの特徴の検査など、様々なメソッドを公開しています。 +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ - 適切なハンドラで順番にイベントを処理する(これには、状態のためにチェーンを呼び出したり、ストアからデータを取得したりすることが含まれます)。 - 出来上がったデータをストアに書き込む -これらのステージはパイプライン化されていますが(つまり、並列に実行することができます)、互いに依存し合っています。サブグラフのインデックス作成に時間がかかる場合、その根本的な原因は、特定のサブグラフに依存します。 +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. 
Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. インデックス作成が遅くなる一般的な原因: @@ -276,24 +276,24 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ - プロバイダー自体がチェーンヘッドに遅れる場合 - チェーンヘッドでプロバイダーから新しいレシートを取得する際の遅延 -サブグラフのインデックス作成指標は、インデックス作成の遅さの根本的な原因を診断するのに役立ちます。あるケースでは、問題はサブグラフ自体にありますが、他のケースでは、ネットワークプロバイダーの改善、データベースの競合の減少、その他の構成の改善により、インデックス作成性能を著しく向上させることができます。 +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### 失敗したサブグラフ +#### Failed Subgraphs -インデックス作成中、サブグラフは予期しないデータに遭遇したり、あるコンポーネントが期待通りに動作しなかったり、イベントハンドラや設定に何らかのバグがあったりすると、失敗することがあります。失敗には一般に2つのタイプがあります。 +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - 決定論的失敗:再試行では解決できない失敗 - 非決定論的失敗:プロバイダの問題や、予期しないグラフノードのエラーに起因する可能性があります。非決定論的失敗が発生すると、グラフノードは失敗したハンドラを再試行し、時間をかけて後退させます。 -いくつかのケースでは、失敗はインデクサーによって解決できるかもしれません(例えば、エラーが正しい種類のプロバイダを持っていない結果である場合、必要なプロバイダを追加することでインデックス作成を継続することが可能になります)。しかし、サブグラフのコードを変更する必要がある場合もあります。 +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. 
In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### ブロックキャッシュとコールキャッシュ -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
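As noted above, Graph Node retries handlers after a non-deterministic failure, backing off over time. The shape of such a back-off can be sketched as follows; this is a hypothetical Python illustration, and the actual intervals and caps used by graph-node are not specified here:

```python
import itertools

def backoff_delays(base_secs: float = 1.0, factor: float = 2.0, cap_secs: float = 3600.0):
    """Yield exponentially growing retry delays, capped at cap_secs."""
    delay = base_secs
    while True:
        yield min(delay, cap_secs)
        delay *= factor

# First five retry delays for a failing handler under this sketch.
delays = list(itertools.islice(backoff_delays(), 5))
```

The cap keeps a long-failing deployment from being retried ever more rarely without bound, while the exponential growth avoids hammering an unhealthy provider.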
TX受信欠落イベントなど、ブロックキャッシュの不整合が疑われる場合。 @@ -304,7 +304,7 @@ TX受信欠落イベントなど、ブロックキャッシュの不整合が疑 #### 問題やエラーのクエリ -サブグラフがインデックス化されると、インデクサはサブグラフの専用クエリエントポイントを介してクエリを提供することが期待できます。もしインデクサがかなりの量のクエリを提供することを望むなら、専用のクエリノードを推奨します。また、クエリ量が非常に多い場合、インデクサーはレプリカシャードを構成して、クエリがインデックス作成プロセスに影響を与えないようにしたいと思うかもしれません。 +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. ただし、専用のクエリ ノードとレプリカを使用しても、特定のクエリの実行に時間がかかる場合があり、場合によってはメモリ使用量が増加し、他のユーザーのクエリ時間に悪影響を及ぼします。 @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### クエリの分析 -問題のあるクエリが表面化するのは、ほとんどの場合、次の2つの方法のどちらかです。あるケースでは、ユーザー自身があるクエリが遅いと報告します。この場合、一般的な問題なのか、そのサブグラフやクエリに固有の問題なのか、遅さの理由を診断することが課題となります。そしてもちろん、可能であればそれを解決することです。 +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. また、クエリノードでメモリ使用量が多いことが引き金になる場合もあり、その場合は、まず問題の原因となっているクエリを特定することが課題となります。 @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### サブグラフの削除 +#### Removing Subgraphs > これは新しい機能で、Graph Node 0.29.xで利用可能になる予定です。 -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). 
diff --git a/website/src/pages/ja/indexing/tooling/graphcast.mdx b/website/src/pages/ja/indexing/tooling/graphcast.mdx index b9d89010f922..0a1fe3e92964 100644 --- a/website/src/pages/ja/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ja/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ title: グラフキャスト Graphcast SDK (ソフトウェア開発キット) を使用すると、開発者はラジオを構築できます。これは、インデクサーが特定の目的を果たすために実行できる、ゴシップを利用したアプリケーションです。また、次のユースケースのために、いくつかのラジオを作成する (または、ラジオを作成したい他の開発者/チームにサポートを提供する) 予定です: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- サブグラフ、サブストリーム、および他のインデクサーからの Firehose データをワープ同期するためのオークションと調整の実施。 -- サブグラフのリクエスト量、料金の量などを含む、アクティブなクエリ分析に関する自己報告。 -- サブグラフのインデックス作成時間、ハンドラー ガスのコスト、発生したインデックス作成エラーなどを含む、インデックス作成分析に関する自己報告。 +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. 
- グラフノードのバージョン、Postgres のバージョン、Ethereum クライアントのバージョンなどを含むスタック情報の自己報告。 ### もっと詳しく知る diff --git a/website/src/pages/ja/resources/_meta-titles.json b/website/src/pages/ja/resources/_meta-titles.json index f5971e95a8f6..70aff3e769c5 100644 --- a/website/src/pages/ja/resources/_meta-titles.json +++ b/website/src/pages/ja/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "その他の役割", + "migration-guides": "移行ガイド" } diff --git a/website/src/pages/ja/resources/benefits.mdx b/website/src/pages/ja/resources/benefits.mdx index f3c7204743fb..ff45cbdff7c3 100644 --- a/website/src/pages/ja/resources/benefits.mdx +++ b/website/src/pages/ja/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -サブグラフ上のシグナルのキュレーションは、オプションで1回限り、ネットゼロのコストで可能です(例えば、$1,000のシグナルをサブグラフ上でキュレーションし、後で引き出すことができ、その過程でリターンを得る可能性があります)。 +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). 
## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ja/resources/glossary.mdx b/website/src/pages/ja/resources/glossary.mdx index c71697a009cf..6a602dd4c2d2 100644 --- a/website/src/pages/ja/resources/glossary.mdx +++ b/website/src/pages/ja/resources/glossary.mdx @@ -4,51 +4,51 @@ title: 用語集 - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. 
The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. 
For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. 
-- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. 
**Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. 
Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: 用語集 - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. 
-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/ja/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ja/resources/migration-guides/assemblyscript-migration-guide.mdx index 88e5aea91168..1c7252574879 100644 --- a/website/src/pages/ja/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ja/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript マイグレーションガイド --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -これにより、サブグラフの開発者は、AS 言語と標準ライブラリの新しい機能を使用できるようになります。 +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. 
If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## 特徴 @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## アップグレードの方法 -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -どちらを選択すべきか迷った場合は、常に安全なバージョンを使用することをお勧めします。 値が存在しない場合は、サブグラフハンドラの中で return を伴う初期の if 文を実行するとよいでしょう。 +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### 変数シャドウイング @@ -132,7 +132,7 @@ in assembly/index.ts(4,3) ### Null 比較 -サブグラフのアップグレードを行うと、時々以下のようなエラーが発生することがあります。 +When upgrading your Subgraph, you might sometimes get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -この件に関して、アセンブリ・スクリプト・コンパイラーに問題を提起しましたが、 今のところ、もしサブグラフ・マッピングでこの種の操作を行う場合には、 その前に NULL チェックを行うように変更してください。 +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. 
```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -これは、値が初期化されていないために起こります。したがって、次のようにサブグラフが値を初期化していることを確認してください。 +It will compile but break at runtime; this happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/ja/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ja/resources/migration-guides/graphql-validations-migration-guide.mdx index b004e14d9f98..e04e62a06e0f 100644 --- a/website/src/pages/ja/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ja/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: GraphQL 検証移行ガイド +title: GraphQL Validations Migration Guide --- まもなく「graph-node」は [GraphQL Validations 仕様](https://spec.graphql.org/June2018/#sec-Validation) を 100% カバーします。 @@ -20,7 +20,7 @@ GraphQL Validations サポートは、今後の新機能と The Graph Network CLI 移行ツールを使用して、GraphQL 操作の問題を見つけて修正できます。または、GraphQL クライアントのエンドポイントを更新して、`https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` エンドポイントを使用することもできます。このエンドポイントに対してクエリをテストすると、クエリの問題を見つけるのに役立ちます。 -> [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) または [GraphQL Code Generator](https://the-guild.dev) を使用している場合、すべてのサブグラフを移行する必要はありません。 /graphql/codegen)、クエリが有効であることを既に確認しています。 +> Not all Subgraphs will need to be migrated: if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. 
## 移行 CLI ツール diff --git a/website/src/pages/ja/resources/roles/curating.mdx b/website/src/pages/ja/resources/roles/curating.mdx index ff0ae8aced25..56560702df5c 100644 --- a/website/src/pages/ja/resources/roles/curating.mdx +++ b/website/src/pages/ja/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: キュレーティング --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. 
When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). 
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## シグナルの出し方 -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -キュレーターは、特定のサブグラフのバージョンでシグナルを出すことも、そのサブグラフの最新のプロダクションビルドに自動的にシグナルを移行させることも可能ですます。 どちらも有効な戦略であり、それぞれに長所と短所があります。 +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. 
シグナルを最新のプロダクションビルドに自動的に移行させることは、クエリー料金の発生を確実にするために有効です。 キュレーションを行うたびに、1%のキュレーション税が発生します。 また、移行ごとに 0.5%のキュレーション税を支払うことになります。 つまり、サブグラフの開発者が、頻繁に新バージョンを公開することは推奨されません。 自動移行された全てのキュレーションシェアに対して、0.5%のキュレーション税を支払わなければならないからです。 -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## リスク 1. The Graph では、クエリ市場は本質的に歴史が浅く、初期の市場ダイナミクスのために、あなたの%APY が予想より低くなるリスクがあります。 -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. 
(Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. サブグラフはバグで失敗することがあります。 失敗したサブグラフは、クエリフィーが発生しません。 結果的に、開発者がバグを修正して新しいバージョンを展開するまで待たなければならなくなります。 - - サブグラフの最新バージョンに加入している場合、シェアはその新バージョンに自動移行します。 これには 0.5%のキュレーション税がかかります。 - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. 
+ - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## キューレーション FAQ ### 1. キュレータはクエリフィーの何%を獲得できますか? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. シグナルを出すのに適した質の高いサブグラフはどのようにして決めるのですか? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. 
As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. サブグラフの更新にかかるコストはいくらですか? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. サブグラフはどれくらいの頻度で更新できますか? +### 4. How often can I update my Subgraph? -サブグラフをあまり頻繁に更新しないことをお勧めします。詳細については、上記の質問を参照してください。 +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. キュレーションのシェアを売却することはできますか? 
diff --git a/website/src/pages/ja/resources/subgraph-studio-faq.mdx b/website/src/pages/ja/resources/subgraph-studio-faq.mdx index 5810742c4ec4..5992b07f7478 100644 --- a/website/src/pages/ja/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ja/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: サブグラフスタジオFAQ ## 1. サブグラフスタジオとは? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. API キーを作成するにはどうすればよいですか? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th API キーを作成後、「セキュリティ」セクションで、特定の API キーにクエリ可能なドメインを定義できます。 -## 5. 自分のサブグラフを他のオーナーに譲渡することはできますか? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -サブグラフが転送されると、Studio でサブグラフを表示または編集できなくなることに注意してください。 +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. 使用したいサブグラフの開発者ではない場合、サブグラフのクエリ URL を見つけるにはどうすればよいですか? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. 
You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -APIキーを作成すると、自分でサブグラフを構築した場合でも、ネットワークに公開されているすべてのサブグラフにクエリを実行できることを覚えておいてください。新しい API キーを介したこれらのクエリは、ネットワーク上の他のクエリと同様に支払われます。 +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, just like any other query on the network. diff --git a/website/src/pages/ja/resources/tokenomics.mdx b/website/src/pages/ja/resources/tokenomics.mdx index 07a04a43b06c..a1f30147507d 100644 --- a/website/src/pages/ja/resources/tokenomics.mdx +++ b/website/src/pages/ja/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## 概要 -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. 
キュレーター - インデクサーのために最適なサブグラフを見つける。 +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. インデクサー - ブロックチェーンデータのバックボーン @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. 
While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### サブグラフの作成 +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
+Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### 既存のサブグラフのクエリ +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. 
Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. 
+Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. 
These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/ja/sps/introduction.mdx b/website/src/pages/ja/sps/introduction.mdx index fbb86f0d0763..71fabdd0416c 100644 --- a/website/src/pages/ja/sps/introduction.mdx +++ b/website/src/pages/ja/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: イントロダクション --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## 概要 -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. 
**Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### その他のリソース @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ja/sps/sps-faq.mdx b/website/src/pages/ja/sps/sps-faq.mdx index de0755e30c95..c038b396b268 100644 --- a/website/src/pages/ja/sps/sps-faq.mdx +++ b/website/src/pages/ja/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). 
Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## サブストリームによって動作するサブグラフは何ですか? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## サブストリームを利用したサブグラフはサブグラフとどう違うのでしょうか? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. 
-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## サブストリームを利用したサブグラフを使用する利点は何ですか? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## サブストリームの利点は何ですか? 
@@ -35,7 +35,7 @@ Substreams-powered subgraphs combine all the benefits of Substreams with the que - 高パフォーマンスのインデックス作成: 並列操作の大規模なクラスター (BigQuery を考えてください) を通じて、桁違いに高速なインデックス作成を実現します。 -- 場所を選ばずにデータをどこにでも沈める: PostgreSQL、MongoDB、Kafka、サブグラフ、フラットファイル、Googleシート。 +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - プログラム可能: コードを使用して抽出をカスタマイズし、変換時の集計を実行し、複数のシンクの出力をモデル化します。 @@ -63,17 +63,17 @@ Firehose を使用すると、次のような多くの利点があります。 - フラット ファイルの活用: ブロックチェーン データは、利用可能な最も安価で最適化されたコンピューティング リソースであるフラット ファイルに抽出されます。 -## 開発者は、サブストリームを利用したサブグラフとサブストリームに関する詳細情報にどこでアクセスできますか? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## サブストリームにおけるRustモジュールの役割は何ですか? -Rust モジュールは、サブグラフの AssemblyScript マッパーに相当します。これらは同様の方法で WASM にコンパイルされますが、プログラミング モデルにより並列実行が可能になります。これらは、生のブロックチェーン データに適用する変換と集計の種類を定義します。 +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. 
@@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst サブストリームを使用すると、変換レイヤーで合成が行われ、キャッシュされたモジュールを再利用できるようになります。 -例として、AliceはDEX価格モジュールを構築し、Bobはそれを使用して興味のあるいくつかのトークンのボリューム集計モジュールを構築し、Lisaは4つの個々のDEX価格モジュールを組み合わせて価格オラクルを作成することができます。単一のSubstreamsリクエストは、これらの個々のモジュールをまとめ、リンクしてより洗練されたデータのストリームを提供します。そのストリームはその後、サブグラフを作成し、消費者によってクエリされることができます。 +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules, linking them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## サブストリームを利用したサブグラフを構築してデプロイするにはどうすればよいでしょうか? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## サブストリームおよびサブストリームを利用したサブグラフの例はどこで見つけることができますか? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -[この Github リポジトリ](https://github.com/pinax-network/awesome-substreams) にアクセスして、サブストリームとサブストリームを利用したサブグラフの例を見つけることができます。 +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## SubstreamsとSubstreamsを活用したサブグラフがThe Graph Networkにとってどのような意味を持つのでしょうか? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? この統合は、非常に高いパフォーマンスのインデクシングと、コミュニティモジュールを活用し、それらを基に構築することによる大きな組み合わせ可能性を含む多くの利点を約束しています。 diff --git a/website/src/pages/ja/sps/triggers.mdx b/website/src/pages/ja/sps/triggers.mdx index 6935eb956f52..9ddb07c5477c 100644 --- a/website/src/pages/ja/sps/triggers.mdx +++ b/website/src/pages/ja/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. 
## 概要 -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). 
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### その他のリソース diff --git a/website/src/pages/ja/sps/tutorial.mdx b/website/src/pages/ja/sps/tutorial.mdx index fbf4f5d22894..33a08342de34 100644 --- a/website/src/pages/ja/sps/tutorial.mdx +++ b/website/src/pages/ja/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## 始めましょう @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
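As a reviewer's aside, the handler pattern this tutorial builds can be boiled down to a plain-TypeScript sketch. The decoder and entity store below are stand-in stubs (not the as-proto or graph-ts APIs), and the account id is a placeholder:

```typescript
// Stand-in type: in the real handler this comes from the generated Protobuf code.
type Transfer = { txHash: string; to: string; amount: string }

const TRACKED_ACCOUNT = '<orca-account-id>' // placeholder for the real account id

// Stub for Protobuf.decode<Transfers>(bytes, Transfers.decode) from as-proto.
function decodeTransfers(bytes: Uint8Array): Transfer[] {
  return JSON.parse(new TextDecoder().decode(bytes))
}

// Stub for the Subgraph entity store.
const entities = new Map<string, Transfer>()

export function handleTriggers(bytes: Uint8Array): void {
  for (const transfer of decodeTransfers(bytes)) {
    if (transfer.to !== TRACKED_ACCOUNT) continue // keep only the tracked account
    entities.set(transfer.txHash, transfer) // one entity per matching transfer
  }
}
```

The real version differs only in the decode call and in saving generated entity classes instead of a `Map`.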
### Video Tutorial diff --git a/website/src/pages/ja/subgraphs/_meta-titles.json b/website/src/pages/ja/subgraphs/_meta-titles.json index 3fd405eed29a..5c6121aa7d88 100644 --- a/website/src/pages/ja/subgraphs/_meta-titles.json +++ b/website/src/pages/ja/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", + "querying": "クエリ", + "developing": "開発", "guides": "How-to Guides", - "best-practices": "Best Practices" + "best-practices": "ベストプラクティス" } diff --git a/website/src/pages/ja/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ja/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/ja/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. 
The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.
## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. 
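The call declaration itself (the "portion highlighted in yellow" referenced above) is not visible in this hunk. Roughly, a declared `eth_call` in the manifest looks like the sketch below — contract, handler, and label names are illustrative, so check the manifest reference for the exact syntax:

```yaml
eventHandlers:
  - event: Transfer(address indexed from, address indexed to, uint256 value)
    handler: handleTransferWithPool
    # Declared call: graph-node executes it ahead of time and caches the result,
    # so the handler's bind-and-call retrieves it from memory instead of via RPC.
    calls:
      ERC20.getPoolInfo: ERC20[event.address].getPoolInfo(event.params.to)
```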
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ja/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ja/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/ja/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
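To illustrate the trade-off in plain TypeScript (a toy in-memory model, not graph-ts): with `@derivedFrom`, only the `Comment` side stores the relation as a `post` foreign key, and a post's comment list is derived by lookup rather than stored as a growing array on `Post`.

```typescript
// Toy model of @derivedFrom: each Comment stores a `post` foreign key; the
// Post side stores no comments array, and the relation is derived on demand.
type Post = { id: string }
type Comment = { id: string; post: string; content: string }

const comments: Comment[] = []

function addComment(c: Comment): void {
  // Only the Comment row is written -- the Post entity is never loaded or grown.
  comments.push(c)
}

// Rough equivalent of querying the derived `comments` field on a Post.
function commentsOf(postId: string): Comment[] {
  return comments.filter((c) => c.post === postId)
}

addComment({ id: 'c1', post: 'p1', content: 'first' })
addComment({ id: 'c2', post: 'p1', content: 'second' })
addComment({ id: 'c3', post: 'p2', content: 'third' })
```

Writes stay O(1) per comment no matter how many comments a post accumulates, which is the performance point the text makes.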
diff --git a/website/src/pages/ja/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ja/subgraphs/best-practices/grafting-hotfix.mdx index cb44f95f25c1..f4726b7a89b8 100644 --- a/website/src/pages/ja/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### 概要 -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## その他のリソース - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ja/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ja/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/ja/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
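To make the size argument concrete, here is a plain-TypeScript sketch of byte-level ID concatenation in the spirit of `concatI32` (the exact byte layout graph-ts uses may differ — this is illustrative only):

```typescript
// Illustrative byte-level concatenation: hash bytes + a 4-byte integer index.
function concatWithI32(bytes: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4)
  out.set(bytes, 0)
  // Append the i32 after the hash bytes (endianness chosen arbitrarily here).
  new DataView(out.buffer).setInt32(bytes.length, value, true)
  return out
}

// Stand-ins for event.transaction.hash (32 bytes) and event.logIndex.
const txHash = new Uint8Array(32).fill(0xab)
const byteId = concatWithI32(txHash, 7) // 36 bytes total

// The string alternative the text warns about: "0x<64 hex chars>-<index>".
const hex = Array.from(txHash, (b) => b.toString(16).padStart(2, '0')).join('')
const stringId = '0x' + hex + '-' + (7).toString() // 68 characters
```

Even in this toy form the byte ID is roughly half the size of the string ID, and it avoids hex encoding entirely — which is where the indexing and querying gains come from.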
diff --git a/website/src/pages/ja/subgraphs/best-practices/pruning.mdx b/website/src/pages/ja/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/ja/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <number of blocks to retain>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ja/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ja/subgraphs/best-practices/timeseries.mdx index c02236d7829c..72c12a82a496 100644 --- a/website/src/pages/ja/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## 概要 @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ A timeseries entity represents raw data points collected over time. It is define type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ An aggregation entity computes aggregated values from a timeseries source. It is type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -141,7 +145,7 @@ Supported aggregation functions: - sum - count - min - max - first - last @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \*, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
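To make the "Querying Aggregated Data" step concrete, here is a minimal sketch of a query against the `Stats` aggregation from the example schema. The `interval` argument follows the graph-node aggregations README; the field names assume the example entities on this page:

```graphql
{
  # 24 most recent hourly buckets of the Stats aggregation
  stats(interval: "hour", first: 24) {
    id
    timestamp
    sum
  }
}
```

Passing `interval: "day"` instead would return the daily buckets declared in `intervals`.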
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ja/subgraphs/billing.mdx b/website/src/pages/ja/subgraphs/billing.mdx index 9967aa377644..5ad5947ae4fb 100644 --- a/website/src/pages/ja/subgraphs/billing.mdx +++ b/website/src/pages/ja/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: 請求書 ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). 
+ ## Query Payments with credit card diff --git a/website/src/pages/ja/subgraphs/developing/_meta-titles.json b/website/src/pages/ja/subgraphs/developing/_meta-titles.json index 01a91b09ed77..f973d764cbc3 100644 --- a/website/src/pages/ja/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/ja/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { - "creating": "Creating", - "deploying": "Deploying", - "publishing": "Publishing", - "managing": "Managing" + "creating": "作成", + "deploying": "デプロイ", + "publishing": "情報公開", + "managing": "管理" } diff --git a/website/src/pages/ja/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ja/subgraphs/developing/creating/advanced.mdx index b6269f49fcf5..fc0e92c003b1 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## 概要 -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## 致命的でないエラー -すでに同期しているサブグラフのインデックスエラーは、デフォルトではサブグラフを失敗させ、同期を停止させます。サブグラフは、エラーが発生したハンドラーによる変更を無視することで、エラーが発生しても同期を継続するように設定することができます。これにより、サブグラフの作成者はサブグラフを修正する時間を得ることができ、一方でクエリは最新のブロックに対して提供され続けますが、エラーの原因となったバグのために結果が一貫していない可能性があります。なお、エラーの中には常に致命的なものもあり、致命的でないものにするためには、そのエラーが決定論的であることがわかっていなければなりません。 +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. 
-非致命的エラーを有効にするには、サブグラフのマニフェストに以下の機能フラグを設定する必要があります: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - - fullTextSearch + - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -ファイルデータソースは、堅牢で拡張可能な方法でインデックス作成中にオフチェーンデータにアクセスするための新しいサブグラフ機能です。ファイルデータソースは、IPFS および Arweave からのファイルのフェッチをサポートしています。 +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> また、オフチェーンデータの決定論的なインデックス作成、および任意のHTTPソースデータの導入の可能性についても基礎ができました。 @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ export function handleTransfer(event: TransferEvent): void { This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file おめでとうございます!ファイルデータソースが使用できます。 -#### サブグラフのデプロイ +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### 制限事項 -ファイルデータソースハンドラおよびエンティティは、他のサブグラフエンティティから分離され、実行時に決定論的であることを保証し、チェーンベースのデータソースを汚染しないことを保証します。具体的には、以下の通りです。 +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: - ファイルデータソースで作成されたエンティティは不変であり、更新することはできません。 - ファイルデータソースハンドラは、他のファイルデータソースのエンティティにアクセスすることはできません。 - ファイルデータソースに関連するエンティティは、チェーンベースハンドラーからアクセスできません。 -> この制約は、ほとんどのユースケースで問題になることはありませんが、一部のユースケースでは複雑さをもたらすかもしれません。ファイルベースのデータをサブグラフでモデル化する際に問題がある場合は、Discordを通じてご連絡ください。 +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! また、オンチェーンデータソースや他のファイルデータソースからデータソースを作成することはできません。この制限は、将来的に解除される可能性があります。 @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. 
This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. 
+- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. @@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. 
Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. -`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. 
Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block.
Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -グラフトはベースデータのインデックスではなくコピーを行うため、スクラッチからインデックスを作成するよりもサブグラフを目的のブロックに早く到達させることができますが、非常に大きなサブグラフの場合は最初のデータコピーに数時間かかることもあります。グラフトされたサブグラフが初期化されている間、グラフノードは既にコピーされたエンティティタイプに関する情報を記録します。 +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -グラフト化されたサブグラフは、ベースとなるサブグラフのスキーマと同一ではなく、単に互換性のある GraphQL スキーマを使用することができます。また、それ自体は有効なサブグラフのスキーマでなければなりませんが、以下の方法でベースサブグラフのスキーマから逸脱することができます。 +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - エンティティタイプを追加または削除する - エンティティタイプから属性を削除する @@ -560,4 +560,4 @@ When a subgraph whose manifest contains a `graft` block is deployed, Graph Node - インターフェースの追加または削除 - インターフェースがどのエンティティタイプに実装されるかを変更する -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. 
diff --git a/website/src/pages/ja/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ja/subgraphs/developing/creating/assemblyscript-mappings.mdx index 50b664c86f3b..e46466a45c92 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## コード生成 -スマートコントラクト、イベント、エンティティを簡単かつタイプセーフに扱うために、Graph CLIはサブグラフのGraphQLスキーマとデータソースに含まれるコントラクトABIからAssemblyScriptタイプを生成することができます。 +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. 
これを行うためには @@ -80,7 +80,7 @@ If no value is set for a field in the new entity with the same ID, the field wil graph codegen [--output-dir <OUTPUT_DIR>] [<MANIFEST>] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema.
These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
diff --git a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx index c9d5c8a3ba47..94df906daad7 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. 
There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ Since language mappings are written in AssemblyScript, it is useful to review th ### バージョン -サブグラフマニフェストapiVersionは、特定のサブグラフのマッピングAPIバージョンを指定します。このバージョンは、Graph Nodeによって実行されます。 +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | バージョン | リリースノート | | :-: | --- | @@ -223,7 +223,7 @@ Bytesの API の上に以下のメソッドを追加しています。 Store API は、グラフノードのストアにエンティティを読み込んだり、保存したり、削除したりすることができます。 -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. 
#### エンティティの作成 @@ -282,8 +282,8 @@ graph-node v0.31.0、@graphprotocol/graph-ts v0.30.0、および @graphprotocol/ The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // または ID が構築される方法 @@ -380,11 +380,11 @@ Ethereum API は、スマートコントラクト、パブリックステート #### Ethereum タイプのサポート -エンティティと同様に、graph codegenは、サブグラフで使用されるすべてのスマートコントラクトとイベントのためのクラスを生成します。 このためには、コントラクト ABI がサブグラフマニフェストのデータソースの一部である必要があります。 通常、ABI ファイルはabis/フォルダに格納されています。 +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. 
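As a sketch of how a generated contract class can be bound and called from a mapping (the `ERC20Contract` name, its import path, and the event type are assumptions for illustration):

```javascript
import { ERC20Contract, Transfer as TransferEvent } from '../generated/ERC20Contract/ERC20Contract'

export function handleTransfer(event: TransferEvent): void {
  // Bind the generated class to the address of the contract that emitted the event
  let contract = ERC20Contract.bind(event.address)
  // try_ variants return a result with a `reverted` flag instead of aborting the handler
  let symbolResult = contract.try_symbol()
  if (!symbolResult.reverted) {
    let symbol = symbolResult.value
  }
}
```

The read-only call is executed against the state of the block currently being processed.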
-生成されたクラスでは、Ethereum Typeと[built-in types](#built-in-types)間の変換が舞台裏で行われるため、サブグラフ作成者はそれらを気にする必要がありません。 +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -以下の例で説明します。 以下のようなサブグラフのスキーマが与えられます。 +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### スマートコントラクトの状態へのアクセス -graph codegenが生成するコードには、サブグラフで使用されるスマートコントラクトのクラスも含まれています。 これらを使って、パブリックな状態変数にアクセスしたり、現在のブロックにあるコントラクトの関数を呼び出したりすることができます。 +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. よくあるパターンは、イベントが発生したコントラクトにアクセスすることです。 これは以下のコードで実現できます。 @@ -506,7 +506,7 @@ Transferは、エンティティタイプとの名前の衝突を避けるため Ethereum の ERC20Contractにsymbolというパブリックな読み取り専用の関数があれば、.symbol()で呼び出すことができます。 パブリックな状態変数については、同じ名前のメソッドが自動的に作成されます。 -サブグラフの一部である他のコントラクトは、生成されたコードからインポートすることができ、有効なアドレスにバインドすることができます。 +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### リバートされた呼び出しの処理 @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false '@graphprotocol/graph-ts'から{ log } をインポートします。 ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments.
log API には以下の機能があります: @@ -590,7 +590,7 @@ log API には以下の機能があります: - `log.info(fmt: string, args: Array): void` - インフォメーションメッセージを記録します。 - `log.warning(fmt: string, args: Array): void` - 警告メッセージを記録します。 - `log.error(fmt: string, args: Array): void` - エラーメッセージを記録します。 -- `log.critical(fmt: string, args: Array): void` - クリティカル・メッセージを記録して、サブグラフを終了します。 +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. log API は、フォーマット文字列と文字列値の配列を受け取ります。 そして、プレースホルダーを配列の文字列値で置き換えます。 最初の{}プレースホルダーは配列の最初の値に置き換えられ、2 番目の{}プレースホルダーは 2 番目の値に置き換えられ、以下のようになります。 @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) 現在サポートされているフラグは `json` だけで、これは `ipfs.map` に渡さなければなりません。json` フラグを指定すると、IPFS ファイルは一連の JSON 値で構成されます。ipfs.map` を呼び出すと、ファイルの各行を読み込んで `JSONValue` にデシリアライズし、それぞれのコールバックを呼び出します。コールバックは `JSONValue` からデータを格納するためにエンティティ操作を使用することができます。エンティティの変更は、`ipfs.map` を呼び出したハンドラが正常に終了したときのみ保存されます。その間はメモリ上に保持されるので、`ipfs.map` が処理できるファイルのサイズは制限されます。 -成功すると,ipfs.mapは voidを返します。 コールバックの呼び出しでエラーが発生した場合、ipfs.mapを呼び出したハンドラは中止され、サブグラフは失敗とマークされます。 +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ if (value.kind == JSONValueKind.BOOL) { ### マニフェスト内のDataSourceContext -DataSources`の`context`セクションでは、サブグラフマッピング内でアクセス可能なキーと値のペアを定義することができます。使用可能な型は`Bool`、`String`、`Int`、`Int8`、`BigDecimal`、`Bytes`、`List`、`BigInt\` です。 +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 以下は `context` セクションのさまざまな型の使い方を示す YAML の例です: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. 
Must be quoted due to its large size. -このコンテキストは、サブグラフのマッピング・ファイルからアクセスでき、よりダイナミックで設定可能なサブグラフを実現します。 +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/common-issues.mdx index 9bb0634b57b3..e7622788c797 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: AssemblyScriptのよくある問題 --- -AssemblyScript](https://github.com/AssemblyScript/assemblyscript)には、サブグラフの開発中によく遭遇する問題があります。これらの問題は、デバッグの難易度に幅がありますが、認識しておくと役に立つかもしれません。以下は、これらの問題の非網羅的なリストです: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- スコープは[クロージャー関数](https://www.assemblyscript.org/status.html#on-closures)には継承されません。つまり、クロージャー関数の外で宣言された変数は使用できません。Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s)に説明があります。 diff --git a/website/src/pages/ja/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ja/subgraphs/developing/creating/install-the-cli.mdx index 397b011cbdd3..3352df16b841 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Graph CLI のインストール --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## 概要 -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## はじめに @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## サブグラフの作成 ### 既存のコントラクトから -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### サブグラフの例から -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is ABI ファイルは、契約内容と一致している必要があります。ABI ファイルを入手するにはいくつかの方法があります: - 自分のプロジェクトを構築している場合は、最新の ABI にアクセスできる可能性があります。 -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| バージョン | リリースノート | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI; otherwise running your Subgraph will fail. diff --git a/website/src/pages/ja/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ja/subgraphs/developing/creating/ql-schema.mdx index fb06d8d022a0..5ee3f442eef9 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## 概要 -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema.
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -1 対多の関係では、関係は常に「1」側に格納され、「多」側は常に派生されるべきです。「多」側にエンティティの配列を格納するのではなく、このように関係を格納することで、サブグラフのインデックス作成と問い合わせの両方で劇的にパフォーマンスが向上します。一般的に、エンティティの配列を保存することは、現実的に可能な限り避けるべきです。 +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
#### 例 @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -このように多対多の関係をより精巧に保存する方法では、サブグラフに保存されるデータが少なくなるため、サブグラフのインデックス作成や問い合わせが劇的に速くなります。 +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore in a Subgraph that is often dramatically faster to index and to query. ### スキーマへのコメントの追加 @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## 対応言語 diff --git a/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx index c2dcb7ad1d68..3fd648b44813 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## 概要 -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
-When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| バージョン | リリースノート | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. 
| +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/ja/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ja/subgraphs/developing/creating/subgraph-manifest.mdx index 1fc82b54930d..fb2a17678456 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## 概要 -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
マニフェストを更新する重要な項目は以下の通りです: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## コールハンドラー -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. コールハンドラーは、次の 2 つのケースのいずれかでのみトリガされます:指定された関数がコントラクト自身以外のアカウントから呼び出された場合、または Solidity で外部としてマークされ、同じコントラクト内の別の関数の一部として呼び出された場合。 -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### コールハンドラーの定義 @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### マッピング関数 -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## ブロック・ハンドラー -コントラクトイベントやファンクションコールの購読に加えて、サブグラフは、新しいブロックがチェーンに追加されると、そのデータを更新したい場合があります。これを実現するために、サブグラフは各ブロックの後、あるいは事前に定義されたフィルタにマッチしたブロックの後に、関数を実行することができます。 +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### 対応フィルター @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. ブロックハンドラーにフィルターがない場合、ハンドラーはブロックごとに呼び出されます。1 つのデータソースには、各フィルタータイプに対して 1 つのブロックハンドラーしか含めることができません。 @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field.
This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### ワンスフィルター @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Once フィルターを使用して定義されたハンドラーは、他のすべてのハンドラーが実行される前に 1 回だけ呼び出されます。 この構成により、サブグラフはハンドラーを初期化ハンドラーとして使用し、インデックス作成の開始時に特定のタスクを実行できるようになります。 +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### マッピング関数 -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## スタートブロック(start Blocks) -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. 
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| バージョン | リリースノート | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). 
| +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/ja/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ja/subgraphs/developing/creating/unit-testing-framework.mdx index 5a089a93aa50..ececebba24c5 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: ユニットテストフレームワーク --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and more.
## はじめに @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. 
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### デモ・サブグラフ +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### ビデオチュートリアル -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im これで最初のテストが完成しました! 👏 -テストを実行するには、サブグラフのルートフォルダで以下を実行する必要があります: +Now, in order to run your tests, you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## テストカバレッジ -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked.
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## その他のリソース -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## フィードバック diff --git a/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx index 53c7dcfbd86b..a43e7a32c7b8 100644 --- a/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## サブグラフを複数のネットワークにデプロイする +## Deploying the Subgraph to multiple networks -場合によっては、すべてのコードを複製せずに、同じサブグラフを複数のネットワークに展開する必要があります。これに伴う主な課題は、これらのネットワークのコントラクト アドレスが異なることです。 +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. 
### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... 
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. 
`synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio・サブグラフ・アーカイブポリシー +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -このポリシーで影響を受けるすべてのサブグラフには、問題のバージョンを戻すオプションがあります。 +Every Subgraph affected by this policy has an option to bring the version in question back. -## サブグラフのヘルスチェック +## Checking Subgraph health -サブグラフが正常に同期された場合、それはそれが永久に正常に動作し続けることを示す良い兆候です。ただし、ネットワーク上の新しいトリガーにより、サブグラフがテストされていないエラー状態に陥ったり、パフォーマンスの問題やノード オペレーターの問題により遅れが生じたりする可能性があります。 +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition, or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default.
The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
diff --git a/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx index 21bb85d4fb51..4e8503e208e4 100644 --- a/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- 特定のサブグラフ用の API キーの作成と管理 +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### Subgraph Studio でサブグラフを作成する方法 @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph と The Graph Network の互換性 -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- 以下の機能のいずれも使用してはいけません: - - ipfs.cat & ipfs.map - - 致命的でないエラー - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## グラフ認証 -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## サブグラフのバージョンの自動アーカイブ -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/ja/subgraphs/developing/developer-faq.mdx b/website/src/pages/ja/subgraphs/developing/developer-faq.mdx index 9744d7d9a53d..54a9d8b3a865 100644 --- a/website/src/pages/ja/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/ja/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. サブグラフとは +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. サブグラフに関連付けられている GitHub アカウントを変更できますか? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -サブグラフを再デプロイする必要がありますが、サブグラフの ID(IPFS ハッシュ)が変わらなければ、最初から同期する必要はありません。 +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -サブグラフ内では、複数のコントラクトにまたがっているかどうかにかかわらず、イベントは常にブロックに表示される順序で処理されます。 +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? 
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. 
Is there a way to query the Subgraph directly to determine the latest block number it has indexed? はい、あります。organization/subgraphName」を公開先の組織とサブグラフの名前に置き換えて、以下のコマンドを実行してみてください: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/ja/subgraphs/developing/introduction.mdx b/website/src/pages/ja/subgraphs/developing/introduction.mdx index 982e426ba4aa..e7d2fb8eff33 100644 --- a/website/src/pages/ja/subgraphs/developing/introduction.mdx +++ b/website/src/pages/ja/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? 
-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
diff --git a/website/src/pages/ja/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/managing/deleting-a-subgraph.mdx index 6a9aef388d02..b8c2330ca49d 100644 --- a/website/src/pages/ja/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ja/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- キュレーターは、サブグラフにシグナルを送ることができなくなります。 -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/ja/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/ja/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ja/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx index f9d92cf7d0d9..c26672ec6b84 100644 --- a/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: 分散型ネットワークへのサブグラフの公開 +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### パブリッシュされたサブグラフのメタデータの更新 +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ja/subgraphs/developing/subgraphs.mdx b/website/src/pages/ja/subgraphs/developing/subgraphs.mdx index 9f1d50744aab..b96912052ef7 100644 --- a/website/src/pages/ja/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ja/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: サブグラフ ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## サブグラフのライフサイクル -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ja/subgraphs/explorer.mdx b/website/src/pages/ja/subgraphs/explorer.mdx index 94d1203d9084..0357d63fda7e 100644 --- a/website/src/pages/ja/subgraphs/explorer.mdx +++ b/website/src/pages/ja/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: グラフエクスプローラ --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## 概要 -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- サブグラフのシグナル/アンシグナル +- Signal/Un-signal on Subgraphs - チャート、現在のデプロイメント ID、その他のメタデータなどの詳細情報の表示 -- バージョンを切り替えて、サブグラフの過去のイテレーションを調べる -- GraphQL によるサブグラフのクエリ -- プレイグラウンドでのサブグラフのテスト -- 特定のサブグラフにインデクシングしているインデクサーの表示 +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - サブグラフの統計情報(割り当て数、キュレーターなど) -- サブグラフを公開したエンティティの表示 +- View the entity that published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. +In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. 
**Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - インデクサーが生産的に受け入れることができる委任されたステークの最大量。超過した委任されたステークは、割り当てや報酬の計算には使用できません。 - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. キュレーター -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. 
As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. 
So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### サブグラフタブ -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### インデックスタブ -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. このセクションには、インデクサー報酬とクエリフィーの詳細も含まれます。 以下のような指標が表示されます: @@ -223,13 +223,13 @@ In the Delegators tab, you can find the details of your active and historical de ### キュレーションタブ -[キュレーション] タブには、シグナルを送信しているすべてのサブグラフが表示されます (これにより、クエリ料金を受け取ることができます)。シグナリングにより、キュレーターはどのサブグラフが価値があり信頼できるかをインデクサーに強調表示し、それらをインデックス化する必要があることを知らせることができます。 +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, signaling that they should be indexed. 
このタブでは、以下の概要を見ることができます: -- キュレーションしている全てのサブグラフとシグナルの詳細 -- サブグラフごとのシェアの合計 -- サブグラフごとのクエリ報酬 +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - 日付詳細に更新済み ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/ja/subgraphs/guides/_meta.js b/website/src/pages/ja/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/ja/subgraphs/guides/_meta.js +++ b/website/src/pages/ja/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/ja/subgraphs/guides/arweave.mdx b/website/src/pages/ja/subgraphs/guides/arweave.mdx index 08e6c4257268..66eef9c8160f 100644 --- a/website/src/pages/ja/subgraphs/guides/arweave.mdx +++ b/website/src/pages/ja/subgraphs/guides/arweave.mdx @@ -1,50 +1,50 @@ --- -title: Building Subgraphs on Arweave +title: Arweaveでのサブグラフ構築 --- > Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! -In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +このガイドでは、Arweaveブロックチェーンのインデックスを作成するためのサブグラフの構築とデプロイ方法について学びます。 -## What is Arweave? +## Arweaveとは? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +Arweave プロトコルは、開発者がデータを永久に保存することを可能にし、それが Arweave と IPFS の主な違いです。IPFSは永続性に欠ける一方、Arweaveに保存されたファイルは変更も削除もできません。 -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. 
For more information you can check: +Arweaveは既に、さまざまなプログラミング言語でプロトコルを統合するための多数のライブラリを構築しています。詳細については、次を確認できます。 - [Arwiki](https://arwiki.wiki/#/en/main) - [Arweave Resources](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## Arweaveサブグラフとは? The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). [Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. -## Building an Arweave Subgraph +## Arweave サブグラフの作成 -To be able to build and deploy Arweave Subgraphs, you need two packages: +Arweaveのサブグラフを構築し展開できるようにするためには、2つのパッケージが必要です。 1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. 2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. -## Subgraph's components +## サブグラフのコンポーネント There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +対象のデータ ソースとその処理方法を定義します。 Arweave は新しい種類のデータ ソースです。 ### 2. Schema - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. 
+ここでは、GraphQL を使用してサブグラフにインデックスを付けた後にクエリできるようにするデータを定義します。これは実際には API のモデルに似ており、モデルはリクエスト本文の構造を定義します。 The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +これは、リスニングしているデータソースと誰かがやりとりするときに、データをどのように取得し、保存するかを決定するロジックです。データは変換され、あなたがリストアップしたスキーマに基づいて保存されます。 During Subgraph development there are two key commands: @@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## サブグラフマニフェストの定義 The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: @@ -84,24 +84,24 @@ dataSources: - Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Arweave データ ソースには、オプションの source.owner フィールドが導入されています。これは、Arweave ウォレットの公開鍵です。 -Arweave data sources support two types of handlers: +Arweaveデータソースは 2 種類のハンドラーをサポートしています: - `blockHandlers` - Run on every new Arweave block. No source.owner is required. - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. 
Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` -> The source.owner can be the owner's address, or their Public Key. - -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> Source.owner は、所有者のアドレスまたは公開鍵にすることができます。 +> +> トランザクションはArweave permawebの構成要素であり、エンドユーザーによって作成されるオブジェクトです。 +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. -## Schema Definition +## スキーマ定義 Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -## AssemblyScript Mappings +## AssemblyScript マッピング The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -158,11 +158,11 @@ Once your Subgraph has been created on your Subgraph Studio dashboard, you can d graph deploy --access-token ``` -## Querying an Arweave Subgraph +## Arweaveサブグラフのクエリ The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## サブグラフの例 Here is an example Subgraph for reference: @@ -174,19 +174,19 @@ Here is an example Subgraph for reference: No, a Subgraph can only support data sources from one chain/network. -### Can I index the stored files on Arweave? +### 保存されたファイルをArweaveでインデックス化することはできますか? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +現在、The Graph は Arweave をブロックチェーン (ブロックとトランザクション) としてのみインデックス化しています。 ### Can I identify Bundlr bundles in my Subgraph? -This is not currently supported. 
+現在はサポートされていません。 -### How can I filter transactions to a specific account? +### トランザクションを特定のアカウントにフィルターするにはどうすればよいですか? -The source.owner can be the user's public key or account address. +Source.ownerには、ユーザの公開鍵またはアカウントアドレスを指定することができます。 -### What is the current encryption format? +### 現在の暗号化フォーマットは? Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). diff --git a/website/src/pages/ja/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ja/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..414948153176 100644 --- a/website/src/pages/ja/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/ja/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## 概要 -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. 
+ +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. 
-### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +または ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. 
diff --git a/website/src/pages/ja/subgraphs/guides/enums.mdx b/website/src/pages/ja/subgraphs/guides/enums.mdx index 9f55ae07c54b..14c608584b8f 100644 --- a/website/src/pages/ja/subgraphs/guides/enums.mdx +++ b/website/src/pages/ja/subgraphs/guides/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## その他のリソース For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). diff --git a/website/src/pages/ja/subgraphs/guides/grafting.mdx b/website/src/pages/ja/subgraphs/guides/grafting.mdx index d9abe0e70d2a..0ce88bc00b3f 100644 --- a/website/src/pages/ja/subgraphs/guides/grafting.mdx +++ b/website/src/pages/ja/subgraphs/guides/grafting.mdx @@ -1,46 +1,46 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: グラフティングでコントラクトを取り替え、履歴を残す --- In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. -## What is Grafting? +## グラフティングとは? Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. 
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: -It adds or removes entity types -It removes attributes from entity types -It adds nullable attributes to entity types -It turns non-nullable attributes into nullable attributes -It adds values to enums -It adds or removes interfaces -It changes for which entity types an interface is implemented +- エンティティタイプを追加または削除する +- エンティティタイプから属性を削除する +- null 化できる属性をエンティティタイプに追加する +- null 化できない属性を null 化できる属性に変更する +- enums に値を追加する +- インターフェースの追加または削除 +- インターフェースがどのエンティティタイプに実装されるかを変更する -For more information, you can check: +詳しくは、こちらでご確認ください。 - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. -## Important Note on Grafting When Upgrading to the Network +## ネットワークにアップグレードする際の移植に関する重要な注意事項 > **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network -### Why Is This Important? +### なぜこれが重要なのですか? Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. -### Best Practices +### ベストプラクティス **Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. **Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. 
-By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +これらのガイドラインに従うことで、リスクを最小限に抑え、よりスムーズな移行プロセスを確保できます。 -## Building an Existing Subgraph +## 既存のサブグラフの構築 Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: @@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h > Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). -## Subgraph Manifest Definition +## サブグラフマニフェストの定義 The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: @@ -83,7 +83,7 @@ dataSources: - The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. -## Grafting Manifest Definition +## グラフティングマニフェストの定義 Grafting requires adding two new items to the original Subgraph manifest: @@ -101,7 +101,7 @@ graft: The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting -## Deploying the Base Subgraph +## ベースサブグラフの起動 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` 2. 
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +このようなものが返ってきます: ``` { @@ -140,9 +140,9 @@ It returns something like this: Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. -## Deploying the Grafting Subgraph +## グラフティングサブグラフの展開 -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +グラフト置換されたsubgraph.yamlは、新しいコントラクトのアドレスを持つことになります。これは、Dapp を更新したり、コントラクトを再デプロイしたりしたときに起こりうることです。 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` 2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +以下のように返ってくるはずです: ``` { @@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph- Congrats! You have successfully grafted a Subgraph onto another Subgraph.
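As a recap, the two items that grafting adds to `subgraph.yaml` look roughly like this — a minimal sketch, where the deployment ID and block number are illustrative placeholders for your own values:

```yaml
# Declare the grafting feature, then point at the base Subgraph
features:
  - grafting
graft:
  base: Qm... # Deployment ID of the base Subgraph (found in Subgraph Studio)
  block: 5956000 # Illustrative block number to graft up to (e.g. the last relevant event)
```

Everything below `block` is indexed by the new Subgraph itself; everything at or before it is served from the base Subgraph's data.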
-## Additional Resources +## その他のリソース If you want more experience with grafting, here are a few examples for popular contracts: diff --git a/website/src/pages/ja/subgraphs/guides/near.mdx b/website/src/pages/ja/subgraphs/guides/near.mdx index e78a69eb7fa2..9e3738689919 100644 --- a/website/src/pages/ja/subgraphs/guides/near.mdx +++ b/website/src/pages/ja/subgraphs/guides/near.mdx @@ -1,10 +1,10 @@ --- -title: Building Subgraphs on NEAR +title: NEAR でサブグラフを作成する --- This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). -## What is NEAR? +## NEAR とは? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. @@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- ブロックハンドラ:新しいブロックごとに実行されます +- レシートハンドラ:指定されたアカウントでメッセージが実行されるたびに実行されます [From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> レシートは、システム内で唯一実行可能なオブジェクトです。NEAR プラットフォームで「トランザクションの処理」といえば、最終的にはどこかの時点で「レシートの適用」を意味します。 -## Building a NEAR Subgraph +## NEAR サブグラフの構築 `@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. 
@@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -### Subgraph Manifest Definition +### サブグラフマニフェストの定義 The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: @@ -85,16 +85,16 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +NEAR データソースは 2 種類のハンドラーをサポートしています: - `blockHandlers`: run on every new NEAR block. No `source.account` is required. - `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). -### Schema Definition +### スキーマ定義 Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -### AssemblyScript Mappings +### AssemblyScript マッピング The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -169,7 +169,7 @@ Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/g This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. 
-## Deploying a NEAR Subgraph +## NEAR サブグラフの展開 Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). @@ -198,7 +198,7 @@ graph auth graph deploy ``` -### Local Graph Node (based on default configuration) +### ローカル グラフ ノード (デフォルト構成に基づく) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 @@ -216,21 +216,21 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. You can } ``` -### Indexing NEAR with a Local Graph Node +### ローカル グラフ ノードを使用した NEAR のインデックス作成 -Running a Graph Node that indexes NEAR has the following operational requirements: +NEAR のインデックスを作成するグラフノードには、以下の運用要件があります: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- Firehose instrumentation を備えた NEAR Indexer Framework +- NEAR Firehose コンポーネント +- Firehose エンドポイントが設定されたグラフノード -We will provide more information on running the above components soon. +上記のコンポーネントの運用については、近日中に詳しくご紹介します。 -## Querying a NEAR Subgraph +## NEAR サブグラフへのクエリ The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## サブグラフの例 Here are some example Subgraphs for reference: @@ -240,7 +240,7 @@ Here are some example Subgraphs for reference: ## FAQ -### How does the beta work? +### ベータ版はどのように機能しますか? NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! @@ -250,9 +250,9 @@ No, a Subgraph can only support data sources from one chain/network.
### Can Subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +現在、ブロックとレシートのトリガーのみがサポートされています。指定されたアカウントへのファンクションコールのトリガーを検討しています。また、NEAR がネイティブイベントをサポートするようになれば、イベントトリガーのサポートも検討しています。 -### Will receipt handlers trigger for accounts and their sub-accounts? +### レシートハンドラーは、アカウントとそのサブアカウントに対してトリガーされますか? If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: @@ -264,11 +264,11 @@ accounts: ### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? -This is not supported. We are evaluating whether this functionality is required for indexing. +これはサポートされていません。この機能がインデックス作成に必要かどうかを評価しています。 ### Can I use data source templates in my NEAR Subgraph? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +これは現在サポートされていません。この機能がインデックス作成に必要かどうかを評価しています。 ### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? @@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. In the interim, y If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
-## References +## 参考文献 - [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/ja/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ja/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..ead239aa93e1 100644 --- a/website/src/pages/ja/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/ja/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## 概要 We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..62b2d8eb4657 --- /dev/null +++ b/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## イントロダクション + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. 
+ +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but entities composed on top of them cannot use additional aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## 始めましょう + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. + The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block.
+ +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. 
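For orientation, a composed Subgraph's manifest declares each source Subgraph as a `kind: subgraph` data source instead of an onchain contract. The sketch below is illustrative only — the names, deployment ID, start block, and handler/entity fields are assumptions; consult the graph-node v0.37.0 release notes and the example repository for the exact shape:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # a Subgraph data source, not an onchain one
    name: BlockTime # illustrative name for the first source Subgraph
    network: mainnet # all sources must target the same chain
    source:
      address: 'Qm...' # Deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - BlockStats
      handlers:
        - handler: handleBlockTime # runs when the source entity is stored
          entity: Block # immutable entity from the source Subgraph that triggers it
```

Repeat the data source stanza (up to five times) for each source Subgraph, updating the deployment ID whenever a source is redeployed.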
+ +## その他のリソース + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/ja/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ja/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..cba9bbca2ff7 100644 --- a/website/src/pages/ja/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/ja/subgraphs/guides/subgraph-debug-forking.mdx @@ -1,22 +1,22 @@ --- -title: Quick and Easy Subgraph Debugging Using Forks +title: フォークを用いた迅速かつ容易なサブグラフのデバッグ --- As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! -## Ok, what is it? +## さて、それは何でしょうか? **Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. -## What?! How? +## その方法は? When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! 
This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. -## Please, show me some code! +## コードを見てみましょう To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. @@ -46,31 +46,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. -The usual way to attempt a fix is: +通常試すであろう修正方法: -1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). +1. マッピングソースを変更して問題の解決を試す(解決されないことは分かっていても) 2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +3. 同期を待つ +4. 再び問題が発生した場合は 1 に戻る。そうでなければ成功です! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue. +1. マッピングのソースを変更し、問題を解決する 2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +3.
もし再度壊れる場合は 1 に戻る。そうでなければ成功です! -Now, you may have 2 questions: +さて、ここで2つの疑問が生じます: -1. fork-base what??? -2. Forking who?! +1. フォークベースとは? +2. 何をフォークするのですか? -And I answer: +回答: 1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +2. フォーキングは簡単で、煩雑な手間はありません: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 @@ -78,7 +78,7 @@ $ graph deploy --debug-fork --ipfs http://localhos Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! -So, here is what I do: +そこで、私が行う手順は以下の通りです: 1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. @@ -16,11 +16,11 @@ title: Safe Subgraph Code Generator - The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. +- また、このフレームワークには、エンティティ変数のグループに対して、カスタムだが安全なセッター関数を作成する方法が(設定ファイルを通じて)含まれています。この方法では、ユーザーが古いグラフ・エンティティをロード/使用することは不可能であり、また、関数が必要とする変数の保存や設定を忘れることも不可能です。 - Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. -Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. +Subgraph Uncrashableは、Graph CLI codegenコマンドでオプションのフラグとして実行することができます。 ```sh graph codegen -u [options] [] diff --git a/website/src/pages/ja/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ja/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..890b8495ad7b 100644 --- a/website/src/pages/ja/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/ja/subgraphs/guides/transfer-to-the-graph.mdx @@ -1,5 +1,5 @@ --- -title: Transfer to The Graph +title: The Graphに移行する --- Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. 
[Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -74,7 +74,7 @@ graph deploy --ipfs-hash You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. -#### Example +#### 例 [CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### その他のリソース - To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). - To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
diff --git a/website/src/pages/ja/subgraphs/querying/best-practices.mdx b/website/src/pages/ja/subgraphs/querying/best-practices.mdx index d0700c1fe37d..bd25c5d2fea6 100644 --- a/website/src/pages/ja/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/ja/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: クエリのベストプラクティス The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- クロスチェーンのサブグラフ処理:1回のクエリで複数のサブグラフからクエリを実行可能 +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - 完全なタイプ付け結果 @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ja/subgraphs/querying/from-an-application.mdx b/website/src/pages/ja/subgraphs/querying/from-an-application.mdx index 226a9cd2d686..1bece60d7df9 100644 --- a/website/src/pages/ja/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ja/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: アプリケーションからのクエリ +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. @@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- クロスチェーンのサブグラフ処理:1回のクエリで複数のサブグラフからクエリを実行可能 +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - 完全なタイプ付け結果 @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### ステップ1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### ステップ1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### ステップ1 diff --git a/website/src/pages/ja/subgraphs/querying/graph-client/README.md b/website/src/pages/ja/subgraphs/querying/graph-client/README.md index 416cadc13c6f..39ba6a53b215 100644 --- a/website/src/pages/ja/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ja/subgraphs/querying/graph-client/README.md @@ -14,15 +14,15 @@ This library is intended to simplify the network aspect of data consumption for > The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! 
-| Status | Feature | Notes | -| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | +| ステータス | Feature | Notes | +| :---: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | | ✅ | Multiple indexers | based on fetch strategies | | ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | | ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | | ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | | ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | | ✅ | Integration with `@apollo/client` | | @@ -32,7 +32,7 @@ This library is intended to simplify the network aspect of data consumption for > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## はじめに You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -138,7 +138,7 @@ graphclient serve-dev And open 
http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### 例 You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/ja/subgraphs/querying/graph-client/live.md b/website/src/pages/ja/subgraphs/querying/graph-client/live.md index e6f726cb4352..961787fa9a4c 100644 --- a/website/src/pages/ja/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/ja/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## はじめに Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/ja/subgraphs/querying/graphql-api.mdx b/website/src/pages/ja/subgraphs/querying/graphql-api.mdx index e6fa6e325eea..c1700fb5e9da 100644 --- a/website/src/pages/ja/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ja/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). 
## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -これは、前回のポーリング以降など、変更されたエンティティのみをフェッチする場合に役立ちます。または、サブグラフでエンティティがどのように変化しているかを調査またはデバッグするのに役立ちます (ブロック フィルターと組み合わせると、特定のブロックで変更されたエンティティのみを分離できます)。 +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### 全文検索クエリ -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. 
@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### サブグラフ メタデータ -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -ブロックが提供されている場合、メタデータはそのブロックのものであり、そうでない場合は、最新のインデックス付きブロックが使用されます。提供される場合、ブロックはサブグラフの開始ブロックの後にあり、最後にインデックス付けされたブロック以下でなければなりません。 +If a block is provided, the metadata is as of that block; if not, the latest indexed block is used. 
If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s - hash: ブロックのハッシュ - number: ブロック番号 -- timestamp: 可能であれば、ブロックのタイムスタンプ (これは現在、EVMネットワークのインデックスを作成するサブグラフでのみ利用可能) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ja/subgraphs/querying/introduction.mdx b/website/src/pages/ja/subgraphs/querying/introduction.mdx index d85e6980674d..0424d25aa607 100644 --- a/website/src/pages/ja/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ja/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## 概要 -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. 
You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx index fc7402c28349..5e0531142b22 100644 --- a/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: API キーの管理 +title: Managing API keys --- ## 概要 -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. 
+Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - 使用した GRT の量 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - API キーの使用を許可されたドメイン名の表示と管理 - - API キーでクエリ可能なサブグラフの割り当て + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ja/subgraphs/querying/python.mdx b/website/src/pages/ja/subgraphs/querying/python.mdx index 4a42ae3275b4..cae61f4b49e0 100644 --- a/website/src/pages/ja/subgraphs/querying/python.mdx +++ b/website/src/pages/ja/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgroundsは、[Playgrounds](https://playgrounds.network/)によって構築された、サブグラフをクエリするための直感的なPythonライブラリです。サブグラフデータを直接Pythonデータ環境に接続し、[pandas](https://pandas.pydata.org/)のようなライブラリを使用してデータ分析を行うことができます! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! 
Subgroundsは、GraphQLクエリを構築するためのシンプルなPythonic APIを提供し、ページ分割のような面倒なワークフローを自動化し、制御されたスキーマ変換によって高度なユーザーを支援します。 @@ -17,27 +17,27 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -インストールしたら、以下のクエリでsubgroundsを試すことができる。以下の例では、Aave v2 プロトコルのサブグラフを取得し、TVL (Total Value Locked) 順に並べられた上位 5 つの市場をクエリし、その名前と TVL (USD) を選択し、pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) としてデータを返します。 +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# サブグラフを読み込む +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# クエリの構築 +# Construct the query latest_markets = aave_v2.Query.markets( - orderBy=aave_v2.Market.totalValueLockedUSD、 - orderDirection='desc'、 - first=5、 + orderBy=aave_v2.Market.totalValueLockedUSD, + orderDirection='desc', + first=5, ) -# クエリをデータフレームに戻す +# Return query to a dataframe sg.query_df([ - latest_markets.name、 - latest_markets.totalValueLockedUSD、 + latest_markets.name, + latest_markets.totalValueLockedUSD, ]) ``` diff --git a/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index d1964ae0764b..4bf98ccc0c6f 100644 --- a/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -サブグラフはサブグラフIDで識別され、サブグラフの各バージョンはデプロイメントIDで識別されます。 +A Subgraph is identified by a Subgraph ID, and 
each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. 
However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this requires updating the query code manually every time a new version of the Subgraph is published. Deployment ID を使用するエンドポイントの例: @@ -20,8 +20,8 @@ Deployment ID を使用するエンドポイントの例: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. 
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/ja/subgraphs/quick-start.mdx b/website/src/pages/ja/subgraphs/quick-start.mdx index 1e322680d75d..df410ba8ec9b 100644 --- a/website/src/pages/ja/subgraphs/quick-start.mdx +++ b/website/src/pages/ja/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: クイックスタート --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Graph CLI をインストールする @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> 特定のサブグラフのコマンドは、[Subgraph Studio](https://thegraph.com/studio/) のサブグラフ ページで見つけることができます。 +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. 
+The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. 
- **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -サブグラフを初期化する際に予想されることの例については、次のスクリーンショットを参照してください。 +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. 
-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -サブグラフが作成されたら、次のコマンドを実行します。 +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. 
Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. 
-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. 
+ - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). 
diff --git a/website/src/pages/ja/substreams/_meta-titles.json b/website/src/pages/ja/substreams/_meta-titles.json index 6262ad528c3a..1c58294c4bfc 100644 --- a/website/src/pages/ja/substreams/_meta-titles.json +++ b/website/src/pages/ja/substreams/_meta-titles.json @@ -1,3 +1,3 @@ { - "developing": "Developing" + "developing": "開発" } diff --git a/website/src/pages/ja/substreams/developing/dev-container.mdx b/website/src/pages/ja/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/ja/substreams/developing/dev-container.mdx +++ b/website/src/pages/ja/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. 
This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/ja/substreams/developing/sinks.mdx b/website/src/pages/ja/substreams/developing/sinks.mdx index 3f34e35b5163..56936182c3aa 100644 --- a/website/src/pages/ja/substreams/developing/sinks.mdx +++ b/website/src/pages/ja/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. 
## Sinks @@ -26,7 +26,7 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | +| 名称 | サポート | Maintainer | Source Code | | --- | --- | --- | --- | | SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | | Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | @@ -40,7 +40,7 @@ Sinks are integrations that allow you to send the extracted data to different de ### Community -| Name | Support | Maintainer | Source Code | +| 名称 | サポート | Maintainer | Source Code | | --- | --- | --- | --- | | MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | | Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | diff --git a/website/src/pages/ja/substreams/developing/solana/account-changes.mdx b/website/src/pages/ja/substreams/developing/solana/account-changes.mdx index 6a018b522d67..bbd30084cf9e 100644 --- a/website/src/pages/ja/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/ja/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). 
+For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/ja/substreams/developing/solana/transactions.mdx b/website/src/pages/ja/substreams/developing/solana/transactions.mdx index 7912b5535ab2..ec1b7d592c37 100644 --- a/website/src/pages/ja/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ja/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### サブグラフ 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2.
Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/ja/substreams/introduction.mdx b/website/src/pages/ja/substreams/introduction.mdx index 8af3eada8419..771e1cf64862 100644 --- a/website/src/pages/ja/substreams/introduction.mdx +++ b/website/src/pages/ja/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/ja/substreams/publishing.mdx b/website/src/pages/ja/substreams/publishing.mdx index 6de1dc158d15..4529da331fc6 100644 --- a/website/src/pages/ja/substreams/publishing.mdx +++ b/website/src/pages/ja/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. 
+A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/ja/substreams/quick-start.mdx b/website/src/pages/ja/substreams/quick-start.mdx index 9f23f174a4f1..6bbe99168657 100644 --- a/website/src/pages/ja/substreams/quick-start.mdx +++ b/website/src/pages/ja/substreams/quick-start.mdx @@ -1,5 +1,5 @@ --- -title: Substreams Quick Start +title: サブストリーム速習ガイド sidebarTitle: クイックスタート --- diff --git a/website/src/pages/ja/supported-networks.mdx b/website/src/pages/ja/supported-networks.mdx index b7fa1d0d8e2a..4e138e5575cc 100644 --- a/website/src/pages/ja/supported-networks.mdx +++ b/website/src/pages/ja/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. 
- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. 
diff --git a/website/src/pages/ja/token-api/_meta-titles.json b/website/src/pages/ja/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/ja/token-api/_meta-titles.json +++ b/website/src/pages/ja/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/ja/token-api/_meta.js b/website/src/pages/ja/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/ja/token-api/_meta.js +++ b/website/src/pages/ja/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/ja/token-api/faq.mdx b/website/src/pages/ja/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/ja/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
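The troubleshooting points above can be condensed into a small helper. This is a hedged sketch: `authHeaders` is a hypothetical name, and the token passed in is assumed to be a JWT generated on The Graph Market, not the raw API key.

```typescript
// Hedged sketch of the headers an authenticated Token API request needs.
function authHeaders(accessToken: string): Record<string, string> {
  return {
    // The "Bearer " prefix is required; omitting it is a common cause of 401s.
    Authorization: `Bearer ${accessToken}`,
    Accept: "application/json",
  };
}

// Usage with a runtime that has global fetch (Node 18+); the wallet address
// in the URL is a placeholder:
// const res = await fetch(
//   "https://token-api.thegraph.com/balances/evm/0xWALLET",
//   { headers: authHeaders(process.env.ACCESS_TOKEN ?? "") },
// );
```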
+ +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/ja/token-api/mcp/claude.mdx b/website/src/pages/ja/token-api/mcp/claude.mdx index 0da8f2be031d..c44f99914138 100644 --- a/website/src/pages/ja/token-api/mcp/claude.mdx +++ b/website/src/pages/ja/token-api/mcp/claude.mdx @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## コンフィギュレーション Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/ja/token-api/mcp/cline.mdx b/website/src/pages/ja/token-api/mcp/cline.mdx index ab54c0c8f6f0..64f32deea38f 100644 --- a/website/src/pages/ja/token-api/mcp/cline.mdx +++ b/website/src/pages/ja/token-api/mcp/cline.mdx @@ -10,9 +10,9 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## コンフィギュレーション Create or edit your `cline_mcp_settings.json` file. diff --git a/website/src/pages/ja/token-api/mcp/cursor.mdx b/website/src/pages/ja/token-api/mcp/cursor.mdx index 658108d1337b..1c4da59b67bc 100644 --- a/website/src/pages/ja/token-api/mcp/cursor.mdx +++ b/website/src/pages/ja/token-api/mcp/cursor.mdx @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## コンフィギュレーション Create or edit your `~/.cursor/mcp.json` file. 
diff --git a/website/src/pages/ja/token-api/quick-start.mdx b/website/src/pages/ja/token-api/quick-start.mdx index 4653c3d41ac6..0b64515243cb 100644 --- a/website/src/pages/ja/token-api/quick-start.mdx +++ b/website/src/pages/ja/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: クイックスタート --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/ko/about.mdx b/website/src/pages/ko/about.mdx index 02b29895881f..833b097673d2 100644 --- a/website/src/pages/ko/about.mdx +++ b/website/src/pages/ko/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The flow follows these steps: 1. A dapp adds data to Ethereum through a transaction on a smart contract. 2. The smart contract emits one or more events while processing the transaction. -3.
Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## Next Steps -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. 
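A Subgraph page's GraphQL playground accepts the standard pagination arguments, which can be sketched as a tiny query builder. `tokens` and `pagedQuery` are placeholder names for illustration, not part of any particular schema.

```typescript
// Sketch of paging through a Subgraph's entities using the standard
// `first` and `skip` arguments.
function pagedQuery(entity: string, pageSize: number, page: number): string {
  const skip = pageSize * (page - 1); // page is 1-indexed
  return `{ ${entity}(first: ${pageSize}, skip: ${skip}) { id } }`;
}

// Second page of 100 entities:
const q = pagedQuery("tokens", 100, 2);
// → "{ tokens(first: 100, skip: 100) { id } }"
```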
diff --git a/website/src/pages/ko/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ko/archived/arbitrum/arbitrum-faq.mdx index 562824e64e95..d121f5a2d0f3 100644 --- a/website/src/pages/ko/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/ko/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -39,7 +39,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? 
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-faq.mdx index 904587bfc535..62477d152eff 100644 --- a/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con L2 전송 도구는 Arbitrum의 기본 메커니즘을 사용하여 L1에서 L2로 메시지를 보냅니다. 이 메커니즘은 "재시도 가능한 티켓"이라고 하며 Arbitrum GRT 브리지를 포함한 모든 네이티브 토큰 브리지를 사용하여 사용됩니다. 재시도 가능한 티켓에 대해 자세히 읽을 수 있습니다 [Arbitrum 문서] (https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -자산(하위 그래프, 스테이크, 위임 또는 큐레이션) 을 L2로 이전하면 L2에서 재시도 가능한 티켓을 생성하는 Arbitrum GRT 브리지를 통해 메시지가 전송됩니다. 전송 도구에는 거래에 일부 ETH 값이 포함되어 있으며, 이는 1) 티켓 생성 비용을 지불하고 2) L2에서 티켓을 실행하기 위해 가스 비용을 지불하는 데 사용됩니다. 그러나 티켓이 L2에서 실행될 준비가 될 때까지 가스 가격이 시간에 따라 달라질 수 있으므로 이 자동 실행 시도가 실패할 수 있습니다. 
그런 일이 발생하면 Arbitrum 브릿지는 재시도 가능한 티켓을 최대 7일 동안 유지하며 누구나 티켓 "사용"을 재시도할 수 있습니다(Arbitrum에 브릿지된 일부 ETH가 있는 지갑이 필요함). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -이것이 모든 전송 도구에서 '확인' 단계라고 부르는 것입니다. 자동 실행이 성공하는 경우가 가장 많기 때문에 대부분의 경우 자동으로 실행되지만 제대로 진행되었는지 다시 확인하는 것이 중요합니다. 성공하지 못하고 7일 이내에 성공적인 재시도가 없으면 Arbitrum 브릿지는 티켓을 폐기하며 귀하의 자산(하위 그래프, 지분, 위임 또는 큐레이션)은 손실되어 복구할 수 없습니다. Graph 코어 개발자는 이러한 상황을 감지하고 너무 늦기 전에 티켓을 교환하기 위해 모니터링 시스템을 갖추고 있지만 전송이 제 시간에 완료되도록 하는 것은 궁극적으로 귀하의 책임입니다. 거래를 확인하는 데 문제가 있는 경우 [이 양식]을 사용하여 문의하세요 (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) 핵심 개발자들이 도와드릴 것입니다. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. 
If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,41 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## 하위 그래프 전송 -### 내 서브그래프를 어떻게 이전하나요? +### How do I transfer my Subgraph? +To transfer your Subgraph, you will need to complete the following steps: + 1. 이더리움 메인넷에서 전송 시작 2. 확인을 위해 20분 정도 기다리세요 -3. Arbitrum에서 하위 그래프 전송 확인\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Arbitrum에 하위 그래프 게시 완료 +4. Finish publishing Subgraph on Arbitrum 5. 쿼리 URL 업데이트(권장) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### 어디에서 이전을 시작해야 합니까? -[Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer) 또는 하위 그래프 세부정보 페이지에서 전송을 시작할 수 있습니다. 하위 그래프 세부 정보 페이지에서 "하위 그래프 전송" 버튼을 클릭하여 전송을 시작하세요.
+You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### 내 하위 그래프가 전송될 때까지 얼마나 기다려야 합니까? +### How long do I need to wait until my Subgraph is transferred? 환승 시간은 약 20분 정도 소요됩니다. Arbitrum 브리지는 브리지 전송을 자동으로 완료하기 위해 백그라운드에서 작동하고 있습니다. 경우에 따라 가스 비용이 급증할 수 있으며 거래를 다시 확인해야 합니다. -### 내 하위 그래프를 L2로 전송한 후에도 계속 검색할 수 있나요? +### Will my Subgraph still be discoverable after I transfer it to L2? -귀하의 하위 그래프는 해당 하위 그래프가 게시된 네트워크에서만 검색 가능합니다. 예를 들어, 귀하의 하위 그래프가 Arbitrum One에 있는 경우 Arbitrum One의 Explorer에서만 찾을 수 있으며 Ethereum에서는 찾을 수 없습니다. 올바른 네트워크에 있는지 확인하려면 페이지 상단의 네트워크 전환기에서 Arbitrum One을 선택했는지 확인하세요. 이전 후 L1 하위 그래프는 더 이상 사용되지 않는 것으로 표시됩니다. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated. -### 내 하위 그래프를 전송하려면 게시해야 합니까? +### Does my Subgraph need to be published to transfer it? -하위 그래프 전송 도구를 활용하려면 하위 그래프가 이미 이더리움 메인넷에 게시되어 있어야 하며 하위 그래프를 소유한 지갑이 소유한 일부 큐레이션 신호가 있어야 합니다. 하위 그래프가 게시되지 않은 경우 Arbitrum One에 직접 게시하는 것이 좋습니다. 관련 가스 요금은 상당히 낮아집니다. 게시된 하위 그래프를 전송하고 싶지만 소유자 계정이 이에 대한 신호를 큐레이팅하지 않은 경우 해당 계정에서 소액(예: 1 GRT)을 신호로 보낼 수 있습니다. "자동 마이그레이션" 신호를 선택했는지 확인하세요. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower.
If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Arbitrum으로 이전한 후 내 서브그래프의 이더리움 메인넷 버전은 어떻게 되나요? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -귀하의 하위 그래프를 Arbitrum으로 이전한 후에는 Ethereum 메인넷 버전이 더 이상 사용되지 않습니다. 48시간 이내에 쿼리 URL을 업데이트하는 것이 좋습니다. 그러나 타사 dapp 지원이 업데이트될 수 있도록 메인넷 URL이 작동하도록 유지하는 유예 기간이 있습니다. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### 양도한 후에 Arbitrum에 다시 게시해야 합니까? @@ -78,21 +80,21 @@ If you have the L1 transaction hash (which you can find by looking at the recent ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### L2에서 Ethereum Ethereum 메인넷과 게시 및 버전 관리가 동일합니까? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### 내 하위 그래프의 큐레이션이 내 하위 그래프와 함께 이동하나요? +### Will my Subgraph's curation move with my Subgraph? 
-자동 마이그레이션 신호를 선택한 경우 자체 큐레이션의 100%가 하위 그래프와 함께 Arbitrum One으로 이동됩니다. 하위 그래프의 모든 큐레이션 신호는 전송 시 GRT로 변환되며, 큐레이션 신호에 해당하는 GRT는 L2 하위 그래프의 신호 생성에 사용됩니다. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -다른 큐레이터는 GRT 일부를 인출할지, 아니면 L2로 전송하여 동일한 하위 그래프의 신호를 생성할지 선택할 수 있습니다. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### 이전 후 구독을 이더리움 메인넷으로 다시 이동할 수 있나요? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -이전되면 이 하위 그래프의 Ethereum 메인넷 버전은 더 이상 사용되지 않습니다. 메인넷으로 다시 이동하려면 다시 메인넷에 재배포하고 게시해야 합니다. 그러나 인덱싱 보상은 결국 Arbitrum One에 전적으로 배포되므로 이더리움 메인넷으로 다시 이전하는 것은 권장되지 않습니다. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### 전송을 완료하려면 브리지된 ETH가 필요한 이유는 무엇입니까? @@ -204,19 +206,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans \*필요한 경우 - 즉, 계약 주소를 사용하고 있습니다. -### 내가 큐레이트한 하위 그래프가 L2로 이동했는지 어떻게 알 수 있나요? +### How will I know if the Subgraph I curated has moved to L2? -하위 세부정보 페이지를 보면 해당 하위 하위가 이전되었음을 알리는 배너가 표시됩니다. 메시지에 따라 큐레이션을 전송할 수 있습니다. 이동한 하위 그래프의 하위 그래프 세부정보 페이지에서도 이 정보를 찾을 수 있습니다. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### 큐레이션을 L2로 옮기고 싶지 않으면 어떻게 되나요? 
-하위 그래프가 더 이상 사용되지 않으면 신호를 철회할 수 있는 옵션이 있습니다. 마찬가지로 하위 그래프가 L2로 이동한 경우 이더리움 메인넷에서 신호를 철회하거나 L2로 신호를 보낼 수 있습니다. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### 내 큐레이션이 성공적으로 전송되었는지 어떻게 알 수 있나요? L2 전송 도구가 시작된 후 약 20분 후에 Explorer를 통해 신호 세부 정보에 액세스할 수 있습니다. -### 한 번에 두 개 이상의 하위 그래프에 대한 내 큐레이션을 전송할 수 있나요? +### Can I transfer my curation on more than one Subgraph at a time? 현재는 대량 전송 옵션이 없습니다. @@ -264,7 +266,7 @@ L2 전송 도구가 지분 전송을 완료하는 데 약 20분이 소요됩니 ### 지분을 양도하기 전에 Arbitrum에서 색인을 생성해야 합니까? -인덱싱을 설정하기 전에 먼저 지분을 효과적으로 이전할 수 있지만, L2의 하위 그래프에 할당하고 이를 인덱싱하고 POI를 제시할 때까지는 L2에서 어떤 보상도 청구할 수 없습니다. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### 내가 인덱싱 지분을 이동하기 전에 위임자가 자신의 위임을 이동할 수 있나요? diff --git a/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-guide.mdx index 549618bfd7c3..4a34da9bad0e 100644 --- a/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph has made it easy to move to L2 on Arbitrum One. For each protocol part Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
-## How to transfer your subgraph to Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefits of transferring your subgraphs +## Benefits of transferring your Subgraphs The Graph's community and core devs have [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security from Ethereum but provides drastically lower gas fees. -When you publish or upgrade your subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your subgraphs to Arbitrum, any future updates to your subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your subgraph, increasing the rewards for Indexers on your subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. 
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Understanding what happens with signal, your L1 subgraph and query URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -When you choose to transfer the subgraph, this will convert all of the subgraph's curation signal to GRT. This is equivalent to "deprecating" the subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the subgraph, where they will be used to mint signal on your behalf. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. 
If a subgraph owner does not transfer their subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -As soon as the subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the subgraph. However, there will be Indexers that will 1) keep serving transferred subgraphs for 24 hours, and 2) immediately start indexing the subgraph on L2. Since these Indexers already have the subgraph indexed, there should be no need to wait for the subgraph to sync, and it will be possible to query the L2 subgraph almost immediately. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries to the L2 subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. 
After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choosing your L2 wallet -When you published your subgraph on mainnet, you used a connected wallet to create the subgraph, and this wallet owns the NFT that represents this subgraph and allows you to publish updates. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -When transferring the subgraph to Arbitrum, you can choose a different wallet that will own this subgraph NFT on L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1. -If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. 
-**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the subgraph will be lost and cannot be recovered.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparing for the transfer: bridging some ETH -Transferring the subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. 
If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (e.g. 0.01 ETH) for your transaction to be approved. -## Finding the subgraph Transfer Tool +## Finding the Subgraph Transfer Tool -You can find the L2 Transfer Tool when you're looking at your subgraph's page on Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -It is also available on Explorer if you're connected with the wallet that owns a subgraph and on that subgraph's page on Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicking on the Transfer to L2 button will open the transfer tool where you can ## Step 1: Starting the transfer -Before starting the transfer, you must decide which address will own the subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
-Also please note transferring the subgraph requires having a nonzero amount of signal on the subgraph with the same account that owns the subgraph; if you haven't signaled on the subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 subgraph (see "Understanding what happens with signal, your L1 subgraph and query URLs" above for more details on what goes on behind the scenes). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). 
-If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## Step 2: Waiting for the subgraph to get to L2 +## Step 2: Waiting for the Subgraph to get to L2 -After you start the transfer, the message that sends your L1 subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. @@ -80,7 +80,7 @@ Once this wait time is over, Arbitrum will attempt to auto-execute the transfer ## Step 3: Confirming the transfer -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the subgraph on the Arbitrum contracts. 
In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your subgraph to L2 will be pending and require a retry within 7 days. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. @@ -88,33 +88,33 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Step 4: Finishing the transfer on L2 -At this point, your subgraph and GRT have been received on Arbitrum, but the subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -This will publish the subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. 
This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Step 5: Updating the query URL -Your subgraph has been successfully transferred to Arbitrum! To query the subgraph, the new URL will be : +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note that the subgraph ID on Arbitrum will be a different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the subgraph has been synced on L2. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## How to transfer your curation to Arbitrum (L2) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e.
signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. 
There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Choosing your L2 wallet @@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Withdrawing your curation on L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/ko/archived/sunrise.mdx b/website/src/pages/ko/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/ko/archived/sunrise.mdx +++ b/website/src/pages/ko/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service.

-### Why were subgraphs published to Arbitrum, did it start indexing a different network?
+### Why were Subgraphs published to Arbitrum? Did it start indexing a different network?

-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/).

## About the Upgrade Indexer

> The upgrade Indexer is currently active.

-The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.

### What does the upgrade Indexer do?

-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. 
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
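The hand-off policy described above (the upgrade Indexer serves a Subgraph until at least 3 other Indexers consistently serve it, and stops if the Subgraph has not been queried in 30 days) can be sketched as a simple decision function. This is an illustrative simplification, not the actual indexer-agent logic; only the two thresholds come from the FAQ text, everything else is assumed for the example.

```python
from datetime import datetime, timedelta

# Thresholds stated in the FAQ above; the function itself is illustrative.
MIN_OTHER_INDEXERS = 3
INACTIVITY_WINDOW = timedelta(days=30)

def upgrade_indexer_keeps_serving(other_healthy_indexers: int,
                                  last_queried_at: datetime,
                                  now: datetime) -> bool:
    """Return True while the upgrade Indexer should keep acting as a fallback."""
    if other_healthy_indexers >= MIN_OTHER_INDEXERS:
        # Enough independent Indexers consistently serve this Subgraph.
        return False
    if now - last_queried_at > INACTIVITY_WINDOW:
        # No queries in the last 30 days.
        return False
    return True

now = datetime(2024, 6, 1)
print(upgrade_indexer_keeps_serving(2, now - timedelta(days=7), now))   # True
print(upgrade_indexer_keeps_serving(3, now - timedelta(days=7), now))   # False
print(upgrade_indexer_keeps_serving(1, now - timedelta(days=45), now))  # False
```

Because the query volume to the upgrade Indexer trends towards zero as others take over, both exit conditions are expected to trigger eventually for any given Subgraph.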
diff --git a/website/src/pages/ko/global.json b/website/src/pages/ko/global.json index f0bd80d9715b..4364984ad90c 100644 --- a/website/src/pages/ko/global.json +++ b/website/src/pages/ko/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Description", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Description", + "liveResponse": "Live Response", + "example": "Example" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/ko/index.json b/website/src/pages/ko/index.json index c2d9d0bed1be..95bf30d1752a 100644 --- a/website/src/pages/ko/index.json +++ b/website/src/pages/ko/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. 
This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/ko/indexing/chain-integration-overview.mdx b/website/src/pages/ko/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/ko/indexing/chain-integration-overview.mdx +++ b/website/src/pages/ko/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. 
How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/ko/indexing/new-chain-integration.mdx b/website/src/pages/ko/indexing/new-chain-integration.mdx index e45c4b411010..670e06c752c3 100644 --- a/website/src/pages/ko/indexing/new-chain-integration.mdx +++ b/website/src/pages/ko/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. 
Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing.
+While JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces the number of RPC calls required for general indexing by 90%.

-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes.
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.

-> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_calls` are not good practice for developers.)

## Graph Node Configuration

-Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/ko/indexing/overview.mdx b/website/src/pages/ko/indexing/overview.mdx index 914b04e0bf47..4a980db27f12 100644 --- a/website/src/pages/ko/indexing/overview.mdx +++ b/website/src/pages/ko/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. 
Indexers also earn rewards for delegated stake from Delegators, to contribute to the network. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. 
**An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? 
@@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. 
-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. 
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.

| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
| --- | :-: | :-: | :-: | :-: | :-: |

@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making

## Infrastructure

-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.

-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.

-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexer's interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. 
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, and shares important decision-making information with clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. 
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks. 
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/ko/indexing/supported-network-requirements.mdx b/website/src/pages/ko/indexing/supported-network-requirements.mdx index df15ef48d762..3d57daa55709 100644 --- a/website/src/pages/ko/indexing/supported-network-requirements.mdx +++ b/website/src/pages/ko/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/ko/indexing/tap.mdx b/website/src/pages/ko/indexing/tap.mdx index 3bab672ab211..477534d63201 100644 --- a/website/src/pages/ko/indexing/tap.mdx +++ b/website/src/pages/ko/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Overview -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/ko/indexing/tooling/graph-node.mdx b/website/src/pages/ko/indexing/tooling/graph-node.mdx index 0250f14a3d08..edde8a157fd3 100644 --- a/website/src/pages/ko/indexing/tooling/graph-node.mdx +++ b/website/src/pages/ko/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
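To make the archive-node requirement above concrete, here is a hedged sketch of an EIP-1898-style `eth_call` request: the second parameter pins the call to a specific block by hash rather than by number, which only an archive-capable node can serve for historical state. The `to` address, calldata, and block hash below are placeholders, not real values:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_call",
  "params": [
    { "to": "0x0000000000000000000000000000000000000001", "data": "0x18160ddd" },
    { "blockHash": "0x0000000000000000000000000000000000000000000000000000000000000001", "requireCanonical": true }
  ]
}
```

`requireCanonical` asks the node to reject the call if the referenced block is not on the canonical chain, which is how Graph Node avoids indexing against reorged-out state.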
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -79,8 +79,8 @@ When it is running Graph Node exposes the following ports: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ When it is running Graph Node exposes the following ports: ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. 
When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and replicas can be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards. +> It is generally better to make a single database as big as possible, before starting with shards. 
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). 
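The sharded store and multi-network provider setup described above can be sketched in `config.toml` roughly as follows. Shard names, connection strings, and provider URLs here are illustrative placeholders, not values from this document; see the full `config.toml` documentation linked earlier for the complete set of options:

```toml
[store]
[store.primary]
connection = "postgresql://graph:password@primary-db:5432/graph"
pool_size = 400  # an upper limit; Graph Node only keeps connections it needs

[store.vip]  # a separate shard for high-volume Subgraphs
connection = "postgresql://graph:password@vip-db:5432/graph"
pool_size = 50

[chains]
# node_id of the Graph Node instance responsible for block ingestion
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "vip"
provider = [
  # an archive node; Graph Node prefers cheaper providers when the workload allows
  { label = "archive", url = "http://archive-node.example:8545", features = ["archive"] },
]
```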
@@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition, setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. 
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
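As a sketch of how the indexing status API described earlier can be used when diagnosing slow indexing: the helper below builds the JSON body for a POST to the status endpoint (`http://localhost:8030/graphql` by default). Field names follow the status schema linked above; the deployment ID in the usage note is a placeholder.

```typescript
// Build the request body for Graph Node's indexing status API.
// The selected fields (health, synced, fatalError, chains) follow the
// index-node GraphQL schema; trim the selection to what you need.
function indexingStatusBody(deployments: string[]): string {
  const query = `
    query ($subgraphs: [String!]!) {
      indexingStatuses(subgraphs: $subgraphs) {
        subgraph
        health
        synced
        fatalError { message }
        chains { network latestBlock { number } chainHeadBlock { number } }
      }
    }`;
  return JSON.stringify({ query, variables: { subgraphs: deployments } });
}
```

Posting this body (e.g. with `curl` or `fetch`) returns, per deployment, its health and how far its latest indexed block lags the chain head, which helps narrow down which of the slowness causes above applies.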
-#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing, Subgraphs might fail if they encounter unexpected data, a component that is not working as expected, or a bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. 
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. 
If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). 
diff --git a/website/src/pages/ko/indexing/tooling/graphcast.mdx b/website/src/pages/ko/indexing/tooling/graphcast.mdx index 4072877a1257..d1795e9be577 100644 --- a/website/src/pages/ko/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ko/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Learn More diff --git a/website/src/pages/ko/resources/benefits.mdx b/website/src/pages/ko/resources/benefits.mdx index 06b1b5594b1f..bc912072b801 100644 --- a/website/src/pages/ko/resources/benefits.mdx +++ b/website/src/pages/ko/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ko/resources/glossary.mdx b/website/src/pages/ko/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/ko/resources/glossary.mdx +++ b/website/src/pages/ko/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/ko/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ko/resources/migration-guides/assemblyscript-migration-guide.mdx index 85f6903a6c69..aead2514ff51 100644 --- a/website/src/pages/ko/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ko/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Features @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## How to upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing. ### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first.
```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/ko/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ko/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/ko/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ko/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. -> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
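The null-check workaround recommended in the AssemblyScript migration guide above can be sketched in plain TypeScript; the same shape applies in AssemblyScript mappings. `Wrapper`, `x`, and `y` here are hypothetical names mirroring the guide's example, not real graph-ts APIs:

```typescript
// Sketch of the null-check pattern from the AssemblyScript migration notes above.
// `Wrapper`, `x`, and `y` are illustrative, matching the guide's example.
class Wrapper {
  constructor(public n: number | null) {}
}

const x = 5
const y: number | null = 10

const wrapper = new Wrapper(y)

// Instead of `wrapper.n = wrapper.n + x` (which the AS compiler fails to
// flag when `n` is nullable), check for null before operating on the value:
if (wrapper.n !== null) {
  wrapper.n = wrapper.n + x
}

console.log(wrapper.n) // 15
```

The guard narrows `wrapper.n` to a non-null number inside the block, which is exactly what the compiler needs to make the arithmetic safe.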
## Migration CLI tool diff --git a/website/src/pages/ko/resources/roles/curating.mdx b/website/src/pages/ko/resources/roles/curating.mdx index 1cc05bb7b62f..a228ebfb3267 100644 --- a/website/src/pages/ko/resources/roles/curating.mdx +++ b/website/src/pages/ko/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. 
In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. 
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## How to Signal -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. 
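The fee mechanics described above can be sketched as follows. This is an illustration with made-up amounts; the 1% initial curation tax and 0.5% auto-migration tax are the figures quoted in this guide and are subject to governance:

```typescript
// Illustrative curation-tax math, using basis points to avoid float error.
// The rates (1% initial tax, 0.5% auto-migrate tax) are the figures quoted
// above and may change via governance; the signaled amount is hypothetical.
function taxGrt(amountGrt: number, taxBps: number): number {
  return (amountGrt * taxBps) / 10_000
}

const signaled = 10_000 // hypothetical GRT signaled on a Subgraph

const initialTax = taxGrt(signaled, 100) // 1% burned on first signal -> 100 GRT
const migrationTax = taxGrt(signaled, 50) // 0.5% on each auto-migration -> 50 GRT

console.log(initialTax, migrationTax)
```

Note that the migration tax applies on every auto-migration, so frequent version publishing compounds the cost for auto-migrated curators.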
Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risks 1. 
The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. 
A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Curation FAQs ### 1. What % of query fees do Curators earn? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. 
A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. 
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Can I sell my curation shares? diff --git a/website/src/pages/ko/resources/subgraph-studio-faq.mdx b/website/src/pages/ko/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/ko/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ko/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. 
How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, like any other on the network. diff --git a/website/src/pages/ko/resources/tokenomics.mdx b/website/src/pages/ko/resources/tokenomics.mdx index 4a9b42ca6e0d..dac3383a28e7 100644 --- a/website/src/pages/ko/resources/tokenomics.mdx +++ b/website/src/pages/ko/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Overview -The Graph is a decentralized protocol that enables easy access to blockchain data.
It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Backbone of blockchain data @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. 
While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. 
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. 
Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. 
Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data.

![Total burned GRT](/img/total-burned-grt.jpeg)

diff --git a/website/src/pages/ko/sps/introduction.mdx b/website/src/pages/ko/sps/introduction.mdx
index b11c99dfb8e5..92d8618165dd 100644
--- a/website/src/pages/ko/sps/introduction.mdx
+++ b/website/src/pages/ko/sps/introduction.mdx
@@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs
sidebarTitle: Introduction
---

-Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.

## Overview

-Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.

### Specifics

There are two methods of enabling this technology:

-1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph.
+1.
**Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
### Additional Resources

@@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/ko/sps/sps-faq.mdx b/website/src/pages/ko/sps/sps-faq.mdx
index abc1f3906686..250c466d5929 100644
--- a/website/src/pages/ko/sps/sps-faq.mdx
+++ b/website/src/pages/ko/sps/sps-faq.mdx
@@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi
Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.

-## What are Substreams-powered subgraphs?
+## What are Substreams-powered Subgraphs?

-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities.
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API.
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.

-## How are Substreams-powered subgraphs different from subgraphs?
+## How are Substreams-powered Subgraphs different from Subgraphs?

Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain.

-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
+By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.

-## What are the benefits of using Substreams-powered subgraphs?
+## What are the benefits of using Substreams-powered Subgraphs?

-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph.
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.

## What are the benefits of Substreams?

@@ -35,7 +35,7 @@ There are many benefits to using Substreams, including:

- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).

-- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets.
+- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.

- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.

@@ -63,17 +63,17 @@ There are many benefits to using Firehose, including:

- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available.

-## Where can developers access more information about Substreams-powered subgraphs and Substreams?
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. 
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers.

## How can you build and deploy a Substreams-powered Subgraph?

After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).

-## Where can I find examples of Substreams and Substreams-powered subgraphs?
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?

-You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs.
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.

-## What do Substreams and Substreams-powered subgraphs mean for The Graph Network?
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?

The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them.

diff --git a/website/src/pages/ko/sps/triggers.mdx b/website/src/pages/ko/sps/triggers.mdx
index 816d42cb5f12..66687aa21889 100644
--- a/website/src/pages/ko/sps/triggers.mdx
+++ b/website/src/pages/ko/sps/triggers.mdx
@@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL.

## Overview

-Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). 
### Additional Resources diff --git a/website/src/pages/ko/sps/tutorial.mdx b/website/src/pages/ko/sps/tutorial.mdx index 55e563608bce..7358f8c02a20 100644 --- a/website/src/pages/ko/sps/tutorial.mdx +++ b/website/src/pages/ko/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Get Started @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id:
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:

```ts
import { Protobuf } from 'as-proto/assembly'
@@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command:
npm run protogen
```

-This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.

### Conclusion

-Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.

### Video Tutorial

diff --git a/website/src/pages/ko/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ko/subgraphs/best-practices/avoid-eth-calls.mdx
index e40a7b3712e4..07249c97dd2a 100644
--- a/website/src/pages/ko/subgraphs/best-practices/avoid-eth-calls.mdx
+++ b/website/src/pages/ko/subgraphs/best-practices/avoid-eth-calls.mdx
@@ -1,19 +1,19 @@
---
title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls
-sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls'
+sidebarTitle: Avoiding eth_calls
---

## TLDR

-`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.

## Why Avoiding `eth_calls` Is a Best Practice

-Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed.
+Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed.

### What Does an eth_call Look Like?

-`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
+`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events.
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:

```yaml
event Transfer(address indexed from, address indexed to, uint256 value);
@@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void {
}
```

-This is functional, however is not ideal as it slows down our subgraph’s indexing.
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.

## How to Eliminate `eth_calls`

@@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within

event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo);
```

-With this update, the subgraph can directly index the required data without external calls:
+With this update, the Subgraph can directly index the required data without external calls:

```typescript
import { Address } from '@graphprotocol/graph-ts'
@@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c

The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call.

-Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0.
+Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0.

## Conclusion

-You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs.
+You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ko/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ko/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/ko/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
diff --git a/website/src/pages/ko/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ko/subgraphs/best-practices/grafting-hotfix.mdx index d514e1633c75..674cf6b87c62 100644 --- a/website/src/pages/ko/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ko/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ko/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/ko/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
diff --git a/website/src/pages/ko/subgraphs/best-practices/pruning.mdx b/website/src/pages/ko/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/ko/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ko/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ko/subgraphs/best-practices/timeseries.mdx index cacdc44711fe..9732199531a8 100644 --- a/website/src/pages/ko/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ko/subgraphs/billing.mdx b/website/src/pages/ko/subgraphs/billing.mdx index c9f380bb022c..ec654ca63f55 100644 --- a/website/src/pages/ko/subgraphs/billing.mdx +++ b/website/src/pages/ko/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/ko/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ko/subgraphs/developing/creating/advanced.mdx index ee9918f5f254..8dbc48253034 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Overview -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/ko/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ko/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2ac894695fe1..cd81dc118f28 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`.
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx index 35bb04826c98..5be2530c4d6b 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Version | Release notes | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. 
Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. 
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array<string>): void` - logs an informational message. - `log.warning(fmt: string, args: Array<string>): void` - logs a warning. - `log.error(fmt: string, args: Array<string>): void` - logs an error message. -- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them.
The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
diff --git a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..65e8e3d4a8a3 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
diff --git a/website/src/pages/ko/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ko/subgraphs/developing/creating/install-the-cli.mdx index 674cc5bc22d2..c9d6966ef5fe 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Create a Subgraph ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `<SUBGRAPH_SLUG>` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `<SUBGRAPH_SLUG>` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/ko/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ko/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..7e0f889447c5 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
#### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Languages supported diff --git a/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx index 4823231d9a40..180a343470b1 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
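The numbered steps above correspond to a short CLI sequence. A minimal sketch, assuming `@graphprotocol/graph-cli` is installed and `my-subgraph` is a placeholder project name:

```shell
# Scaffold a Subgraph project (interactive prompts for network, contract address, and ABI)
graph init my-subgraph
cd my-subgraph

# Generate AssemblyScript types from schema.graphql and the contract ABIs
graph codegen

# Compile the mappings to WebAssembly and validate subgraph.yaml
graph build
```

Each of these commands is covered in detail in the pages linked above.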
+ +| Version | Release notes | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating-a-subgraph/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating-a-subgraph/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ko/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ko/subgraphs/developing/creating/subgraph-manifest.mdx index a42a50973690..78e4a3a55e7d 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Defining a Call Handler @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Supported Filters @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
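In the manifest, the `call` filter described above attaches to a block handler entry like this sketch (the handler name is illustrative):

```yaml
blockHandlers:
  - handler: handleBlockWithCallToContract
    filter:
      kind: call
```

With this filter in place, the handler runs only for blocks that contain at least one call to the data source contract.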
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ko/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ko/subgraphs/developing/creating/unit-testing-framework.mdx index 2133c1d4b5c9..e56e1109bc04 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project, just open up a terminal, navigate to the root folder of your project, and run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test!
👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now, in order to run our tests, you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as` ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
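The `wat` inspection described above amounts to checking whether each handler name defined in `subgraph.yaml` appears in a call instruction of the disassembled binary. As a rough, hypothetical illustration (this is not Matchstick's actual implementation), a naive version of that check could look like this:

```python
import re

def handlers_called(wat_text: str, handlers: list[str]) -> dict[str, bool]:
    """Naive coverage check: a handler counts as covered if its name
    appears in a `call` instruction of the disassembled .wat text."""
    coverage = {}
    for handler in handlers:
        # e.g. matches `call $handleNewGravatar` in the wat dump
        pattern = re.compile(r"call \$" + re.escape(handler) + r"\b")
        coverage[handler] = bool(pattern.search(wat_text))
    return coverage

# Hypothetical wat excerpt and handler list, for illustration only
wat = "(func $run (call $handleNewGravatar))"
print(handlers_called(wat, ["handleNewGravatar", "handleUpdatedGravatar"]))
# {'handleNewGravatar': True, 'handleUpdatedGravatar': False}
```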
-## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are generated from templates as well.
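The Mustache/Handlebars approach boils down to plain placeholder substitution into a manifest template. A minimal sketch, assuming a hypothetical `subgraph.template.yaml` fragment with `{{network}}` and `{{address}}` placeholders (a real setup would invoke the `mustache` CLI instead):

```python
import re

# Hypothetical manifest template; placeholder names are illustrative
template = """\
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: {{network}}
    source:
      address: '{{address}}'
"""

def render(template: str, config: dict) -> str:
    """Replace each {{key}} placeholder with the matching config value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(config[m.group(1)]), template)

# Dummy address for illustration only
print(render(template, {"network": "mainnet",
                        "address": "0x0000000000000000000000000000000000000000"}))
```

Running the same `render` with a `sepolia` config file would emit the second manifest, which is exactly what the two `prepare:` scripts in the example do.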
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected with this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. 
However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx index 634c2700ba68..77d10212c770 100644 --- a/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
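To make the `chainHeadBlock`/`latestBlock` comparison concrete, interpreting a status response could look like the following sketch. The field names follow the index-node schema referenced above, while the sample block numbers are made up:

```python
# Sample response shaped like the indexing-status query result above
status = {
    "synced": True,
    "health": "healthy",
    "fatalError": None,
    "chains": [{
        "chainHeadBlock": {"number": "19000000"},
        "latestBlock": {"number": "18999990"},
    }],
}

chain = status["chains"][0]
# Block numbers come back as strings, so convert before subtracting
lag = int(chain["chainHeadBlock"]["number"]) - int(chain["latestBlock"]["number"])
print(f"health={status['health']} synced={status['synced']} blocks behind={lag}")
# health=healthy synced=True blocks behind=10
```

A small lag is normal while the chain head advances; a steadily growing lag, or `health` flipping to `failed`, is the signal to inspect `fatalError`.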
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Must not use any of the following features: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/ko/subgraphs/developing/developer-faq.mdx b/website/src/pages/ko/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/ko/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/ko/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under which it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/ko/subgraphs/developing/introduction.mdx b/website/src/pages/ko/subgraphs/developing/introduction.mdx index 615b6cec4c9c..06bc2b76104d 100644 --- a/website/src/pages/ko/subgraphs/developing/introduction.mdx +++ b/website/src/pages/ko/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1.
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
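For instance, querying a Subgraph with GraphQL typically looks like the fragment below; the entity and field names here are hypothetical and depend entirely on the schema the Subgraph defines:

```graphql
{
  gravatars(first: 5) {
    id
    owner
    displayName
  }
}
```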
diff --git a/website/src/pages/ko/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/ko/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ko/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/ko/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/ko/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ko/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +## Adding signal to your Subgraph +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ko/subgraphs/developing/subgraphs.mdx b/website/src/pages/ko/subgraphs/developing/subgraphs.mdx index 951ec74234d1..b5a75a88e94f 100644 --- a/website/src/pages/ko/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ko/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ko/subgraphs/explorer.mdx b/website/src/pages/ko/subgraphs/explorer.mdx index f29f2a3602d9..499fcede88d3 100644 --- a/website/src/pages/ko/subgraphs/explorer.mdx +++ b/website/src/pages/ko/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Signal/Un-signal on subgraphs +- Signal/Un-signal on Subgraphs - View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraphs Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics:
 
@@ -223,13 +223,13 @@ Keep in mind that this chart is horizontally scrollable, so if you scroll all th
 
 ### Curating Tab
 
-In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, indicating that they should be indexed.
 
 Within this tab, you’ll find an overview of:
 
-- All the subgraphs you're curating on with signal details
-- Share totals per subgraph
-- Query rewards per subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
 - Updated at date details
 
 ![Explorer Image 14](/img/Curation-Stats.png)
 
diff --git a/website/src/pages/ko/subgraphs/guides/_meta.js b/website/src/pages/ko/subgraphs/guides/_meta.js
index 37e18bc51651..a1bb04fb6d3f 100644
--- a/website/src/pages/ko/subgraphs/guides/_meta.js
+++ b/website/src/pages/ko/subgraphs/guides/_meta.js
@@ -1,4 +1,5 @@
 export default {
+  'subgraph-composition': '',
   'subgraph-debug-forking': '',
   near: '',
   arweave: '',
diff --git a/website/src/pages/ko/subgraphs/guides/arweave.mdx b/website/src/pages/ko/subgraphs/guides/arweave.mdx
index 08e6c4257268..e59abffa383f 100644
--- a/website/src/pages/ko/subgraphs/guides/arweave.mdx
+++ b/website/src/pages/ko/subgraphs/guides/arweave.mdx
@@ -92,9 +92,9 @@ Arweave data sources support two types of handlers:
 
 - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner.
Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` > The source.owner can be the owner's address, or their Public Key. - +> > Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. ## Schema Definition diff --git a/website/src/pages/ko/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ko/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..ab5076c5ebf4 100644 --- a/website/src/pages/ko/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/ko/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Overview -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,47 +18,59 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. 
Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress @@ -66,11 +82,11 @@ or cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. 
Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..09f1939c1fde --- /dev/null +++ b/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. 
Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Improve your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
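As a concrete sketch of what a source Subgraph exposes, a minimal schema for its trigger entity might look like this (the `Block` entity and its fields are illustrative assumptions, not taken from the example repository; note the entity is declared immutable, which composition requires):

```graphql
# Hypothetical source-Subgraph schema (illustrative only).
# The entity is immutable so a dependent Subgraph can use it as a trigger.
type Block @entity(immutable: true) {
  id: Bytes!
  number: BigInt!
  timestamp: BigInt!
}
```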
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Get Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
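Once the composed Subgraph is deployed, the unified view can be queried like any other Subgraph. Here is a sketch of such a query, assuming a `blocks` collection whose fields follow the entities described in Steps 1-3 (actual entity and field names may differ in the example repository):

```graphql
# Hypothetical query against the composed block-stats Subgraph.
{
  blocks(first: 5, orderBy: number, orderDirection: desc) {
    id
    number
    timestamp # from the block-time source Subgraph
    cost      # from the block-cost source Subgraph
    size      # from the block-size source Subgraph
  }
}
```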
diff --git a/website/src/pages/ko/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ko/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..9a4b037cafbc 100644 --- a/website/src/pages/ko/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/ko/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment diff --git a/website/src/pages/ko/subgraphs/querying/best-practices.mdx b/website/src/pages/ko/subgraphs/querying/best-practices.mdx index ff5f381e2993..ab02b27cbc03 100644 --- a/website/src/pages/ko/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/ko/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. 
--- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ko/subgraphs/querying/from-an-application.mdx b/website/src/pages/ko/subgraphs/querying/from-an-application.mdx index 681f6e6ba8d5..44677d78dcdf 100644 --- a/website/src/pages/ko/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ko/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Querying from an Application +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d
 
 ### Subgraph Studio Endpoint
 
-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
 
 ```
 https://api.studio.thegraph.com/query///
 ```
 
@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///
 
 ### The Graph Network Endpoint
 
-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:
 
 ```
 https://gateway.thegraph.com/api//subgraphs/id/
 ```
 
-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Step 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Step 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Step 1 diff --git a/website/src/pages/ko/subgraphs/querying/graph-client/README.md b/website/src/pages/ko/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/ko/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ko/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/ko/subgraphs/querying/graphql-api.mdx b/website/src/pages/ko/subgraphs/querying/graphql-api.mdx index b3003ece651a..e10201771989 100644 --- a/website/src/pages/ko/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ko/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
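To make the generated fields concrete, consider a hypothetical `Token` entity (the entity name and fields are illustrative): the schema would yield a singular `token` field and a plural `tokens` field on the top-level `Query` type:

```graphql
{
  # singular field: fetch a single record by ID
  token(id: "1") {
    id
  }
  # plural field: fetch a filtered, ordered list of records
  tokens(first: 10, orderBy: id) {
    id
  }
}
```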
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. 
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. 
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ko/subgraphs/querying/introduction.mdx b/website/src/pages/ko/subgraphs/querying/introduction.mdx index 36ea85c37877..2c9c553293fa 100644 --- a/website/src/pages/ko/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ko/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx index 6964b1a7ad9b..aed3d10422e1 100644 --- a/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ko/subgraphs/querying/python.mdx b/website/src/pages/ko/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/ko/subgraphs/querying/python.mdx +++ b/website/src/pages/ko/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this results in the need to update the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/ko/subgraphs/quick-start.mdx b/website/src/pages/ko/subgraphs/quick-start.mdx index dc280ec699d3..a803ac8695fa 100644 --- a/website/src/pages/ko/subgraphs/quick-start.mdx +++ b/website/src/pages/ko/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Quick Start --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-See the following screenshot for an example for what to expect when initializing your subgraph: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network.
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/ko/substreams/developing/dev-container.mdx b/website/src/pages/ko/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/ko/substreams/developing/dev-container.mdx +++ b/website/src/pages/ko/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/ko/substreams/developing/sinks.mdx b/website/src/pages/ko/substreams/developing/sinks.mdx index 5f6f9de21326..48c246201e8f 100644 --- a/website/src/pages/ko/substreams/developing/sinks.mdx +++ b/website/src/pages/ko/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/ko/substreams/developing/solana/account-changes.mdx b/website/src/pages/ko/substreams/developing/solana/account-changes.mdx index a282278c7d91..8c821acaee3f 100644 --- a/website/src/pages/ko/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/ko/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/ko/substreams/developing/solana/transactions.mdx b/website/src/pages/ko/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/ko/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ko/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. 
### SQL diff --git a/website/src/pages/ko/substreams/introduction.mdx b/website/src/pages/ko/substreams/introduction.mdx index e11174ee07c8..0bd1ea21c9f6 100644 --- a/website/src/pages/ko/substreams/introduction.mdx +++ b/website/src/pages/ko/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/ko/substreams/publishing.mdx b/website/src/pages/ko/substreams/publishing.mdx index 3d1a3863c882..3d93e6f9376f 100644 --- a/website/src/pages/ko/substreams/publishing.mdx +++ b/website/src/pages/ko/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! 
You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/ko/supported-networks.mdx b/website/src/pages/ko/supported-networks.mdx index 7ae7ff45350a..ef2c28393033 100644 --- a/website/src/pages/ko/supported-networks.mdx +++ b/website/src/pages/ko/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. 
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/ko/token-api/_meta-titles.json b/website/src/pages/ko/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/ko/token-api/_meta-titles.json +++ b/website/src/pages/ko/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/ko/token-api/_meta.js b/website/src/pages/ko/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/ko/token-api/_meta.js +++ b/website/src/pages/ko/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/ko/token-api/faq.mdx b/website/src/pages/ko/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/ko/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? 
+ +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer <JWT>` header with the correct, non-expired token.
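In practice, a quick way to rule out header problems is to build the request headers explicitly before sending anything. A minimal sketch for Node 18+ (which provides the global `Headers` class); the token value is a placeholder, not a real JWT:

```javascript
// Build the headers for a Token API request (Node 18+ provides Headers).
// The token below is a placeholder, not a real JWT.
const ACCESS_TOKEN = 'eyJhbGciOi...'; // paste the access token from The Graph Market
const headers = new Headers({
  Accept: 'application/json',
  // A common mistake is sending the raw API key or dropping the "Bearer " prefix.
  Authorization: `Bearer ${ACCESS_TOKEN}`,
});
console.log(headers.get('Authorization')); // "Bearer eyJhbGciOi..."
```

The same object can then be passed straight to `fetch(url, { headers })`.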
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error.
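The `network_id`, `limit`, and `page` parameters above can be combined into a single request URL with the standard `URL` API. A sketch using the `/balances/evm/{address}` endpoint mentioned in this FAQ; the address is a placeholder:

```javascript
// Compose a Token API query URL: Polygon, 50 results per page, page 2.
// The path and parameter names come from the FAQ; the address is a placeholder.
const address = '0x' + 'ab'.repeat(20); // 40 hex digits, with the optional 0x prefix
const url = new URL(`https://token-api.thegraph.com/balances/evm/${address}`);
url.searchParams.set('network_id', 'matic'); // omit to default to Ethereum mainnet
url.searchParams.set('limit', '50');         // default is 10, maximum 500
url.searchParams.set('page', '2');           // 1-indexed: page 2 returns items 51-100
console.log(url.toString());
```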
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer <JWT>`. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config).
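The response conventions described above (a top-level `data` array and string-encoded amounts alongside a numeric `decimals`) can be handled as sketched below. The payload is mocked, and the `amount` and `symbol` field names are illustrative assumptions, not guaranteed by the API:

```javascript
// Mocked balances payload following the FAQ's conventions: results are wrapped
// in a top-level `data` array, raw amounts are strings, `decimals` is a number.
// The `amount` and `symbol` field names here are illustrative.
const response = {
  data: [{ symbol: 'GRT', amount: '123450000000000000000', decimals: 18 }],
};
const results = response.data; // always index into `data`, even for a single item
const { amount, decimals } = results[0];
const raw = BigInt(amount); // string -> BigInt; this value exceeds Number.MAX_SAFE_INTEGER
const whole = raw / 10n ** BigInt(decimals); // integer part of the human-readable balance
console.log(whole.toString()); // "123"
```

Converting through `Number()` instead of `BigInt` would silently lose precision for values this large, which is exactly why the API returns them as strings.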
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/ko/token-api/mcp/claude.mdx b/website/src/pages/ko/token-api/mcp/claude.mdx index 0da8f2be031d..12a036b6fc24 100644 --- a/website/src/pages/ko/token-api/mcp/claude.mdx +++ b/website/src/pages/ko/token-api/mcp/claude.mdx @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/ko/token-api/mcp/cline.mdx b/website/src/pages/ko/token-api/mcp/cline.mdx index ab54c0c8f6f0..ef98e45939fe 100644 --- a/website/src/pages/ko/token-api/mcp/cline.mdx +++ b/website/src/pages/ko/token-api/mcp/cline.mdx @@ -10,7 +10,7 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) ## Configuration diff --git a/website/src/pages/mr/about.mdx b/website/src/pages/mr/about.mdx index 6ec630cd8e4e..9597ecb03bb2 100644 --- a/website/src/pages/mr/about.mdx +++ b/website/src/pages/mr/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. 
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![ग्राफिक डेटा ग्राहकांना प्रश्न देण्यासाठी ग्राफ नोड कसा वापरतो हे स्पष्ट करणारे ग्राफिक](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte 1.
A dapp स्मार्ट करारावरील व्यवहाराद्वारे इथरियममध्ये डेटा जोडते. 2. व्यवहारावर प्रक्रिया करताना स्मार्ट करार एक किंवा अधिक इव्हेंट सोडतो. -3. ग्राफ नोड सतत नवीन ब्लॉक्ससाठी इथरियम स्कॅन करतो आणि तुमच्या सबग्राफचा डेटा त्यात असू शकतो. -4. ग्राफ नोड या ब्लॉक्समध्ये तुमच्या सबग्राफसाठी इथरियम इव्हेंट शोधतो आणि तुम्ही प्रदान केलेले मॅपिंग हँडलर चालवतो. मॅपिंग हे WASM मॉड्यूल आहे जे इथरियम इव्हेंट्सच्या प्रतिसादात ग्राफ नोड संचयित केलेल्या डेटा घटक तयार करते किंवा अद्यतनित करते. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. नोडचा [GraphQL एंडपॉइंट](https://graphql.org/learn/) वापरून ब्लॉकचेन वरून अनुक्रमित केलेल्या डेटासाठी dapp ग्राफ नोडची क्वेरी करते. ग्राफ नोड यामधून, स्टोअरच्या इंडेक्सिंग क्षमतांचा वापर करून, हा डेटा मिळविण्यासाठी त्याच्या अंतर्निहित डेटा स्टोअरच्या क्वेरींमध्ये GraphQL क्वेरीचे भाषांतर करतो. dapp हा डेटा अंतिम वापरकर्त्यांसाठी समृद्ध UI मध्ये प्रदर्शित करते, जो ते Ethereum वर नवीन व्यवहार जारी करण्यासाठी वापरतात. चक्राची पुनरावृत्ती होते. ## पुढील पायऱ्या -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. 
diff --git a/website/src/pages/mr/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/mr/archived/arbitrum/arbitrum-faq.mdx index 562824e64e95..d121f5a2d0f3 100644 --- a/website/src/pages/mr/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/mr/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -39,7 +39,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? 
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-faq.mdx index b6ee08a5bbed..696f3c69a4fc 100644 --- a/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con L2 ट्रांस्फर टूल्स आपल्याला L1 वरून L2ला संदेश पाठविण्याच्या अर्बिट्रमच्या स्वभाविक विधानाचा वापर करतात. हा विधान "पुनः प्रयासयोग्य पर्याय" म्हणून ओळखला जातो आणि हा सर्व स्थानिक टोकन ब्रिजेस, अर्बिट्रम GRT ब्रिज यासह सहाय्यक आहे. आपण पुनः प्रयासयोग्य पर्यायांबद्दल अधिक माहिती [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging) वाचू शकता. 
-आपल्याला आपल्या संपत्तींच्या (सबग्राफ, स्टेक, प्रतिनिधित्व किंवा पुरवणी) L2ला स्थानांतरित केल्यास, एक संदेश अर्बिट्रम GRT ब्रिजमध्ये पाठविला जातो ज्याने L2वर पुनः प्रयासयोग्य पर्याय तयार करतो. स्थानांतरण उपकरणात्रूटील वैल्यूत्या किंवा संचलनसाठी काही ईटीएच वॅल्यू आहे, ज्यामुळे 1) पर्याय तयार करण्यासाठी पैसे देणे आणि 2) L2मध्ये पर्याय संचालित करण्यासाठी गॅस देणे ह्याचा वापर केला जातो. परंतु, पर्याय संचालनाच्या काळात गॅसची किंमते वेळेत बदलू शकतात, ज्यामुळे ही स्वयंप्रयत्न किंवा संचालन प्रयत्न अपयशी होऊ शकतात. जेव्हा ती प्रक्रिया अपयशी होते, तेव्हा अर्बिट्रम ब्रिज किंवा 7 दिवसापर्यंत पुन्हा प्रयत्न करण्याची क्षमता आहे, आणि कोणत्याही व्यक्ती त्या "पुनर्मिलन" पर्यायाचा प्रयत्न करू शकतो (त्यासाठी अर्बिट्रमवर काही ईटीएच स्थानांतरित केलेले असणे आवश्यक आहे). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -ही आपल्याला सगळ्या स्थानांतरण उपकरणांमध्ये "पुष्टीकरण" चरण म्हणून ओळखता - आपल्याला अधिकांशपेक्षा अधिक आपल्याला स्वयंप्रयत्न सध्याच्या वेळेत स्वयंप्रयत्न सध्याच्या वेळेत स्वतः संचालित होईल, परंतु आपल्याला येते कि ते दिले आहे ह्याची तपासणी करणे महत्वपूर्ण आहे. 
आपल्याला किंवा 7 दिवसात कोणत्याही सफल पुनर्मिलनाचे प्रयत्न केले त्यामुळे प्रयत्नशील नसत्या आणि त्या 7 दिवसांत कोणताही प्रयत्न नसत्याने, अर्बिट्रम ब्रिजने पुनर्मिलन पर्यायाचा त्याग केला आहे, आणि आपली संपत्ती (सबग्राफ, स्टेक, प्रतिनिधित्व किंवा पुरवणी) वेळेत विचली जाईल आणि पुनर्प्राप्त केली जाऊ शकणार नाही. ग्राफचे मुख्य डेव्हलपर्सन्सने या परिस्थितियांच्या जाणीवपणे प्राणीसमूह ठरविले आहे आणि त्याच्या अगोदर पुनर्मिलन केले जाईल, परंतु याच्यातून, आपल्याला आपल्या स्थानांतरणाची पूर्ण करण्याची जबाबदारी आहे. आपल्याला आपल्या व्यवहाराची पुष्टी करण्यात किंवा संचालनाची समस्या आहे का, कृपया [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) वापरून संपूर्ण डेव्हलपर्सन्सची मदत करण्याची क्षमता आहे. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## सबग्राफ हस्तांतरण -### मी माझा सबग्राफ कसा हस्तांतरित करू?
+### How do I transfer my Subgraph? -तुमचा सबग्राफ हस्तांतरित करण्यासाठी, तुम्हाला खालील चरण पूर्ण करावे लागतील: +To transfer your Subgraph, you will need to complete the following steps: 1. Ethereum mainnet वर हस्तांतरण सुरू करा 2. पुष्टीकरणासाठी 20 मिनिटे प्रतीक्षा करा -3. आर्बिट्रमवर सबग्राफ हस्तांतरणाची पुष्टी करा\* +3. Confirm Subgraph transfer on Arbitrum\* -4. आर्बिट्रम वर सबग्राफ प्रकाशित करणे समाप्त करा +4. Finish publishing Subgraph on Arbitrum 5. क्वेरी URL अपडेट करा (शिफारस केलेले) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### मी माझे हस्तांतरण कोठून सुरू करावे? -आपल्याला स्थानांतरण सुरू करण्याची क्षमता आहे Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) किंवा कोणत्याही सबग्राफ तपशील पृष्ठापासून सुरू करू शकता. सबग्राफ तपशील पृष्ठावर "सबग्राफ स्थानांतरित करा" बटणवर क्लिक करा आणि स्थानांतरण सुरू करा. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. 
-### माझा सबग्राफ हस्तांतरित होईपर्यंत मला किती वेळ प्रतीक्षा करावी लागेल +### How long do I need to wait until my Subgraph is transferred स्थानांतरणासाठी किंमतीतून प्रायः 20 मिनिटे लागतात. आर्बिट्रम ब्रिज आपल्याला स्वत: स्थानांतरण स्वयंप्रयत्नातून पूर्ण करण्यासाठी पारंपारिकपणे काम करत आहे. कितीतरी प्रकारांत स्थानांतरण केल्यास, गॅस किंमती वाढू शकतात आणि आपल्याला परिपुष्टीकरण पुन्हा करण्याची आवश्यकता लागू शकते. -### मी L2 मध्ये हस्तांतरित केल्यानंतर माझा सबग्राफ अजूनही शोधण्यायोग्य असेल का? +### Will my Subgraph still be discoverable after I transfer it to L2? -आपला सबग्राफ केवळ त्या नेटवर्कवर शोधन्यायला येतो, ज्यावर तो प्रकाशित केला जातो. उदाहरणार्थ, आपला सबग्राफ आर्बिट्रम वनवर आहे तर आपल्याला तो केवळ आर्बिट्रम वनवरच्या एक्सप्लोररमध्ये शोधू शकता आणि आपल्याला इथे एथेरियमवर शोधायला सक्षम नसेल. कृपया पृष्ठाच्या वरील नेटवर्क स्विचरमध्ये आर्बिट्रम वन निवडल्याची आपल्याला कसे सुनिश्चित करण्याची आवश्यकता आहे. स्थानांतरणानंतर, L1 सबग्राफ विकलप म्हणून दिसणारा. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### माझा सबग्राफ हस्तांतरित करण्यासाठी प्रकाशित करणे आवश्यक आहे का? +### Does my Subgraph need to be published to transfer it? -सबग्राफ स्थानांतरण उपकरणाचा लाभ घेण्यासाठी, आपल्याला आपल्या सबग्राफला आधीच प्रकाशित केलेला पाहिजे आणि त्याच्या सबग्राफच्या मालक वॉलेटमध्ये काही परिपुष्टी संकेत असणे आवश्यक आहे. आपला सबग्राफ प्रकाशित नसल्यास, आपल्याला साधारणपणे आर्बिट्रम वनवर सीधे प्रकाशित करण्यात योग्य आहे - संबंधित गॅस फीस खूपच किमान असतील. आपल्याला प्रकाशित सबग्राफ स्थानांतरित करू इच्छित असल्यास, परंतु मालक खाते त्यावर कोणतीही प्रतिसाद संकेत दिली नाही, तर आपण त्या खाते पासून थोडीसी परिपुष्टी (उदा. 
1 GRT) संकेतिक करू शकता; कृपया "स्वत: स्थानांतरित होणारी" संकेत निवडायला नक्की करा. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### माझ्या सबग्राफच्या इथेरियम मुख्य नेटवर्कचा संस्करण हस्तांतरित करताना Arbitrum वर काय होतं? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -आर्बिट्रमकडे आपल्या सबग्राफ स्थानांतरित करण्यानंतर, एथेरियम मुख्यनेट आवृत्ती विकलप म्हणून दिली जाईल. आपल्याला आपल्या क्वेरी URL वरील बदल करण्याची सल्ला आहे की त्याच्या 48 तासांत दिला जाईल. हेरंब विलंबप्रदान केलेले आहे ज्यामुळे आपली मुख्यनेट URL सक्रिय ठेवली जाईल आणि कोणत्याही तृतीय पक्षाच्या dapp समर्थनाच्या आधी अद्यतनित केल्या जाऊ शकतात. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### मी हस्तांतरित केल्यानंतर, मला आर्बिट्रमवर पुन्हा प्रकाशित करण्याची देखील आवश्यकता आहे का? @@ -80,21 +80,21 @@ If you have the L1 transaction hash (which you can find by looking at the recent ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. 
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### एल2 व Ethereum मुख्य नेटवर्कवर प्रकाशन आणि संस्करणदेखील सारखं आहे का? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### पुन्हा प्रकाशित करताना माझ्या एंडपॉईंटला डाउन-टाइम असेल का? +### Will my Subgraph's curation move with my Subgraph? -आपण "स्वत: स्थानांतरित होणारी" संकेत निवडल्यास, आपल्या आपल्या स्वत: स्थानांतरित करणार्या सबग्राफसह 100% आपल्या पुरवणीने निवडलेल्या स्थानांतरण होईल. सबग्राफच्या सर्व स्थानांतरण संकेताच्या स्थानांतरणाच्या क्षणी जीआरटीत रूपांतरित केली जाईल, आणि आपल्या पुरवणीसंकेताशी संबंधित जीआरटी आपल्याला L2 सबग्राफवर संकेत वितरित करण्यासाठी वापरली जाईल. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -इतर क्युरेटर्सनी त्याच्या भागाची GRT वापरून घेण्याची किंवा त्याच्या सबग्राफवर सिग्नल मिंट करण्यासाठी त्याची GRT L2वर हस्तांतरित करण्याची परवानगी आहे. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### तुम्ही आपले सबग्राफ L2 वर हस्तांतरित केल्यानंतर पुन्हा Ethereum मुख्य नेटवर्कवर परत करू शकता का? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? 
-स्थानांतरित केल्यानंतर, आपल्या आर्बिट्रम वनवरच्या सबग्राफची एथेरियम मुख्यनेट आवृत्ती विकलप म्हणून दिली जाईल. आपल्याला मुख्यनेटवर परत जाण्याची इच्छा आहे किंवा, आपल्याला मुख्यनेटवर परत जाण्याची इच्छा आहे तर आपल्याला पुन्हा डिप्लॉय आणि प्रकाशित करण्याची आवश्यकता आहे. परंतु आर्बिट्रम वनवर परत गेल्याच्या बदलाच्या दिल्लाला मुख्यनेटवरील सूचना पूर्णपणे त्यात दिलेली आहे. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### माझे हस्तांतरण पूर्ण करण्यासाठी मला ब्रिज्ड ETH का आवश्यक आहे? @@ -206,19 +206,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans \*आवश्यक असल्यास - उदा. तुम्ही एक कॉन्ट्रॅक्ट पत्ता वापरत आहात. -### मी क्युरेट केलेला सबग्राफ L2 वर गेला असल्यास मला कसे कळेल? +### How will I know if the Subgraph I curated has moved to L2? -सबग्राफ तपशील पृष्ठाची पाहणी केल्यास, एक बॅनर आपल्याला सूचित करेल की हा सबग्राफ स्थानांतरित केलेला आहे. आपल्याला सुचवल्यास, आपल्या पुरवणीचे स्थानांतरण करण्यासाठी प्रॉम्प्ट अनुसरण करू शकता. आपल्याला ह्या माहितीला सापडण्याची किंवा स्थानांतरित केलेल्या कोणत्याही सबग्राफच्या तपशील पृष्ठावर मिळवू शकता. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### मी माझे क्युरेशन L2 वर हलवू इच्छित नसल्यास काय करावे? -कोणत्याही सबग्राफला प्राकृतिक रितीने प्रतिसादित केल्यानंतर, आपल्याला आपल्या सिग्नलला वापरून घेण्याची पर्वाह आहे. तसेच, आपल्याला जर सबग्राफ L2 वर हस्तांतरित केलेला असेल तर, आपल्याला आपल्या सिग्नलला ईथेरियम मेननेटवरून वापरून घेण्याची किंवा L2 वर सिग्नल पाठवण्याची पर्वाह आहे. 
+When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### माझे क्युरेशन यशस्वीरित्या हस्तांतरित झाले हे मला कसे कळेल? L2 हस्तांतरण साधन सुरू केल्यानंतर, संकेत तपशील २० मिनिटांनंतर Explorer मध्ये पहिल्या दिशेने प्रवेशक्षम होईल. -### किंवा तुम्ही एकापेक्षा अधिक सबग्राफवर एकावेळी आपल्या कुरेशनची हस्तांतरण करू शकता का? +### Can I transfer my curation on more than one Subgraph at a time? यावेळी मोठ्या प्रमाणात हस्तांतरण पर्याय नाही. @@ -266,7 +266,7 @@ L2 स्थानांतरण उपकरणाने आपल्याच ### माझ्या शेअर्स हस्तांतरित करण्यापूर्वी मला Arbitrum वर सूचीबद्ध करण्याची आवश्यकता आहे का? -आपल्याला स्वारूपण ठरविण्यापूर्वीच आपले स्टेक प्रभावीपणे स्थानांतरित करू शकता, परंतु L2 वर कोणत्या उत्पादनाची मागणी करण्याची अनुमती नसेल तोंद, ते लागू करण्यास आपल्याला L2 वरील सबग्राफ्सला आवंटन देण्याची, त्यांची सूचीबद्धीकरण करण्याची आणि POIs प्रस्तुत करण्याची आवश्यकता आहे, ते तुम्ही L2 वर कोणत्याही प्रामोड पावण्याच्या पर्यायी नसेल. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### मी माझा इंडेक्सिंग स्टेक हलवण्यापूर्वी प्रतिनिधी त्यांचे प्रतिनिधी हलवू शकतात का? diff --git a/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-guide.mdx index cb0215fe9cd0..32e1b7fc75f3 100644 --- a/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ title: L2 Transfer Tools Guide Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
-## तुमचा सबग्राफ आर्बिट्रम (L2) वर कसा हस्तांतरित करायचा +## How to transfer your Subgraph to Arbitrum (L2) -## तुमचे सबग्राफ हस्तांतरित करण्याचे फायदे +## Benefits of transferring your Subgraphs मागील वर्षापासून, The Graph चे समुदाय आणि मुख्य डेव्हलपर [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)करीत होते त्याच्या गोष्टीसाठी आर्बिट्रमवर जाण्याची. आर्बिट्रम, एक श्रेणी 2 किंवा "L2" ब्लॉकचेन, ईथेरियमकिडून सुरक्षा अनुभवतो परंतु काही लोअर गॅस फी प्रदान करतो. -जेव्हा तुम्ही आपल्या सबग्राफला The Graph Network वर प्रकाशित किंवा अपग्रेड करता तेव्हा, तुम्ही प्रोटोकॉलवरच्या स्मार्ट कॉन्ट्रॅक्ट्ससोबत संवाद साधता आहात आणि हे ईथ वापरून गॅससाठी पैसे देता येतात. आर्बिट्रमवर तुमच्या सबग्राफला हल्लीक अपडेट्सची आवश्यकता असल्यामुळे आपल्याला खूप कमी गॅस फी परतण्यात आलेली आहे. या कमी फीस, आणि लोअर करण्याची बंद पट आर्बिट्रमवर असल्याचे, तुमच्या सबग्राफवर इतर क्युरेटरसाठी सुविधा असताना तुमच्या सबग्राफवर कुणासही क्युरेशन करणे सोपे होते, आणि तुमच्या सबग्राफवर इंडेक्सरसाठी पुरस्कारांची वाढ होतील. या किमतीसवर्गीय वातावरणात इंडेक्सरसाठी सबग्राफला सूचीबद्ध करणे आणि सेव करणे सोपे होते. आर्बिट्रमवर इंडेक्सिंग पुरस्कारे आणि ईथेरियम मेननेटवर किमतीची वाढ होणारी आहेत, आणि यामुळे अगदी अधिक इंडेक्सरस त्याची स्थानिकता हस्तांतरित करत आहेत आणि त्यांचे ऑपरेशन्स L2 वर स्थापित करत आहेत.". +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. 
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## सिग्नल, तुमचा L1 सबग्राफ आणि क्वेरी URL सह काय होते हे समजून घेणे +## Understanding what happens with signal, your L1 Subgraph and query URLs -सबग्राफला आर्बिट्रमवर हस्तांतरित करण्यासाठी, आर्बिट्रम GRT सेतूक वापरला जातो, ज्याच्या परत आर्बिट्रमच्या मूळ सेतूकाचा वापर केला जातो, सबग्राफला L2 वर पाठवण्यासाठी. "हस्तांतरण" मुख्यनेटवर सबग्राफची वैल्यू कमी करणारा आहे आणि सेतूकाच्या ब्रिजच्या माध्यमातून लॉकल 2 वर सबग्राफ पुन्हा तयार करण्याची माहिती पाठवण्यात आली आहे. त्यामुळे हा "हस्तांतरण" मुख्यनेटवरील सबग्राफला अस्तित्वातून टाकेल आणि त्याची माहिती ब्रिजवार L2 वर पुन्हा तयार करण्यात आली आहे. हस्तांतरणात सबग्राफ मालकाची संकेतित GRT समाविष्ट केली आहे, ज्याची उपसंकेतित GRT मूळ सेतूकाच्या ब्रिजकडून हस्तांतरित करण्यासाठी जास्तीत जास्त शून्यापेक्षा असणे आवश्यक आहे. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -जेव्हा तुम्ही सबग्राफला हस्तांतरित करण्याची निवड करता, हे सबग्राफचे सर्व क्युरेशन सिग्नल GRT मध्ये रूपांतरित होईल. ह्याचे मुख्यनेटवर "अप्रामाणिक" घेण्याच्या अर्थाने आहे. तुमच्या क्युरेशनसह संबंधित GRT सबग्राफसह पाठवली जाईल, त्यामुळे त्यांचा L2 वर पाठवला जाईल, त्यातून त्यांचा नमूद कुंडला तयार केला जाईल. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. 
-इतर क्युरेटरस स्वत: त्यांच्या भागाचा GRT परत घेण्याची किंवा त्याच्या एकल सबग्राफवर त्यांच्या सिग्नल तयार करण्यासाठी हस्तांतरित करण्याची पर्वानगी देऊ शकतात. जर सबग्राफ मालक त्याच्या सबग्राफला L2 वर हस्तांतरित करत नसता आणि त्याच्या कॉन्ट्रॅक्ट कॉलद्वारे मौना करतो, तर क्युरेटरसला सूचना दिली जाईल आणि त्यांना आपल्याच्या क्युरेशनची परवानगी वापरून परत घेतली जाईल. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -सबग्राफ हस्तांतरित केल्यानंतर, क्युरेशन सर्व GRT मध्ये रूपांतरित केल्यामुळे इंडेक्सरसला सबग्राफच्या इंडेक्सिंगसाठी पुरस्कार मिळवत नाही. परंतु, 24 तासांसाठी हस्तांतरित केलेल्या सबग्राफवर सेवा देणारे इंडेक्सर असतील आणि 2) L2 वर सबग्राफची इंडेक्सिंग प्रारंभ करतील. ह्या इंडेक्सरसांच्या पासून आधीपासूनच सबग्राफची इंडेक्सिंग आहे, म्हणून सबग्राफ सिंक होण्याची वाटचाल नसल्याची आवश्यकता नसून, आणि L2 सबग्राफची क्वेरी करण्यासाठी त्याच्यासाठी वाटचाल नसेल. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -L2 सबग्राफला क्वेरीसाठी वेगवेगळे URL वापरण्याची आवश्यकता आहे ('arbitrum-gateway.thegraph.com' वरील), परंतु L1 URL किमान 48 तासांसाठी काम करणार आहे. त्यानंतर, L1 गेटवे वेगवेगळ्या क्वेरीला L2 गेटवेला पुर्वानुमान देईल (काही कालावधीसाठी), परंतु त्यामुळे द्रुतिकरण वाढतो, म्हणजे तुमच्या क्वेरीस सर्व किंवा नवीन URL वर स्विच करणे शक्य आहे. 
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## तुमचे L2 वॉलेट निवडत आहे -तुम्ही तुमच्या सबग्राफची मेननेटवर प्रकाशित केल्यास, तुम्ही सबग्राफ तयार करण्यासाठी एक संयुक्त केलेल्या वॉलेटचा वापर केला होता, आणि हा वॉलेट हा सबग्राफ प्रतिनिधित्व करणारा NFT मिळवतो, आणि तुम्हाला अपडेट प्रकाशित करण्याची परवानगी देतो. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -सबग्राफ आर्बिट्रममध्ये हस्तांतरित करताना, तुम्ही वेगळे वॉलेट निवडू शकता जे L2 वर या सबग्राफ NFT चे मालक असेल. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. आपल्याला "सामान्य" वॉलेट वापरत आहे किंवा MetaMask (एक बाह्यिकपणे मालकीत खाता किंवा EOA, अर्थात स्मार्ट कॉन्ट्रॅक्ट नसलेला वॉलेट), तर ह्या निवडनीय आहे आणि L1 मध्ये असलेल्या समान मालकीचे पत्ते ठेवणे शिफारसले जाते. -If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. 
If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**तुम्हाला एक वॉलेट पत्ता वापरण्याची महत्त्वाची आहे ज्याच्या तुम्ही नियंत्रण असता आणि त्याने Arbitrum वर व्यवहार करू शकतो. अन्यथा, सबग्राफ गमावला जाईल आणि त्याची पुनर्प्राप्ती केली जाऊ शकणार नाही.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## हस्तांतरणाची तयारी: काही ETH ब्रिजिंग -सबग्राफला हस्तांतरित करण्यासाठी एक ट्रॅन्झॅक्शन सेंड करण्यात आल्यामुळे ब्रिजद्वारे एक ट्रॅन्झॅक्शन आणि नंतर आर्बिट्रमवर दुसर्या ट्रॅन्झॅक्शन चालवावा लागतो. पहिल्या ट्रॅन्झॅक्शनमध्ये मुख्यनेटवर ETH वापरले जाते, आणि L2 वर संदेश प्राप्त होण्यात आल्यावर गॅस देण्यासाठी काही ETH समाविष्ट केले जाते. हेच गॅस कमी असल्यास, तर तुम्ही ट्रॅन्झॅक्शन पुन्हा प्रयत्न करून लॅटन्सीसाठी त्याच्यावर थेट पैसे द्यायला हवे, त्याच्यामुळे हे "चरण 3: हस्तांतरणाची पुष्टी करणे" असते (खालीलपैकी). ह्या कदाचित्का **तुम्ही हस्तांतरण सुरू केल्याच्या 7 दिवसांच्या आत** हे प्रक्रिया पुर्ण करणे आवश्यक आहे. इतरत्र, दुसऱ्या ट्रॅन्झॅक्शन ("चरण 4: L2 वर हस्तांतरण समाप्त करणे") ही आपल्याला खासगी आर्बिट्रमवर आणण्यात आली आहे. ह्या कारणांसाठी, तुम्हाला किमानपर्यंत काही ETH आवश्यक आहे, एक मल्टीसिग किंवा स्मार्ट कॉन्ट्रॅक्ट खात्याच्या आवश्यक आहे, ETH रोजच्या (EOA) वॉलेटमध्ये असणे आवश्यक आहे, ज्याचा तुम्ही ट्रॅन्झॅक्शन चालवण्यासाठी वापरता, मल्टीसिग वॉलेट स्वत: नसतो. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. 
Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. तुम्ही किमानतरी एक्सचेंजेसवर ETH खरेदी करू शकता आणि त्याच्यामध्ये सीधे Arbitrum वर विद्यमान ठेवू शकता, किंवा तुम्ही Arbitrum ब्रिजवापरून ETH मुख्यनेटवरील एक वॉलेटपासून L2 वर पाठवू शकता: bridge.arbitrum.io. आर्बिट्रमवर गॅस फीस खूप कमी आहेत, म्हणजे तुम्हाला फक्त थोडेसे फक्त आवश्यक आहे. तुमच्या ट्रॅन्झॅक्शनसाठी मंजूरी मिळविण्यासाठी तुम्हाला किमान अंतरावर (उदा. 0.01 ETH) सुरुवात करणे शिफारसले जाते. -## सबग्राफ ट्रान्सफर टूल शोधत आहे +## Finding the Subgraph Transfer Tool -तुम्ही सबग्राफ स्टुडिओवर तुमच्या सबग्राफचे पेज पाहता तेव्हा तुम्हाला L2 ट्रान्सफर टूल सापडेल: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -हे तयार आहे Explorer वर, आपल्याला जर तुमच्याकडून एक सबग्राफच्या मालकीची वॉलेट असेल आणि Explorer सह कनेक्ट केले तर, आणि त्या सबग्राफच्या पृष्ठावर Explorer वरून मिळवू शकता: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho ## पायरी 1: हस्तांतरण सुरू करत आहे -हस्तांतरण सुरू करण्यापूर्वी, तुम्ही L2 वर सबग्राफच्या मालकपत्रक्षयक्षमतेचे निर्णय करावे लागेल (वरील "तुमच्या L2 वॉलेटची निवड" पहा), आणि आपल्याला आर्बिट्रमवर पुर्न ठेवण्यासाठी आधीपासून काही ETH असणे अत्यंत शिफारसले जाते (वरील "हस्तांतरण साठी प्राप्ती करणे: काही ETH हस्तांतरित करणे" पहा). 
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).

-कृपया लक्षात घ्या की सबग्राफ हस्तांतरित करण्यासाठी सबग्राफवर आपल्याला त्याच्या मालकपत्रक्षयक्षमतेसह अगदीच सिग्नल असावे; जर तुम्हाला सबग्राफवर सिग्नल केलेलं नसलं तर तुम्हाला थोडीसी क्युरेशन वाढवावी (एक थोडीसी असांतर किंवा 1 GRT आढवंच काही आहे).
+Also please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).

-हस्तांतरण साधन उघडण्यात आल्यावर, तुम्ही "प्राप्ति वॉलेट पत्ता" क्षेत्रात L2 वॉलेट पत्ता भरू शकता - **तुम्ही येथे योग्य पत्ता नोंदवला आहे हे खात्री करा**. सबग्राफ हस्तांतरित करण्याच्या वर्तमानीत तुम्ही आपल्या वॉलेटवर ट्रॅन्झॅक्शन सुरू करण्याच्या आवश्यकता आहे (लक्षात घ्या की L2 गॅससाठी काही ETH मूळ आहे); हे हस्तांतरणाच्या प्रक्रियेचे सुरूवात करेल आणि आपल्या L1 सबग्राफला कमी करेल (अद्यतनसाठी "सिग्न.
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).

-जर तुम्ही हे कदम पूर्ण करता आहात, नुकसान होऊ नये हे सुनिश्चित करा की 7 दिवसांपेक्षा कमी वेळेत पुन्हा आपल्या क्रियान्वयनाचा तपास करा, किंवा सबग्राफ आणि तुमच्या सिग्नल GRT नष्ट होईल.
हे त्याच्या कारणे आहे की आर्बिट्रमवर L1-L2 संदेशाचा कसा काम करतो: ब्रिजद्वारे पाठवलेले संदेश "पुन्हा प्रयत्नीय पर्यायपत्रे" आहेत ज्याचा क्रियान्वयन 7 दिवसांच्या आत अंदाजपत्री केला पाहिजे, आणि सुरुवातीचा क्रियान्वयन, आर्बिट्रमवर गॅस दरात वाढ असल्यास, पुन्हा प्रयत्न करण्याची आवश्यकता असेल. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## पायरी 2: सबग्राफ L2 वर येण्याची वाट पाहत आहे +## Step 2: Waiting for the Subgraph to get to L2 -तुम्ही हस्तांतरण सुरू केल्यानंतर, तुमच्या L1 सबग्राफला L2 वर हस्तांतरित करण्याचे संदेश Arbitrum ब्रिजद्वारे प्रसारित होणे आवश्यक आहे. हे किंवा. 20 मिनिटे लागतात (ब्रिज त्या व्यक्तिमत्वीकृत आहे की L1 मेननेट ब्लॉक जो लेनदार चेन reorgs साठी "सुरक्षित" आहे, त्यातील संदेश किंवा लेनदार चेन reorgs साठी "सुरक्षित" आहे, त्यातील संदेश होऊन जातो). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). ही प्रतीक्षा वेळ संपल्यानंतर, आर्बिट्रम L2 करारांवर हस्तांतरण स्वयं-अंमलबजावणी करण्याचा प्रयत्न करेल. @@ -80,7 +80,7 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho ## पायरी 3: हस्तांतरणाची पुष्टी करणे -अधिकांश प्रकरणात, आपल्याला प्राथमिकपणे संघटित ल2 गॅस असेल, ज्यामुळे सबग्राफला आर्बिट्रम कॉन्ट्रॅक्टवर प्राप्त करण्याच्या ट्रॅन्झॅक्शनची स्वत: क्रियारत झाली पाहिजे. कितीतरी प्रकरणात, आर्बिट्रमवर गॅस दरात वाढ असल्यामुळे ह्या स्वत: क्रियान्वितीत अयशस्वीता आपल्याला काहीतरी किंवा काहीतरी संभावना आहे. 
ह्या प्रकारे, आपल्या सबग्राफला L2 वर पाठवण्याच्या "पर्यायपत्रास" क्रियारत बसण्यासाठी अपूर्ण ठरेल आणि 7 दिवसांच्या आत पुन्हा प्रयत्न करण्याची आवश्यकता आहे. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. असे असल्यास, तुम्हाला आर्बिट्रमवर काही ETH असलेले L2 वॉलेट वापरून कनेक्ट करावे लागेल, तुमचे वॉलेट नेटवर्क आर्बिट्रमवर स्विच करा आणि व्यवहाराचा पुन्हा प्रयत्न करण्यासाठी "हस्तांतरण पुष्टी करा" वर क्लिक करा. @@ -88,33 +88,33 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho ## पायरी 4: L2 वर हस्तांतरण पूर्ण करणे -आता, आपला सबग्राफ आणि GRT आर्बिट्रमवर प्राप्त झालेले आहेत, परंतु सबग्राफ अद्याप प्रकाशित झालेला नाही. आपल्याला प्राप्ति वॉलेटसाठी निवडलेल्या L2 वॉलेटशी कनेक्ट करण्याची आवश्यकता आहे, आपला वॉलेट नेटवर्क आर्बिट्रमवर स्विच करण्याची आणि "पब्लिश सबग्राफ" वर क्लिक करण्याची आवश्यकता आहे +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -हे सबग्राफ प्रकाशित करेल आहे, त्यामुळे त्याचे सेवन करणारे इंडेक्सर्स आर्बिट्रमवर संचालित आहेत, आणि त्यामुळे ला ट्रान्सफर केलेल्या GRT वापरून संवाद सिग्नल क्युरेशन निर्माणित केले जाईल. 
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that was transferred from L1.

## पायरी 5: क्वेरी URL अपडेट करत आहे

-तुमचा सबग्राफ आर्बिट्रममध्ये यशस्वीरित्या हस्तांतरित केला गेला आहे! सबग्राफची क्वेरी करण्यासाठी, नवीन URL असेल:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:

`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`

-लक्षात घ्या की आर्बिट्रमवर सबग्राफचे ID मुख्यनेटवर आपल्याला आहे आणि त्याच्या परिपर्यंत आपल्याला आर्बिट्रमवर आहे, परंतु आपल्याला वेगवेगळा सबग्राफ ID असेल, परंतु तुम्ही सदैव तो Explorer किंवा Studio वर शोधू शकता. उपरोक्त (वरील "सिग्नलसह, आपल्या L1 सबग्राफसह आणि क्वेरी URLसह काय करता येईल" पहा) म्हणजे पुराणे L1 URL थोडेसे वेळाने समर्थित राहील, परंतु आपल्याला सबग्राफ L2 वर सिंक केल्यानंतर आपल्या क्वेरीजला त्वरित नवीन पत्ता देणे शिफारसले जाते.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.

## तुमचे क्युरेशन आर्बिट्रम (L2) वर कसे हस्तांतरित करावे

-## L2 मध्ये सबग्राफ ट्रान्सफरवरील क्युरेशनचे काय होते हे समजून घेणे
+## Understanding what happens to curation on Subgraph transfers to L2

-सबग्राफच्या मालकाने सबग्राफला आर्बिट्रमवर हस्तांतरित केल्यास, सर्व सबग्राफच्या सिग्नलला एकाच वेळी GRT मध्ये रूपांतरित केला जातो. ही "ऑटो-माइग्रेटेड" सिग्नलसाठी लागू होते, अर्थात सबग्राफाच्या कोणत्याही संस्करण किंवा डिप्लॉयमेंटसाठी नसलेली सिग्नल किंवा नवीन संस्करणाच्या आधीच्या सबग्राफच्या आवृत्तीस पुरावीत केली जाते.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -सिग्नलपासून GRTमध्ये असे रूपांतरण होण्याचे त्याचे आपल्याला उदाहरण दिले आहे ज्याच्यासाठी जर सबग्राफमालक सबग्राफला L1मध्ये पुरावा दिला तर. सबग्राफ विकल्प किंवा हस्तांतरित केला जाता तेव्हा सर्व सिग्नलला समयानुसार "दहन" केला जातो (क्युरेशन बोंडिंग कर्वच्या वापराने) आणि निकाललेल्या GRTने GNS स्मार्ट कॉन्ट्रॅक्टने (जो सबग्राफ अपग्रेड्स आणि ऑटो-माइग्रेटेड सिग्नलच्या व्यवस्थापनासाठी जबाबदार आहे) साठवलेले आहे. प्रत्येक क्युरेटरने त्या सबग्राफसाठी कितीशेअर्स आहेत त्या प्रमाणे त्याच्याकडे गणना असते, आणि त्यामुळे त्याच्या शेअर्सचा GRTचा दावा असतो. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -सबग्राफ मालकाशी संबंधित या GRT चा एक अंश सबग्राफसह L2 ला पाठविला जातो. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -आत्ताच, संशोधित GRTमध्ये कोणतीही अधिक क्वेरी फीस घटना आहे नसून, क्युरेटर्सला आपली GRT वापरण्याची किंवा त्याची L2वर त्याच्या आपल्या वर्णनासाठी हस्तांतरित करण्याची पर्वानगी आहे, ज्याच्या माध्यमातून नवीन क्युरेशन सिग्नल तयार केला जाऊ शकतो. हे करण्यासाठी त्वरित किंवा अनिश्चित काळासाठी कोणतीही जरूरत नाही कारण GRT अनश्वास पाहिजे आणि प्रत्येकाला त्याच्या शेअर्सच्या प्रमाणानुसार एक निश्चित वस्तु मिळणार आहे, कोणत्या वेळीही. 
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.

## तुमचे L2 वॉलेट निवडत आहे

@@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho

हस्तांतरण सुरू करण्यापूर्वी, तुम्ही त्याच्या L2 वर क्युरेशनचा मालक होणारा पत्ता निवडणे आवश्यक आहे (वरील "तुमच्या L2 वॉलेटची निवड" पाहा), आणि आर्बिट्रमवर संदेशाच्या क्रियान्वयनाचा पुन्हा प्रयत्न केल्यास लागणारे गॅससाठी काही ETH आधीच्या पुलाकीत सांडलेले असले पर्याय सुरुवातीच्या वेळी किंवा पुन्हा प्रयत्नीय पर्यायसाठी. आपल्याला काही एक्सचेंजवरून ETH खरेदी करून त्याची तुमच्या आर्बिट्रमवर स्थानांतरित करून सुरू आहे, किंवा आपल्याला मुख्यनेटवरून L2 वर ETH पाठवण्याच्या आर्बिट्रम ब्रिजचा वापर करून किंवा ETH खरेदी करून L2 वर पाठवण्याच्या कामाकरीत करण्याची शक्यता आहे: [bridge.arbitrum.io](http://bridge.arbitrum.io)- आर्बिट्रमवर गॅस दरात तोंड असल्यामुळे, तुम्हाला केवळ किंवा 0.01 ETH ची किंमत दरम्यानची आवश्यकता असेल.

-आपल्याला संवादित केलेल्या सबग्राफ्टला L2 वर हस्तांतरित केले आहे तर, आपल्याला एक संदेश दिलेला जाईल ज्याच्या माध्यमातून Explorer वरून आपल्याला सांगण्यात येईल की आपण हस्तांतरित सबग्राफ्टच्या संवादनी आहात.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.

-सबग्राफ्ट पेज पाहताना, आपण संवादनाची पुनर्प्राप्ती किंवा हस्तांतरित करण्याचा निवड करू शकता. "Transfer Signal to Arbitrum" वर क्लिक केल्यास, हस्तांतरण साधने उघडतील.
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.
![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho ## L1 वर तुमचे क्युरेशन मागे घेत आहे -जर आपल्याला आपल्या GRT ला L2 वर पाठवायचं आवडत नसलं तर किंवा आपल्याला GRT ला मॅन्युअली ब्रिज करण्याची प्राथमिकता आहे, तर आपल्याला L1 वरील आपल्या क्युरेटेड GRT ला काढून घ्यायला दिले आहे. सबग्राफच्या पृष्ठाच्या बॅनरवरून "Withdraw Signal" निवडा आणि व्यवस्थापन प्रक्रियेची पुष्टी करा; GRT आपल्या क्युरेटर पत्त्याला पाठविला जाईल. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/mr/archived/sunrise.mdx b/website/src/pages/mr/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/mr/archived/sunrise.mdx +++ b/website/src/pages/mr/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. 
+The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. 
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. 
+However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? 
-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. 
The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/src/pages/mr/global.json b/website/src/pages/mr/global.json index b57692ddb6cf..9f39ea376ca1 100644 --- a/website/src/pages/mr/global.json +++ b/website/src/pages/mr/global.json @@ -6,6 +6,7 @@ "subgraphs": "सबग्राफ", "substreams": "उपप्रवाह", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "वर्णन", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "वर्णन", + "liveResponse": "Live Response", + "example": "उदाहरण" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/mr/index.json b/website/src/pages/mr/index.json index a5d97255046e..add2f95c68b0 100644 --- a/website/src/pages/mr/index.json +++ b/website/src/pages/mr/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "सबग्राफ", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -39,7 +39,7 @@ "title": "Supported Networks", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "प्रकार", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", @@ -67,9 +67,9 @@ "tableHeaders": { "name": "Name", "id": "ID", - "subgraphs": "Subgraphs", - "substreams": "Substreams", - "firehose": "Firehose", + "subgraphs": "सबग्राफ", + "substreams": "उपप्रवाह", + "firehose": "फायरहोस", "tokenapi": "Token API" } }, @@ -80,7 +80,7 @@ "description": "Kickstart your journey into subgraph development." }, "substreams": { - "title": "Substreams", + "title": "उपप्रवाह", "description": "Stream high-speed data for real-time indexing." }, "timeseries": { @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." 
+ "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/mr/indexing/chain-integration-overview.mdx b/website/src/pages/mr/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/mr/indexing/chain-integration-overview.mdx +++ b/website/src/pages/mr/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. 
Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/mr/indexing/new-chain-integration.mdx b/website/src/pages/mr/indexing/new-chain-integration.mdx index e45c4b411010..670e06c752c3 100644 --- a/website/src/pages/mr/indexing/new-chain-integration.mdx +++ b/website/src/pages/mr/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. 
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. 
(It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). 
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/mr/indexing/overview.mdx b/website/src/pages/mr/indexing/overview.mdx index 0113721170dd..9d78f7612f01 100644 --- a/website/src/pages/mr/indexing/overview.mdx +++ b/website/src/pages/mr/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i प्रोटोकॉलमध्ये स्टॅक केलेले GRT वितळण्याच्या कालावधीच्या अधीन आहे आणि जर इंडेक्सर्स दुर्भावनापूर्ण असतील आणि ऍप्लिकेशन्सना चुकीचा डेटा देत असतील किंवा ते चुकीच्या पद्धतीने इंडेक्स करत असतील तर ते कमी केले जाऊ शकतात. इंडेक्सर्स नेटवर्कमध्ये योगदान देण्यासाठी डेलिगेटर्सकडून डेलिगेटेड स्टेकसाठी बक्षिसे देखील मिळवतात. -इंडेक्सर्स सबग्राफच्या क्युरेशन सिग्नलच्या आधारे इंडेक्समध्ये सबग्राफ निवडतात, जिथे क्यूरेटर्स जीआरटी घेतात जेणेकरून कोणते सबग्राफ उच्च-गुणवत्तेचे आहेत आणि त्यांना प्राधान्य दिले पाहिजे. ग्राहक (उदा. ऍप्लिकेशन्स) मापदंड देखील सेट करू शकतात ज्यासाठी इंडेक्सर्स त्यांच्या सबग्राफसाठी क्वेरी प्रक्रिया करतात आणि क्वेरी शुल्क किंमतीसाठी प्राधान्ये सेट करतात. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. 
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. 
A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. 
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs
(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. 
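The components above (Postgres store, data endpoint, IPFS node) are all wired in when Graph Node starts. As a rough, hypothetical sketch of that wiring — URLs and credentials are placeholders, and the flag names follow `graph-node`'s CLI:

```shell
# Hypothetical local wiring of Graph Node to its three dependencies.
# All endpoints below are placeholders for your own infrastructure.
graph-node \
  --postgres-url postgresql://graph:password@localhost:5432/graph-node \
  --ethereum-rpc mainnet:http://localhost:8545 \
  --ipfs 127.0.0.1:5001
```

The Indexer service and agent then connect to this node via its admin and status ports.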
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### आलेख नोड -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
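For illustration, the global threshold from the example above could be applied with the Indexer CLI. This is a sketch: the value and units are taken from the example, and the exact option spelling should be checked against your CLI version.

```shell
# Sketch: global rule so that any deployment with more than 5 GRT of
# allocated stake is chosen for indexing (thresholds are compared under
# decisionBasis "rules").
graph indexer rules set global minStake 5 decisionBasis rules

# Inspect the resulting rule.
graph indexer rules get global
```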
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks. 
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing nondeterministically. diff --git a/website/src/pages/mr/indexing/supported-network-requirements.mdx b/website/src/pages/mr/indexing/supported-network-requirements.mdx index a1a9e0338649..eddbc8af8460 100644 --- a/website/src/pages/mr/indexing/supported-network-requirements.mdx +++ b/website/src/pages/mr/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | हिमस्खलन | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | इथरियम | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/mr/indexing/tap.mdx b/website/src/pages/mr/indexing/tap.mdx index f6248123d886..dd5401d6e9d5 100644 --- a/website/src/pages/mr/indexing/tap.mdx +++ b/website/src/pages/mr/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## सविश्लेषण -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. +GraphTally allows a sender to make multiple payments, **Receipts**, to a receiver; these payments are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**.
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/mr/indexing/tooling/graph-node.mdx b/website/src/pages/mr/indexing/tooling/graph-node.mdx index 30595816e62c..687f1ea42338 100644 --- a/website/src/pages/mr/indexing/tooling/graph-node.mdx +++ b/website/src/pages/mr/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: आलेख नोड --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## आलेख नोड -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL डेटाबेस -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### नेटवर्क क्लायंट In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
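To illustrate why an archive node is needed for `eth_calls` during indexing: EIP-1898 allows the block parameter of `eth_call` to be an object naming a specific historical block rather than just `"latest"`. A hedged example request body (the contract address, call data, and block hash are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_call",
  "params": [
    { "to": "0x...", "data": "0x..." },
    { "blockHash": "0x...", "requireCanonical": true }
  ]
}
```

A full node that has pruned historical state cannot answer such calls for old blocks, which is why an EIP-1898-capable archive node is required.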
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### आयपीएफएस नोड्स -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### प्रोमिथियस मेट्रिक्स सर्व्हर @@ -79,8 +79,8 @@ When it is running Graph Node exposes the following ports: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ When it is running Graph Node exposes the following ports: ## प्रगत ग्राफ नोड कॉन्फिगरेशन -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### एकाधिक ग्राफ नोड्स -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### डिप्लॉयमेंट नियम -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. उपयोजन नियम कॉन्फिगरेशनचे उदाहरण: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. 
When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) are an indication that there are too few connections available; high wait times there can also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### एकाधिक नेटवर्क समर्थन -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
@@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### ग्राफ नोडचे व्यवस्थापन -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### लॉगिंग -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### सबग्राफसह कार्य करणे +### Working with Subgraphs #### अनुक्रमणिका स्थिती API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. 
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - परिणामी डेटा स्टोअरमध्ये लिहित आहे -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. अनुक्रमणिका मंद होण्याची सामान्य कारणे: @@ -276,24 +276,24 @@ These stages are pipelined (i.e. they can be executed in parallel), but they are - प्रदाता स्वतः साखळी डोके मागे घसरण - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
-#### अयशस्वी सबग्राफ +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. 
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### ब्लॉक आणि कॉल कॅशे -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### समस्या आणि त्रुटींची चौकशी करणे -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. 
If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### प्रश्नांचे विश्लेषण करत आहे -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.

-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.

-#### सबग्राफ काढून टाकत आहे
+#### Removing Subgraphs

> This is new functionality, which will be available in Graph Node 0.29.x

-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
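As an illustrative sketch of the `graphman` operations covered above — the config path and the `sgd21.pair` / `sgd21` identifiers are placeholders, not values from this document:

```
# Turn the account-like optimization on, and off again, for one table
graphman --config config.toml stats account-like sgd21.pair
graphman --config config.toml stats account-like --clear sgd21.pair

# Remove a deployment and all its indexed data (Graph Node 0.29.x and later)
graphman --config config.toml drop sgd21
```

Remember the up-to-5-minute delay before query nodes pick up an account-like change, and check query latency afterwards before leaving the optimization enabled.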
diff --git a/website/src/pages/mr/indexing/tooling/graphcast.mdx b/website/src/pages/mr/indexing/tooling/graphcast.mdx index 46e7c77e864d..966849766b7a 100644 --- a/website/src/pages/mr/indexing/tooling/graphcast.mdx +++ b/website/src/pages/mr/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Is there something you'd like to learn from or share with your fellow Indexers i ग्राफकास्ट SDK (सॉफ्टवेअर डेव्हलपमेंट किट) विकसकांना रेडिओ तयार करण्यास अनुमती देते, जे गॉसिप-शक्तीवर चालणारे अनुप्रयोग आहेत जे निर्देशांक दिलेल्या उद्देशासाठी चालवू शकतात. खालील वापराच्या प्रकरणांसाठी काही रेडिओ तयार करण्याचा आमचा मानस आहे (किंवा रेडिओ तयार करू इच्छिणाऱ्या इतर विकासकांना/संघांना समर्थन पुरवणे): -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### अधिक जाणून घ्या diff --git a/website/src/pages/mr/resources/benefits.mdx b/website/src/pages/mr/resources/benefits.mdx index 4ffee4b07761..7bf7e9392b3f 100644 --- a/website/src/pages/mr/resources/benefits.mdx +++ b/website/src/pages/mr/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -सबग्राफवर क्युरेटिंग सिग्नल हा पर्यायी एक-वेळचा, निव्वळ-शून्य खर्च आहे (उदा., $1k सिग्नल सबग्राफवर क्युरेट केला जाऊ शकतो आणि नंतर मागे घेतला जाऊ शकतो—प्रक्रियेत परतावा मिळविण्याच्या संभाव्यतेसह). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/mr/resources/glossary.mdx b/website/src/pages/mr/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/mr/resources/glossary.mdx +++ b/website/src/pages/mr/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/mr/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/mr/resources/migration-guides/assemblyscript-migration-guide.mdx index b989c4de4c11..3983adc51b62 100644 --- a/website/src/pages/mr/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/mr/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -हे सबग्राफ विकसकांना AS भाषा आणि मानक लायब्ररीची नवीन वैशिष्ट्ये वापरण्यास सक्षम करेल. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest.

## Features

@@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `

## How to upgrade?

-1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`:
+1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`:

```yaml
...
@@ -52,7 +52,7 @@ dataSources:
...
  mapping:
...
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
...
```

@@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null

maybeValue.aMethod()
```

-तुम्हाला कोणती निवड करायची याची खात्री नसल्यास, आम्ही नेहमी सुरक्षित आवृत्ती वापरण्याची शिफारस करतो. जर मूल्य अस्तित्वात नसेल तर तुम्ही तुमच्या सबग्राफ हँडलरमध्ये रिटर्नसह फक्त लवकर इफ स्टेटमेंट करू इच्छित असाल.
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler.

### Variable Shadowing

@@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing.

### Null Comparisons

-By doing the upgrade on your subgraph, sometimes you might get errors like these:
+By doing the upgrade on your Subgraph, sometimes you might get errors like these:

```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y)

wrapper.n = wrapper.n + x // doesn't give compile time errors as it should
```
+We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first.

```typescript
let wrapper = new Wrapper(y)

@@ -352,7 +352,7 @@ value.x = 10
value.y = 'content'
```

-It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this:
+It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this:

```typescript
var value = new Type() // initialized

diff --git a/website/src/pages/mr/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/mr/resources/migration-guides/graphql-validations-migration-guide.mdx
index b0910e65fc1b..efe189247930 100644
--- a/website/src/pages/mr/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/mr/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide.

तुमच्या GraphQL ऑपरेशन्समधील समस्या शोधण्यासाठी आणि त्यांचे निराकरण करण्यासाठी तुम्ही CLI माइग्रेशन टूल वापरू शकता. वैकल्पिकरित्या तुम्ही `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` एंडपॉइंट वापरण्यासाठी तुमच्या GraphQL क्लायंटचा एंडपॉइंट अपडेट करू शकता. या एंडपॉइंटवर तुमच्या क्वेरींची चाचणी केल्याने तुम्हाला तुमच्या क्वेरींमधील समस्या शोधण्यात मदत होईल.

-> तुम्हाला [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) किंवा [GraphQL कोड जनरेटर](https://the-guild.dev) वापरत असल्यास, सर्व उपग्राफ स्थलांतरित करण्याची गरज नाही /graphql/codegen), ते तुमच्या क्वेरी वैध असल्याची खात्री करतात.
+> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.

## Migration CLI tool

diff --git a/website/src/pages/mr/resources/roles/curating.mdx b/website/src/pages/mr/resources/roles/curating.mdx
index 2d504102644e..4c73d5b33d31 100644
--- a/website/src/pages/mr/resources/roles/curating.mdx
+++ b/website/src/pages/mr/resources/roles/curating.mdx
@@ -2,37 +2,37 @@ title: क्युरेटिंग
---

-Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index.
+Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index.

## What Does Signaling Mean for The Graph Network?

-Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index.
When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). 
Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## सिग्नल कसे करावे -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -क्युरेटर विशिष्ट सबग्राफ आवृत्तीवर सिग्नल करणे निवडू शकतो किंवा ते त्यांचे सिग्नल त्या सबग्राफच्या नवीनतम उत्पादन बिल्डमध्ये स्वयंचलितपणे स्थलांतरित करणे निवडू शकतात. दोन्ही वैध धोरणे आहेत आणि त्यांच्या स्वतःच्या साधक आणि बाधकांसह येतात. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. 
Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. तुमचा सिग्नल नवीनतम प्रोडक्शन बिल्डवर आपोआप स्थलांतरित होणे हे तुम्ही क्वेरी फी जमा करत असल्याचे सुनिश्चित करण्यासाठी मौल्यवान असू शकते. प्रत्येक वेळी तुम्ही क्युरेट करता तेव्हा 1% क्युरेशन कर लागतो. तुम्ही प्रत्येक स्थलांतरावर 0.5% क्युरेशन कर देखील द्याल. सबग्राफ विकसकांना वारंवार नवीन आवृत्त्या प्रकाशित करण्यापासून परावृत्त केले जाते - त्यांना सर्व स्वयं-स्थलांतरित क्युरेशन शेअर्सवर 0.5% क्युरेशन कर भरावा लागतो. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
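The fee figures quoted in this section (a 1% curation tax on a fresh signal, 0.5% on each auto-migration) reduce to simple arithmetic. The sketch below is illustrative only: the rates are copied from the prose above, and the helper names are invented for this example, not protocol code.

```python
# Illustrative sketch of the curation fee schedule described in this doc:
# 1% burned on each fresh signal, 0.5% charged on each auto-migration.
# Rates come from the prose above; this is not protocol code.

INITIAL_CURATION_TAX = 0.01   # 1% burned when signaling on a Subgraph
AUTO_MIGRATE_TAX = 0.005      # 0.5% charged on each auto-migration

def net_signal_after_initial_tax(grt: float) -> float:
    """GRT effectively signaled after the 1% curation tax is burned."""
    return grt * (1 - INITIAL_CURATION_TAX)

def signal_after_migrations(grt: float, migrations: int) -> float:
    """Remaining signal after the initial tax plus n auto-migrations."""
    signal = net_signal_after_initial_tax(grt)
    for _ in range(migrations):
        signal *= (1 - AUTO_MIGRATE_TAX)
    return signal

print(round(net_signal_after_initial_tax(1000), 2))  # 990.0
print(round(signal_after_migrations(1000, 2), 2))    # 980.12
```

The compounding on migrations is why the doc discourages very frequent version updates: every auto-migration shaves another 0.5% off all auto-migrated shares.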
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## जोखीम 1. द ग्राफमध्ये क्वेरी मार्केट मूळतः तरुण आहे आणि नवीन मार्केट डायनॅमिक्समुळे तुमचा %APY तुमच्या अपेक्षेपेक्षा कमी असण्याचा धोका आहे. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. बगमुळे सबग्राफ अयशस्वी होऊ शकतो. अयशस्वी सबग्राफ क्वेरी शुल्क जमा करत नाही. परिणामी, विकसक बगचे निराकरण करेपर्यंत आणि नवीन आवृत्ती तैनात करेपर्यंत तुम्हाला प्रतीक्षा करावी लागेल. - - तुम्ही सबग्राफच्या नवीनतम आवृत्तीचे सदस्यत्व घेतले असल्यास, तुमचे शेअर्स त्या नवीन आवृत्तीमध्ये स्वयंचलितपणे स्थलांतरित होतील. यावर 0.5% क्युरेशन कर लागेल. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. 
Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## क्युरेशन FAQs ### 1. क्युरेटर्स किती % क्वेरी फी मिळवतात? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. कोणते सबग्राफ उच्च दर्जाचे आहेत हे मी कसे ठरवू? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. 
It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. 
When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. मी माझे क्युरेशन शेअर्स विकू शकतो का? diff --git a/website/src/pages/mr/resources/subgraph-studio-faq.mdx index f5729fb6cfa8..e50ecf505404 100644 --- a/website/src/pages/mr/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/mr/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: सबग्राफ स्टुडिओ FAQ ## 1. सबग्राफ स्टुडिओ म्हणजे काय? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. मी API की कशी तयार करू? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th API की तयार केल्यानंतर, सिक्युरिटी विभागात, तुम्ही डोमेन परिभाषित करू शकता जे विशिष्ट क्वेरी करू शकतात API. -## 5. मी माझा सबग्राफ दुसर्‍या मालकाकडे हस्तांतरित करू शकतो का? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'.
-लक्षात ठेवा की एकदा स्‍टुडिओमध्‍ये सबग्राफ स्‍थानांतरित केल्‍यानंतर तुम्‍ही तो पाहू किंवा संपादित करू शकणार नाही. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. मला वापरायचा असलेल्या सबग्राफचा मी विकसक नसल्यास सबग्राफसाठी क्वेरी URL कसे शोधू? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -लक्षात ठेवा की तुम्ही API की तयार करू शकता आणि नेटवर्कवर प्रकाशित केलेल्या कोणत्याही सबग्राफची क्वेरी करू शकता, जरी तुम्ही स्वतः सबग्राफ तयार केला असला तरीही. नवीन API की द्वारे या क्वेरी, नेटवर्कवरील इतर कोणत्याही सशुल्क क्वेरी आहेत. +Remember that you can create an API key and query any Subgraph published to the network, even if you built the Subgraph yourself. These queries, made via the new API key, are paid queries like any other on the network. diff --git a/website/src/pages/mr/resources/tokenomics.mdx index 0fe45e9d9969..168cbea5509b 100644 --- a/website/src/pages/mr/resources/tokenomics.mdx +++ b/website/src/pages/mr/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## सविश्लेषण -The Graph is a decentralized protocol that enables easy access to blockchain data.
It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. क्युरेटर - इंडेक्सर्ससाठी सर्वोत्तम सबग्राफ शोधा +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. इंडेक्सर्स - ब्लॉकचेन डेटाचा कणा @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
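The delegation economics just described can be sketched as plain percentages. Both the reward rate and the 0.5% delegation tax (quoted in the burning section of this page) are treated as simple multipliers here; the helper names and sample figures are assumptions for illustration, not live protocol values.

```python
# Illustrative sketch of the delegation figures used on this page: an
# Indexer's advertised reward rate and the 0.5% delegation tax.
# Helper names and numbers are assumptions for this example only.

DELEGATION_TAX = 0.005  # 0.5% of delegated GRT is burned, per this page

def grt_delegated_after_tax(grt: float) -> float:
    """GRT actually delegated after the 0.5% delegation tax is burned."""
    return grt * (1 - DELEGATION_TAX)

def annual_delegation_rewards(delegated_grt: float, reward_rate: float) -> float:
    """Approximate yearly GRT rewards at the Indexer's effective rate."""
    return delegated_grt * reward_rate

# The 15k GRT at 10% example used in this section:
print(round(annual_delegation_rewards(15_000, 0.10), 2))  # 1500.0
print(round(grt_delegated_after_tax(15_000), 2))          # 14925.0
```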
For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. 
Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### सबग्राफ तयार करणे +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### विद्यमान सबग्राफची चौकशी करत आहे +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. 
They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. 
That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. 
Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/mr/sps/introduction.mdx index 69be7173e0cf..d22d998dee0d 100644 --- a/website/src/pages/mr/sps/introduction.mdx +++ b/website/src/pages/mr/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## सविश्लेषण -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
### अतिरिक्त संसाधने @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/mr/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/mr/sps/sps-faq.mdx +++ b/website/src/pages/mr/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph.
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? 
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. 
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/mr/sps/triggers.mdx b/website/src/pages/mr/sps/triggers.mdx index f5f05b02f759..df877d792fad 100644 --- a/website/src/pages/mr/sps/triggers.mdx +++ b/website/src/pages/mr/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## सविश्लेषण -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). 
### अतिरिक्त संसाधने diff --git a/website/src/pages/mr/sps/tutorial.mdx b/website/src/pages/mr/sps/tutorial.mdx index 7f038fe09059..f72e82459cc5 100644 --- a/website/src/pages/mr/sps/tutorial.mdx +++ b/website/src/pages/mr/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## सुरु करूया @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/mr/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/mr/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/mr/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events.
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/mr/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/mr/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/mr/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
diff --git a/website/src/pages/mr/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/mr/subgraphs/best-practices/grafting-hotfix.mdx index 0989034a01a3..63b5d9bbe017 100644 --- a/website/src/pages/mr/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### सविश्लेषण -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## अतिरिक्त संसाधने - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/mr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/mr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/mr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
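The size difference between the two ID styles can be sketched in plain TypeScript (this is not graph-ts; `Uint8Array` stands in for `Bytes`, the hash value is made up, and the 4-byte packing is illustrative — graph-ts exposes the bytes form as `event.transaction.hash.concatI32(event.logIndex.toI32())`):

```typescript
// Illustrative sketch: string-concatenated ID vs. bytes-concatenated ID.
const txHashHex = "a16081f360e3847006db660bae1c6d1f2e17ec2a77acb8db36ad30e5f3f3f3f3";
const logIndex = 42;

// String-style ID (discouraged): human-readable, but variable-length text
// that the store must index and compare character by character.
const stringId = `0x${txHashHex}-${logIndex}`;

// Bytes-style ID: the 32 hash bytes followed by the log index packed into 4 bytes.
const hashBytes = Uint8Array.from(
  txHashHex.match(/.{2}/g)!.map((byte) => parseInt(byte, 16))
);
const indexBytes = new Uint8Array(new Int32Array([logIndex]).buffer); // layout illustrative
const bytesId = new Uint8Array(hashBytes.length + indexBytes.length);
bytesId.set(hashBytes);
bytesId.set(indexBytes, hashBytes.length);

console.log(stringId.length); // 69 characters
console.log(bytesId.length); // 36 raw bytes
```

The fixed-width binary value is both smaller and cheaper to compare, which is where the indexing and query gains come from.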
diff --git a/website/src/pages/mr/subgraphs/best-practices/pruning.mdx b/website/src/pages/mr/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/mr/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
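Taken together, the three `prune` options sit under an `indexerHints` block in `subgraph.yaml`; a minimal sketch (field values illustrative, surrounding manifest fields abbreviated):

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
indexerHints:
  prune: auto # or a block count such as 100000, or `never` to keep full history
dataSources: ...
```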
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/mr/subgraphs/best-practices/timeseries.mdx b/website/src/pages/mr/subgraphs/best-practices/timeseries.mdx index 239c7e0158db..c690981afd7c 100644 --- a/website/src/pages/mr/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## सविश्लेषण @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ A timeseries entity represents raw data points collected over time. It is define type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ An aggregation entity computes aggregated values from a timeseries source. It is type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. 
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/mr/subgraphs/billing.mdx b/website/src/pages/mr/subgraphs/billing.mdx index 7126ce22520f..3199bdea1317 100644 --- a/website/src/pages/mr/subgraphs/billing.mdx +++ b/website/src/pages/mr/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/mr/subgraphs/developing/creating/advanced.mdx b/website/src/pages/mr/subgraphs/developing/creating/advanced.mdx index c24f72030078..e83051efd7a9 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## सविश्लेषण -Add and implement advanced subgraph features to enhanced your subgraph's built. 
+Add and implement advanced Subgraph features to enhance your Subgraph's build. -Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more.
-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## गैर-घातक त्रुटी -आधीच समक्रमित केलेल्या सबग्राफ्सवर अनुक्रमणिका त्रुटी, डीफॉल्टनुसार, सबग्राफ अयशस्वी होण्यास आणि समक्रमण थांबवण्यास कारणीभूत ठरतील. सबग्राफ वैकल्पिकरित्या त्रुटींच्या उपस्थितीत समक्रमण सुरू ठेवण्यासाठी कॉन्फिगर केले जाऊ शकतात, हँडलरने केलेल्या बदलांकडे दुर्लक्ष करून, ज्यामुळे त्रुटी उद्भवली. हे सबग्राफ लेखकांना त्यांचे सबग्राफ दुरुस्त करण्यासाठी वेळ देते जेव्हा की नवीनतम ब्लॉकच्या विरूद्ध क्वेरी चालू ठेवल्या जातात, जरी त्रुटीमुळे परिणाम विसंगत असू शकतात. लक्षात घ्या की काही त्रुटी अजूनही नेहमीच घातक असतात. गैर-घातक होण्यासाठी, त्रुटी निश्चितपणे ज्ञात असणे आवश्यक आहे. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. 
File data sources support fetching files from IPFS and from Arweave. > हे ऑफ-चेन डेटाच्या निर्धारवादी अनुक्रमणिकेसाठी तसेच अनियंत्रित HTTP-स्रोत डेटाच्या संभाव्य परिचयासाठी देखील पाया घालते. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### तुमचे सबग्राफ उपयोजित करत आहे +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -फाइल डेटा स्रोत हँडलर आणि संस्था इतर सबग्राफ संस्थांपासून वेगळ्या केल्या जातात, ते कार्यान्वित केल्यावर ते निर्धारवादी आहेत याची खात्री करून आणि साखळी-आधारित डेटा स्रोतांचे दूषित होणार नाही याची खात्री करतात. 
विशिष्ट असणे: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> बहुतेक वापर-प्रकरणांसाठी ही मर्यादा समस्याप्रधान नसावी, परंतु काहींसाठी ते जटिलता आणू शकते. सबग्राफमध्‍ये तुमच्‍या फाईल-आधारित डेटाचे मॉडेल बनवण्‍यात तुम्‍हाला समस्या येत असल्‍यास कृपया डिस्‍कॉर्ड द्वारे संपर्क साधा! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! याव्यतिरिक्त, फाइल डेटा स्रोतावरून डेटा स्रोत तयार करणे शक्य नाही, मग ते ऑनचेन डेटा स्रोत असो किंवा अन्य फाइल डेटा स्रोत. भविष्यात हे निर्बंध उठवले जाऊ शकतात. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. 
-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
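The timing argument in the 3-second/2-second/4-second scenario above can be illustrated with a small self-contained TypeScript sketch (not graph-node internals; the latencies are scaled down and `setTimeout` stands in for real `eth_call` round-trips):

```typescript
// Sequential awaits pay roughly the SUM of the call latencies;
// a parallel engine pays roughly only the MAX (the slowest call).
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fetchSequential(latencies: number[]): Promise<number> {
  const start = Date.now();
  for (const ms of latencies) await delay(ms); // one simulated eth_call after another
  return Date.now() - start;
}

async function fetchParallel(latencies: number[]): Promise<number> {
  const start = Date.now();
  await Promise.all(latencies.map(delay)); // all simulated calls in flight at once
  return Date.now() - start;
}

async function main() {
  const latencies = [300, 200, 400]; // ms, standing in for the 3 s / 2 s / 4 s calls
  console.log("sequential:", await fetchSequential(latencies), "ms"); // ≈ sum (900 ms)
  console.log("parallel:  ", await fetchParallel(latencies), "ms"); // ≈ max (400 ms)
}
main();
```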
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -बेस डेटा इंडेक्स करण्याऐवजी कॉपीचे ग्राफ्टिंग केल्यामुळे, सुरवातीपासून इंडेक्स करण्यापेक्षा इच्छित ब्लॉकमध्ये सबग्राफ मिळवणे खूप जलद आहे, जरी सुरुवातीच्या डेटा कॉपीला खूप मोठ्या सबग्राफसाठी बरेच तास लागू शकतात. ग्रॅफ्टेड सबग्राफ सुरू होत असताना, ग्राफ नोड आधीपासून कॉपी केलेल्या घटक प्रकारांबद्दल माहिती लॉग करेल. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -ग्राफ्टेड सबग्राफ GraphQL स्कीमा वापरू शकतो जो बेस सबग्राफपैकी एकाशी एकसारखा नसतो, परंतु त्याच्याशी फक्त सुसंगत असतो. 
ती स्वतःच्या अधिकारात वैध सबग्राफ स्कीमा असणे आवश्यक आहे, परंतु खालील प्रकारे बेस सबग्राफच्या स्कीमापासून विचलित होऊ शकते: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - हे घटक प्रकार जोडते किंवा काढून टाकते - हे घटक प्रकारातील गुणधर्म काढून टाकते @@ -560,4 +560,4 @@ When a subgraph whose manifest contains a `graft` block is deployed, Graph Node - हे इंटरफेस जोडते किंवा काढून टाकते - कोणत्या घटकासाठी इंटरफेस लागू केला जातो ते बदलते -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/mr/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/mr/subgraphs/developing/creating/assemblyscript-mappings.mdx index 682aec0ae2a5..e531b0f3d7c9 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## कोड जनरेशन -स्मार्ट कॉन्ट्रॅक्ट्स, इव्हेंट्स आणि संस्थांसोबत काम करणे सोपे आणि टाइप-सुरक्षित करण्यासाठी, ग्राफ CLI सबग्राफच्या GraphQL स्कीमा आणि डेटा स्रोतांमध्ये समाविष्ट केलेल्या कॉन्ट्रॅक्ट ABIs मधून असेंबलीस्क्रिप्ट प्रकार व्युत्पन्न करू शकतो. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. यासह केले जाते @@ -80,7 +80,7 @@ If no value is set for a field in the new entity with the same ID, the field wil graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx index a807b884e30c..c84987c66e17 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: असेंबलीस्क्रिप्ट API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### आवृत्त्या -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | आवृत्ती | रिलीझ नोट्स | | :-: | --- | @@ -223,7 +223,7 @@ It adds the following method on top of the `Bytes` API: The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### अंदाज निर्मिती करणे @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### इथरियम प्रकारांसाठी समर्थन -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. 
Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -पुढील उदाहरण हे स्पष्ट करते. सारखी सबग्राफ स्कीमा दिली +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### स्मार्ट कॉन्ट्रॅक्ट स्टेटमध्ये प्रवेश -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. कॉन्ट्रॅक्टमध्ये प्रवेश करणे हा एक सामान्य पॅटर्न आहे ज्यातून इव्हेंटची उत्पत्ती होते. हे खालील कोडसह साध्य केले आहे: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -सबग्राफचा भाग असलेला इतर कोणताही करार व्युत्पन्न केलेल्या कोडमधून आयात केला जाऊ शकतो आणि वैध पत्त्यावर बांधला जाऊ शकतो. 
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### रिव्हर्ट केलेले कॉल हाताळणे @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false '@graphprotocol/graph-ts' वरून { log } आयात करा ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line.
The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### क्रिप्टो API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
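The manifest-side `context` section described above has a mapping-side counterpart. As an illustrative sketch (the handler, event type, and context key names here are hypothetical, not from the example manifest), values defined under `context` can be read in a mapping through `dataSource.context()`:

```typescript
import { dataSource } from '@graphprotocol/graph-ts'

// Hypothetical handler showing how manifest context values might be read.
// `SomeEvent` stands in for an event type generated by `graph codegen`.
export function handleSomeEvent(event: SomeEvent): void {
  let context = dataSource.context()
  // Keys and types must match the `context` entries in `subgraph.yaml`.
  let label = context.getString('string_example')
  let threshold = context.getBigInt('bigint_example')
  // The values can then parameterize indexing logic, e.g. filtering entities
  // or tagging them with a per-data-source label.
}
```

Because the same mapping code can be reused by several `dataSources` entries with different `context` values, this pattern is a common way to share one handler across similar contracts.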
diff --git a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/common-issues.mdx index d291033f3ff0..868eab208423 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: सामान्य असेंब्लीस्क्रिप्ट समस्या --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
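The closure limitation above is easiest to see with a small illustrative sketch (the variable and function names are hypothetical). A variable declared outside a closure cannot be captured inside it in AssemblyScript, so accumulation is typically written as an explicit indexed loop:

```typescript
// Does not compile in AssemblyScript: `sum` is declared outside the
// closure, and closures cannot capture outer-scope variables.
//
// let sum = 0
// values.forEach((value) => {
//   sum += value // error: `sum` is not visible in this scope
// })

// Works: an indexed loop keeps everything in a single scope.
function total(values: i32[]): i32 {
  let sum = 0
  for (let i = 0; i < values.length; i++) {
    sum += values[i]
  }
  return sum
}
```

The same pattern applies to any `map`/`filter`-style callback that would need outer variables: rewrite it as a plain loop.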
diff --git a/website/src/pages/mr/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/mr/subgraphs/developing/creating/install-the-cli.mdx index 51dfb940edcb..c6892188ddfa 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## सविश्लेषण -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## प्रारंभ करणे @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## सबग्राफ तयार करा ### विद्यमान करारातून -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### सबग्राफच्या उदाहरणावरून -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| आवृत्ती | रिलीझ नोट्स | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/mr/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/mr/subgraphs/developing/creating/ql-schema.mdx index 0e96ef80d066..73a098322d52 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## सविश्लेषण -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -एक-ते-अनेक संबंधांसाठी, संबंध नेहमी 'एका' बाजूला साठवले पाहिजेत आणि 'अनेक' बाजू नेहमी काढल्या पाहिजेत. 'अनेक' बाजूंवर संस्थांचा अ‍ॅरे संचयित करण्याऐवजी अशा प्रकारे नातेसंबंध संचयित केल्याने, अनुक्रमणिका आणि सबग्राफ क्वेरी या दोन्हीसाठी नाटकीयरित्या चांगले कार्यप्रदर्शन होईल. सर्वसाधारणपणे, घटकांचे अ‍ॅरे संग्रहित करणे जितके व्यावहारिक आहे तितके टाळले पाहिजे. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
#### उदाहरण @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -अनेक-ते-अनेक संबंध संचयित करण्याच्या या अधिक विस्तृत मार्गामुळे सबग्राफसाठी कमी डेटा संग्रहित केला जाईल आणि म्हणूनच अनुक्रमणिका आणि क्वेरीसाठी नाटकीयरित्या वेगवान असलेल्या सबग्राफमध्ये. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore in a Subgraph that is often dramatically faster to index and to query. ### स्कीमामध्ये टिप्पण्या जोडत आहे @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## भाषा समर्थित diff --git a/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx index 946093ef308b..daed9ec13c64 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## सविश्लेषण -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
-When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| आवृत्ती | रिलीझ नोट्स | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. 
| +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/mr/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/mr/subgraphs/developing/creating/subgraph-manifest.mdx index a09668000af7..97a686e21ad9 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## सविश्लेषण -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available for querying.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
मॅनिफेस्टसाठी अद्यतनित करण्याच्या महत्त्वाच्या नोंदी आहेत: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call.
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only be triggered in one of two cases: when the function specified is called by an account other than the contract itself, or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Defining a Call Handler @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -कॉन्ट्रॅक्ट इव्हेंट्स किंवा फंक्शन कॉल्सची सदस्यता घेण्याव्यतिरिक्त, सबग्राफला त्याचा डेटा अद्यतनित करायचा असेल कारण साखळीमध्ये नवीन ब्लॉक्स जोडले जातात. हे साध्य करण्यासाठी सबग्राफ प्रत्येक ब्लॉकनंतर किंवा पूर्व-परिभाषित फिल्टरशी जुळणार्‍या ब्लॉक्सनंतर फंक्शन चालवू शकतो. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Supported Filters @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can contain only one block handler for each filter type.
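Pulling the block handler description together, a `blockHandlers` entry with a `call` filter could be sketched like this (the handler name is illustrative):

```yaml
blockHandlers:
  - handler: handleBlockWithCall
    filter:
      kind: call
```

With this filter in place, the handler runs only for blocks that contain at least one call to the data source contract, instead of for every block.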
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### मॅपिंग कार्य -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release Notes | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/mr/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/mr/subgraphs/developing/creating/unit-testing-framework.mdx index e09a384b8e6d..0b3909e9ff3b 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: युनिट चाचणी फ्रेमवर्क --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/).
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project, just open up a terminal, navigate to the root folder of your project, and run `graph test [options] <datasource>` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
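To make the invocation concrete, here are a few hedged examples of how `graph test` is typically run from the project root (the `Gravity` datasource name is illustrative):

```sh
# Run all tests in the test folder
graph test

# Run only the tests for one datasource
graph test Gravity

# Run the tests in coverage mode
graph test -c
```

The available flags are listed in the CLI options section of this guide.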
### CLI Options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### डेमो सबग्राफ +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video Tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test!
👏 -आता आमच्या चाचण्या चालवण्यासाठी तुम्हाला तुमच्या सबग्राफ रूट फोल्डरमध्ये खालील गोष्टी चालवाव्या लागतील: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using the `mockIpfsFile(hash, filePath)` function. The function accepts two arguments: the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx index 8d85033aeb01..3e34f743a6c0 100644 --- a/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph, you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-## एकाधिक नेटवर्कवर सबग्राफ तैनात करणे +## Deploying the Subgraph to multiple networks -काही प्रकरणांमध्ये, तुम्हाला समान सबग्राफ एकाधिक नेटवर्कवर त्याच्या कोडची नक्कल न करता उपयोजित करायचा असेल. यासह येणारे मुख्य आव्हान हे आहे की या नेटवर्कवरील कराराचे पत्ते वेगळे आहेत. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a standard `json` file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ...
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file should now look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia, you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are generated from templates as well.
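As a sketch of the templating approach described in the note, the template manifest can reference placeholders that Mustache fills in from the per-network config files (the file name and placeholder names are illustrative):

```yaml
# subgraph.template.yaml, rendered with e.g.:
#   mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: '{{network}}'
    source:
      address: '{{address}}'
      abi: Gravity
```

Rendering the same template with `config/sepolia.json` swaps in the Sepolia address without touching any mapping code.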
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## सबग्राफ स्टुडिओ सबग्राफ संग्रहण धोरण +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -या धोरणामुळे प्रभावित झालेल्या प्रत्येक सबग्राफला प्रश्नातील आवृत्ती परत आणण्याचा पर्याय आहे. +Every Subgraph affected by this policy has an option to bring the version in question back. -## सबग्राफ आरोग्य तपासत आहे +## Checking Subgraph health -जर सबग्राफ यशस्वीरित्या समक्रमित झाला, तर ते कायमचे चांगले चालत राहण्याचे चांगले चिन्ह आहे.
तथापि, नेटवर्कवरील नवीन ट्रिगर्समुळे तुमच्या सबग्राफची चाचणी न केलेली त्रुटी स्थिती येऊ शकते किंवा कार्यप्रदर्शन समस्यांमुळे किंवा नोड ऑपरेटरमधील समस्यांमुळे ते मागे पडू शकते. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. 
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx index 4769cbc3408b..2319974d45ed 100644 --- a/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- विशिष्ट सबग्राफसाठी तुमच्या API की तयार करा आणि व्यवस्थापित करा +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### ग्राफ नेटवर्कसह सबग्राफ सुसंगतता -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- खालीलपैकी कोणतीही वैशिष्ट्ये वापरू नयेत: - - ipfs.cat & ipfs.map - - गैर-घातक त्रुटी - - कलम करणे +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## आलेख प्रमाणीकरण -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## सबग्राफ आवृत्त्यांचे स्वयंचलित संग्रहण -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/mr/subgraphs/developing/developer-faq.mdx b/website/src/pages/mr/subgraphs/developing/developer-faq.mdx index 8578be282aad..4f3e183375b9 100644 --- a/website/src/pages/mr/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/mr/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. सबग्राफ म्हणजे काय? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. मी माझ्या सबग्राफशी संबंधित गिटहब खाते बदलू शकतो का? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -तुम्हाला सबग्राफ पुन्हा तैनात करावा लागेल, परंतु सबग्राफ आयडी (IPFS हॅश) बदलत नसल्यास, त्याला सुरुवातीपासून सिंक करण्याची गरज नाही. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -सबग्राफमध्‍ये, इव्‍हेंट नेहमी ब्लॉकमध्‍ये दिसण्‍याच्‍या क्रमाने संसाधित केले जातात, ते एकाधिक कॉन्ट्रॅक्टमध्‍ये असले किंवा नसले तरीही. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ When new dynamic data source are created, the handlers defined for dynamic data If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? होय! खालील आदेश वापरून पहा, "संस्था/सबग्राफनेम" च्या जागी त्याखालील संस्था प्रकाशित झाली आहे आणि तुमच्या सबग्राफचे नाव: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/mr/subgraphs/developing/introduction.mdx b/website/src/pages/mr/subgraphs/developing/introduction.mdx index 3123dd66f2a7..9b6155152843 100644 --- a/website/src/pages/mr/subgraphs/developing/introduction.mdx +++ b/website/src/pages/mr/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
diff --git a/website/src/pages/mr/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/managing/deleting-a-subgraph.mdx index cabf1261970a..b8c2330ca49d 100644 --- a/website/src/pages/mr/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/mr/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- क्युरेटर यापुढे सबग्राफवर सिग्नल करू शकणार नाहीत. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/mr/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/mr/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/mr/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 50c8077f371a..78b641e5ae0a 100644 --- a/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: विकेंद्रीकृत नेटवर्कवर सबग्राफ प्रकाशित करणे +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### प्रकाशित सबग्राफसाठी मेटाडेटा अपडेट करत आहे +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/mr/subgraphs/developing/subgraphs.mdx b/website/src/pages/mr/subgraphs/developing/subgraphs.mdx index 737fb1347ada..982e3dd36207 100644 --- a/website/src/pages/mr/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/mr/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: सबग्राफ ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
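To make the "query it via GraphQL" point concrete, here is a sketch of what a request against a published Subgraph's endpoint looks like. The `transfers` entity and its fields are hypothetical, purely for illustration:

```graphql
# Hypothetical query against a Subgraph's GraphQL endpoint:
# fetch the five most recent transfer entities, newest first.
{
  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
    id
    from
    to
    amount
  }
}
```

The `first`, `orderBy`, and `orderDirection` arguments are the standard pagination and sorting arguments of The Graph's GraphQL API; only the entity and field names here are invented.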
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## सबग्राफ लाइफसायकल -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/mr/subgraphs/explorer.mdx b/website/src/pages/mr/subgraphs/explorer.mdx index 6f30c3ea0ea3..afcc80c29f35 100644 --- a/website/src/pages/mr/subgraphs/explorer.mdx +++ b/website/src/pages/mr/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## सविश्लेषण -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Signal/Un-signal on subgraphs +- Signal/Un-signal on Subgraphs - चार्ट, वर्तमान उपयोजन आयडी आणि इतर मेटाडेटा यासारखे अधिक तपशील पहा -- Switch versions to explore past iterations of the subgraph -- GraphQL द्वारे सबग्राफ क्वेरी करा -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - कमाल डेलिगेशन क्षमता - इंडेक्सर उत्पादकपणे स्वीकारू शकणारी जास्तीत जास्त डेलिगेटेड स्टेक. वाटप किंवा बक्षिसे गणनेसाठी जास्तीचा वाटप केला जाऊ शकत नाही. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraphs Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. या विभागात तुमच्या निव्वळ इंडेक्सर रिवॉर्ड्स आणि नेट क्वेरी फीबद्दल तपशील देखील समाविष्ट असतील. 
तुम्हाला खालील मेट्रिक्स दिसतील: @@ -223,13 +223,13 @@ In the Delegators tab, you can find the details of your active and historical de ### Curating Tab -क्युरेशन टॅबमध्ये, तुम्ही सिग्नल करत असलेले सर्व सबग्राफ तुम्हाला सापडतील (अशा प्रकारे तुम्हाला क्वेरी शुल्क प्राप्त करण्यास सक्षम करते). सिग्नलिंगमुळे क्युरेटर्स इंडेक्सर्सना कोणते सबग्राफ मौल्यवान आणि विश्वासार्ह आहेत हे ठळकपणे दाखवू देते, अशा प्रकारे ते इंडेक्स केले जाणे आवश्यक असल्याचे संकेत देते. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. या टॅबमध्ये, तुम्हाला याचे विहंगावलोकन मिळेल: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Updated at date details ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/mr/subgraphs/guides/_meta.js b/website/src/pages/mr/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/mr/subgraphs/guides/_meta.js +++ b/website/src/pages/mr/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/mr/subgraphs/guides/arweave.mdx b/website/src/pages/mr/subgraphs/guides/arweave.mdx index 08e6c4257268..be076ab8f655 100644 --- a/website/src/pages/mr/subgraphs/guides/arweave.mdx +++ b/website/src/pages/mr/subgraphs/guides/arweave.mdx @@ -1,50 +1,50 @@ --- -title: Building Subgraphs on Arweave +title: Arweave वर सबग्राफ तयार करणे --- > Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! 
-In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +या मार्गदर्शकामध्ये, तुम्ही Arweave ब्लॉकचेन इंडेक्स करण्यासाठी सबग्राफ कसे तयार करावे आणि कसे तैनात करावे ते शिकाल. -## What is Arweave? +## Arweave काय आहे? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +Arweave प्रोटोकॉल विकसकांना कायमस्वरूपी डेटा संचयित करण्याची परवानगी देतो आणि Arweave आणि IPFS मधील मुख्य फरक आहे, जेथे IPFS मध्ये वैशिष्ट्याचा अभाव आहे; कायमस्वरूपी, आणि Arweave वर संचयित केलेल्या फायली बदलल्या किंवा हटवल्या जाऊ शकत नाहीत. -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check: +अनेक वेगवेगळ्या प्रोग्रामिंग भाषांमध्ये प्रोटोकॉल समाकलित करण्यासाठी Arweave ने आधीच असंख्य लायब्ररी तयार केल्या आहेत. अधिक माहितीसाठी तुम्ही तपासू शकता: - [Arwiki](https://arwiki.wiki/#/en/main) - [Arweave Resources](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## Arweave Subgraphs काय आहेत? The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). [Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. -## Building an Arweave Subgraph +## Arweave Subgraph तयार करणे -To be able to build and deploy Arweave Subgraphs, you need two packages: +Arweave Subgraphs तयार आणि तैनात करण्यात सक्षम होण्यासाठी, तुम्हाला दोन पॅकेजेसची आवश्यकता आहे: 1. 
`@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. 2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. -## Subgraph's components +## सबग्राफचे घटक There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +स्वारस्य असलेल्या डेटा स्रोतांची व्याख्या करते आणि त्यांची प्रक्रिया कशी करावी. Arweave हा एक नवीन प्रकारचा डेटा स्रोत आहे. ### 2. Schema - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +GraphQL वापरून तुमचा सबग्राफ इंडेक्स केल्यानंतर तुम्ही कोणता डेटा क्वेरी करू इच्छिता ते येथे तुम्ही परिभाषित करता. हे प्रत्यक्षात API च्या मॉडेलसारखेच आहे, जेथे मॉडेल विनंती मुख्य भागाची रचना परिभाषित करते. The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +जेव्हा तुम्ही ऐकत असलेल्या डेटा स्रोतांशी कोणीतरी संवाद साधते तेव्हा डेटा कसा पुनर्प्राप्त आणि संग्रहित केला जावा हे हे तर्कशास्त्र आहे. डेटा अनुवादित केला जातो आणि तुम्ही सूचीबद्ध केलेल्या स्कीमावर आधारित संग्रहित केला जातो. 
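As a sketch of how these three components fit together for an Arweave Subgraph, a `schema.graphql` entity like the following could back a block handler (the entity name and fields are illustrative, not taken from this guide):

```graphql
# schema.graphql — declares what can be queried after indexing.
# One hypothetical entity, storing a row per indexed Arweave block.
type Block @entity(immutable: true) {
  id: ID!            # e.g. the block's indep_hash
  height: BigInt!    # block height
  timestamp: BigInt! # block timestamp
}
```

A handler in `mapping.ts` would then create and save `Block` entities each time the manifest's `blockHandlers` fire, and those rows become queryable through GraphQL.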
During Subgraph development there are two key commands: @@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## सबग्राफ मॅनिफेस्ट व्याख्या The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: @@ -84,24 +84,24 @@ dataSources: - Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Arweave डेटा स्रोत पर्यायी source.owner फील्ड सादर करतात, जी Arweave वॉलेटची सार्वजनिक की आहे -Arweave data sources support two types of handlers: +Arweave डेटा स्रोत दोन प्रकारच्या हँडलरला समर्थन देतात: - `blockHandlers` - Run on every new Arweave block. No source.owner is required. - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` -> The source.owner can be the owner's address, or their Public Key. - -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> source.owner हा मालकाचा पत्ता किंवा त्यांची सार्वजनिक की असू शकतो. +> +> व्यवहार हे Arweave permaweb चे बिल्डिंग ब्लॉक्स आहेत आणि ते अंतिम वापरकर्त्यांनी तयार केलेल्या वस्तू आहेत. +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. 
-## Schema Definition +## स्कीमा व्याख्या Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -## AssemblyScript Mappings +## असेंबलीस्क्रिप्ट मॅपिंग The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -158,11 +158,11 @@ Once your Subgraph has been created on your Subgraph Studio dashboard, you can d graph deploy --access-token ``` -## Querying an Arweave Subgraph +## प्रश्न करत आहे Arweave सबग्राफ The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## उदाहरणे सबग्राफ Here is an example Subgraph for reference: @@ -174,19 +174,19 @@ Here is an example Subgraph for reference: No, a Subgraph can only support data sources from one chain/network. -### Can I index the stored files on Arweave? +### मी Arweave वर संग्रहित फाइल्स अनुक्रमित करू शकतो? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +सध्या, ग्राफ फक्त ब्लॉकचेन (त्याचे ब्लॉक्स आणि व्यवहार) म्हणून Arweave अनुक्रमित करत आहे. ### Can I identify Bundlr bundles in my Subgraph? -This is not currently supported. +हे सध्या समर्थित नाही. -### How can I filter transactions to a specific account? +### मी विशिष्ट खात्यातील व्यवहार कसे फिल्टर करू शकतो? -The source.owner can be the user's public key or account address. +source.owner वापरकर्त्याची सार्वजनिक की किंवा खाते पत्ता असू शकतो. -### What is the current encryption format? +### सध्याचे एन्क्रिप्शन स्वरूप काय आहे? Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. 
block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). diff --git a/website/src/pages/mr/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/mr/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..2135cd023def 100644 --- a/website/src/pages/mr/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/mr/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## सविश्लेषण -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. 
Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +To list the chains you've added, run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +किंवा ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3.
Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/mr/subgraphs/guides/enums.mdx b/website/src/pages/mr/subgraphs/guides/enums.mdx index 9f55ae07c54b..c2f2a41791f3 100644 --- a/website/src/pages/mr/subgraphs/guides/enums.mdx +++ b/website/src/pages/mr/subgraphs/guides/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## अतिरिक्त संसाधने For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
diff --git a/website/src/pages/mr/subgraphs/guides/grafting.mdx b/website/src/pages/mr/subgraphs/guides/grafting.mdx index d9abe0e70d2a..1fd0c6d49932 100644 --- a/website/src/pages/mr/subgraphs/guides/grafting.mdx +++ b/website/src/pages/mr/subgraphs/guides/grafting.mdx @@ -1,24 +1,24 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: करार बदला आणि त्याचा इतिहास ग्राफ्टिंगसह ठेवा --- In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. -## What is Grafting? +## ग्राफ्टिंग म्हणजे काय? Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. 
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- हे घटक प्रकार जोडते किंवा काढून टाकते +- हे घटक प्रकारातील गुणधर्म काढून टाकते +- हे घटक प्रकारांमध्ये नलेबल विशेषता जोडते +- हे नॉन-नलेबल विशेषतांना नलेबल विशेषतांमध्ये बदलते +- हे enums मध्ये मूल्ये जोडते +- हे इंटरफेस जोडते किंवा काढून टाकते +- कोणत्या घटक प्रकारांसाठी इंटरफेस लागू केला जातो ते बदलते -For more information, you can check: +अधिक माहितीसाठी, तुम्ही तपासू शकता: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) @@ -40,7 +40,7 @@ Grafting is a powerful feature that allows you to "graft" one Subgraph onto anot By adhering to these guidelines, you minimize risks and ensure a smoother migration process. -## Building an Existing Subgraph +## विद्यमान सबग्राफ तयार करणे Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: @@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h > Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). -## Subgraph Manifest Definition +## सबग्राफ मॅनिफेस्ट व्याख्या The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers.
See below for an example Subgraph manifest that you will use: @@ -83,7 +83,7 @@ dataSources: - The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. -## Grafting Manifest Definition +## ग्राफ्टिंग मॅनिफेस्ट व्याख्या Grafting requires adding two new items to the original Subgraph manifest: @@ -101,7 +101,7 @@ graft: The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting -## Deploying the Base Subgraph +## बेस सबग्राफ तैनात करणे 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` 2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +हे असे काहीतरी परत करते: ``` { @@ -140,9 +140,9 @@ It returns something like this: Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. -## Deploying the Grafting Subgraph +## ग्राफ्टिंग सबग्राफ तैनात करणे -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +ग्राफ्ट-बदली subgraph.yaml मध्ये नवीन करार पत्ता असेल. जेव्हा तुम्ही तुमचे dapp अपडेट करता, करार पुन्हा डिप्लॉय करता इत्यादी, तेव्हा असे होऊ शकते. 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` 2. Create a new manifest.
The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. @@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph- Congrats! You have successfully grafted a Subgraph onto another Subgraph. -## Additional Resources +## अतिरिक्त संसाधने If you want more experience with grafting, here are a few examples for popular contracts: diff --git a/website/src/pages/mr/subgraphs/guides/near.mdx b/website/src/pages/mr/subgraphs/guides/near.mdx index e78a69eb7fa2..4a183fca2e16 100644 --- a/website/src/pages/mr/subgraphs/guides/near.mdx +++ b/website/src/pages/mr/subgraphs/guides/near.mdx @@ -1,10 +1,10 @@ --- -title: Building Subgraphs on NEAR +title: NEAR वर सबग्राफ तयार करणे --- This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). -## What is NEAR? +## NEAR म्हणजे काय? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. @@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul Subgraphs are event-based, which means that they listen for and then process onchain events.
There are currently two types of handlers supported for NEAR Subgraphs: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- ब्लॉक हँडलर: हे प्रत्येक नवीन ब्लॉकवर चालवले जातात +- पावती हँडलर्स: निर्दिष्ट खात्यावर संदेश कार्यान्वित झाल्यावर प्रत्येक वेळी चालवले जातात [From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> प्रणालीमध्ये पावती ही एकमेव क्रिया करण्यायोग्य वस्तू आहे. जेव्हा आम्ही NEAR प्लॅटफॉर्मवर "व्यवहारावर प्रक्रिया करणे" बद्दल बोलतो, तेव्हा याचा अर्थ शेवटी "पावत्या लागू करणे" असा होतो. -## Building a NEAR Subgraph +## NEAR सबग्राफ तयार करणे `@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. @@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -### Subgraph Manifest Definition +### सबग्राफ मॅनिफेस्ट व्याख्या The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: @@ -85,16 +85,16 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +NEAR डेटा स्रोत दोन प्रकारच्या हँडलरला समर्थन देतात: - `blockHandlers`: run on every new NEAR block. No `source.account` is required. - `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient.
Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). -### Schema Definition +### स्कीमा व्याख्या Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -### AssemblyScript Mappings +### असेंबलीस्क्रिप्ट मॅपिंग The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -169,7 +169,7 @@ Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/g This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. -## Deploying a NEAR Subgraph +## NEAR सबग्राफ डिप्लॉय करणे Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). @@ -198,7 +198,7 @@ graph auth graph deploy ``` -### Local Graph Node (based on default configuration) +### स्थानिक ग्राफ नोड (डीफॉल्ट कॉन्फिगरेशनवर आधारित) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 @@ -216,21 +216,21 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node.
You can } ``` -### Indexing NEAR with a Local Graph Node +### स्थानिक ग्राफ नोडशी NEAR चे सूचीकरण करणे -Running a Graph Node that indexes NEAR has the following operational requirements: +NEAR ची अनुक्रमणिका देणारा ग्राफ नोड चालवण्यासाठी खालील ऑपरेशनल आवश्यकता आहेत: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- फायरहोस इंस्ट्रुमेंटेशनसह NEAR इंडेक्सर फ्रेमवर्क +- NEAR फायरहोस घटक +- फायरहोस एंडपॉइंट कॉन्फिगर केलेला ग्राफ नोड -We will provide more information on running the above components soon. +वरील घटक चालवण्याबाबत आम्ही लवकरच अधिक माहिती देऊ. -## Querying a NEAR Subgraph +## NEAR सबग्राफची क्वेरी करणे The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## उदाहरण सबग्राफ Here are some example Subgraphs for reference: @@ -240,7 +240,7 @@ Here are some example Subgraphs for reference: ## FAQ -### How does the beta work? +### बीटा कसे कार्य करते? NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! @@ -250,9 +250,9 @@ No, a Subgraph can only support data sources from one chain/network. ### Can Subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +सध्या, फक्त ब्लॉक आणि पावती ट्रिगर समर्थित आहेत. आम्ही एका निर्दिष्ट खात्यावर फंक्शन कॉलसाठी ट्रिगर तपासत आहोत. आम्‍हाला इव्‍हेंट ट्रिगरला सपोर्ट करण्‍यात देखील रस आहे, एकदा NEAR ला नेटिव्ह इव्‍हेंट सपोर्ट असेल.
-### Will receipt handlers trigger for accounts and their sub-accounts? +### पावती हँडलर खाती आणि त्यांच्या उप-खात्यांसाठी ट्रिगर करतील का? If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: @@ -264,11 +264,11 @@ accounts: ### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? -This is not supported. We are evaluating whether this functionality is required for indexing. +हे समर्थित नाही. अनुक्रमणिकेसाठी ही कार्यक्षमता आवश्यक आहे का याचे आम्ही मूल्यमापन करत आहोत. ### Can I use data source templates in my NEAR Subgraph? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +हे सध्या समर्थित नाही. अनुक्रमणिकेसाठी ही कार्यक्षमता आवश्यक आहे का याचे आम्ही मूल्यमापन करत आहोत. ### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? @@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. In the interim, y If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. 
-## References +## संदर्भ - [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/mr/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/mr/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..b6b043fa29f1 100644 --- a/website/src/pages/mr/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/mr/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## सविश्लेषण We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..52da13032a9c --- /dev/null +++ b/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. 
+ +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but entities composed from them cannot themselves use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e., you can’t mix regular event, call, or block handlers with Subgraph data sources in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## सुरु करूया + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block.
+ +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. 
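The guide above tells you to reference each source Subgraph as a data source of the composed Subgraph. A minimal sketch of what such a manifest entry might look like is below; the deployment ID, data source name, handler name, and entity names are placeholders, and the exact field set should be verified against the `specVersion: 1.3.0` manifest documentation:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # a Subgraph data source, not an onchain contract
    name: BlockTime # placeholder name for the block-time source Subgraph
    network: mainnet # must match the chain of every other source Subgraph
    source:
      address: 'QmSourceSubgraphDeploymentID' # placeholder deployment ID
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      kind: ethereum/events
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - BlockStats
      handlers:
        - handler: handleBlock # placeholder; triggered by Block entities from the source
          entity: Block
```

As noted above, redeploying a source Subgraph changes its deployment ID, so the `source.address` value must be updated before the composed Subgraph is redeployed.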
+ +## अतिरिक्त संसाधने + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/mr/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/mr/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..7007c6021580 100644 --- a/website/src/pages/mr/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/mr/subgraphs/guides/subgraph-debug-forking.mdx @@ -1,22 +1,22 @@ --- -title: Quick and Easy Subgraph Debugging Using Forks +title: फॉर्क्स वापरून जलद आणि सुलभ सबग्राफ डीबगिंग --- As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! -## Ok, what is it? +## ठीक आहे, ते काय आहे? **Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. -## What?! How? +## काय?! कसे? When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. 
That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. -## Please, show me some code! +## कृपया, मला काही कोड दाखवा! To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. @@ -46,31 +46,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. -The usual way to attempt a fix is: +निराकरणाचा प्रयत्न करण्याचा नेहमीचा मार्ग असा आहे: -1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). +1. मॅपिंग स्त्रोतामध्ये बदल करा, जो तुम्हाला विश्वास आहे की समस्या सोडवेल (जरी मला माहीत आहे की तसे होणार नाही). 2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +3. ते समक्रमित होण्याची प्रतीक्षा करा. +4. तो पुन्हा खंडित झाल्यास 1 वर परत जा, अन्यथा: हुर्रे! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue.
मॅपिंग स्त्रोतामध्ये बदल करा, जो तुम्हाला विश्वास आहे की समस्या सोडवेल. 2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +3. तो पुन्हा खंडित झाल्यास, 1 वर परत जा, अन्यथा: हुर्रे! -Now, you may have 2 questions: +आता, तुमच्याकडे 2 प्रश्न असू शकतात: -1. fork-base what??? -2. Forking who?! +1. fork-base काय??? +2. फोर्क करायचे कोणाला?! -And I answer: +आणि मी उत्तर देतो: 1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +2. फोर्क करणे सोपे आहे, घाम गाळण्याची गरज नाही: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 @@ -78,7 +78,7 @@ $ graph deploy --debug-fork --ipfs http://localhos Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! -So, here is what I do: +तर, मी काय करतो ते येथे आहे: 1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
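Putting the two steps above together, the forking workflow can be sketched end-to-end as follows. This is a hedged sketch, not a definitive invocation: the Postgres URL, RPC endpoint, subgraph name, and subgraph ID are placeholders you must replace with your own values, and the exact `graph-node` startup flags should be checked against the graph-node README for your version:

```bash
# 1. Start a local Graph Node with fork-base pointing at the remote store
#    (Postgres, RPC, and IPFS values below are placeholders).
cargo run -p graph-node --release -- \
  --postgres-url postgresql://graph:graph@localhost:5432/graph-node \
  --ethereum-rpc sepolia:https://sepolia.example-rpc.io/YOUR-KEY \
  --ipfs 127.0.0.1:5001 \
  --fork-base https://api.thegraph.com/subgraphs/id/

# 2. Deploy the patched Subgraph to that node, forking the failing deployment
#    and starting from the problematic block set in the manifest.
graph deploy <subgraph-name> \
  --debug-fork <subgraph-id> \
  --ipfs http://localhost:5001 \
  --node http://localhost:8020
```

Because `startBlock` is set to the problematic block, each iteration of the edit-deploy loop takes seconds instead of the hours a full resync would need.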
diff --git a/website/src/pages/mr/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/mr/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..55cf87cd0af1 100644 --- a/website/src/pages/mr/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/mr/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,10 +1,10 @@ --- -title: Safe Subgraph Code Generator +title: सुरक्षित सबग्राफ कोड जनरेटर --- [Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. -## Why integrate with Subgraph Uncrashable? +## Subgraph Uncrashable सह समाकलित का? - **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. @@ -16,11 +16,11 @@ title: Safe Subgraph Code Generator - The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. +- फ्रेमवर्कमध्ये एंटिटी व्हेरिएबल्सच्या गटांसाठी सानुकूल, परंतु सुरक्षित, सेटर फंक्शन्स तयार करण्याचा मार्ग (कॉन्फिग फाइलद्वारे) देखील समाविष्ट आहे. अशा प्रकारे वापरकर्त्याला जुना आलेख घटक लोड करणे/वापरणे अशक्य आहे आणि फंक्शनसाठी आवश्यक असलेले व्हेरिएबल सेव्ह करणे किंवा सेट करणे विसरणे देखील अशक्य आहे. 
- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. -Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. +ग्राफ CLI codegen कमांड वापरून Subgraph Uncrashable हा पर्यायी ध्वज म्हणून चालवला जाऊ शकतो. ```sh graph codegen -u [options] [] diff --git a/website/src/pages/mr/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/mr/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..d687370b93e6 100644 --- a/website/src/pages/mr/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/mr/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -74,7 +74,7 @@ graph deploy --ipfs-hash You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. 
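Sending a GraphQL query to a Subgraph's query URL, as described above, is a plain GraphQL-over-HTTP POST. A minimal sketch of assembling the request body follows; the `tokens` entity and its fields are invented for illustration, so substitute names from your own schema.

```python
import json

# Any GraphQL document; entity and fields are illustrative only.
query = """
{
  tokens(first: 5) {
    id
  }
}
"""

# GraphQL over HTTP is a POST whose JSON body carries the query
# (plus optional variables); send `body` to the query URL with any
# HTTP client.
body = json.dumps({"query": query, "variables": {}})
print(body[:20])
```

The same body shape works against both the Studio endpoint and the decentralized-network gateway endpoint.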
-#### Example +#### उदाहरण [CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### अतिरिक्त संसाधने - To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). - To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/mr/subgraphs/querying/best-practices.mdx b/website/src/pages/mr/subgraphs/querying/best-practices.mdx index 484f1a2d891a..db52212384b1 100644 --- a/website/src/pages/mr/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/mr/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. 
--- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/mr/subgraphs/querying/from-an-application.mdx b/website/src/pages/mr/subgraphs/querying/from-an-application.mdx index f867964cb39b..521a7717da49 100644 --- a/website/src/pages/mr/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/mr/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Querying from an Application +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
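The `id_in` pattern recommended above (one plural query instead of N singular ones) can be sketched with GraphQL variables. The `tokens` entity and `volume` field are assumptions standing in for whatever your schema defines.

```python
import json

# Batch many records into one request via the plural field plus a
# `where` filter, rather than issuing one query per id.
query = """
query ($ids: [ID!]) {
  tokens(where: { id_in: $ids }) {
    id
    volume
  }
}
"""

# Placeholder ids; real ones come from your application state.
variables = {"ids": ["0x01", "0x02", "0x03"]}
body = json.dumps({"query": query, "variables": variables})
print(len(variables["ids"]))  # -> 3
```

One round trip for all three records keeps you well under rate limits compared with three separate singular queries.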
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### 1 ली पायरी @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### 1 ली पायरी @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### 1 ली पायरी diff --git a/website/src/pages/mr/subgraphs/querying/graph-client/README.md b/website/src/pages/mr/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d39228c34c0f 100644 --- a/website/src/pages/mr/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/mr/subgraphs/querying/graph-client/README.md @@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## प्रारंभ करणे You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/mr/subgraphs/querying/graph-client/live.md b/website/src/pages/mr/subgraphs/querying/graph-client/live.md index e6f726cb4352..2139013e97d0 100644 --- a/website/src/pages/mr/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/mr/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## प्रारंभ करणे Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/mr/subgraphs/querying/graphql-api.mdx b/website/src/pages/mr/subgraphs/querying/graphql-api.mdx index c506e4c260a8..049248616399 100644 --- a/website/src/pages/mr/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/mr/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). 
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. 
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. 
This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/mr/subgraphs/querying/introduction.mdx b/website/src/pages/mr/subgraphs/querying/introduction.mdx index d33c11a8fd26..f395b4ad9b8d 100644 --- a/website/src/pages/mr/subgraphs/querying/introduction.mdx +++ b/website/src/pages/mr/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## सविश्लेषण -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. 
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx index 167edacef164..0cd0d779e8bb 100644 --- a/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## सविश्लेषण -API keys are needed to query subgraphs. 
They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/mr/subgraphs/querying/python.mdx b/website/src/pages/mr/subgraphs/querying/python.mdx index 020814827402..bfeabae0b868 100644 --- a/website/src/pages/mr/subgraphs/querying/python.mdx +++ b/website/src/pages/mr/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). 
It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). 
```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). 
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. 
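The two addressing schemes can be sketched as URL builders. The gateway host and the `subgraphs/id` path follow the example endpoint on this page; the `deployments/id` segment is an assumption mirroring that form, so verify it against the current gateway docs before relying on it.

```python
GATEWAY = "https://gateway.thegraph.com/api/{key}"


def by_subgraph_id(key: str, subgraph_id: str) -> str:
    """Tracks the latest published version; may briefly serve an
    older version while a new one syncs."""
    return f"{GATEWAY.format(key=key)}/subgraphs/id/{subgraph_id}"


def by_deployment_id(key: str, deployment_id: str) -> str:
    """Pins queries to one immutable version (the IPFS hash of the
    compiled manifest); must be updated manually on each release.
    Path segment assumed by analogy with `subgraphs/id`."""
    return f"{GATEWAY.format(key=key)}/deployments/id/{deployment_id}"


# Placeholder key and ids, for illustration only.
print(by_subgraph_id("my-api-key", "SubgraphId123"))
```

The trade-off in the docstrings is the one this page describes: stability of the URL (Subgraph ID) versus full control over which version answers (Deployment ID).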
-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/mr/subgraphs/quick-start.mdx b/website/src/pages/mr/subgraphs/quick-start.mdx index 586b37afa265..b14954bc11a4 100644 --- a/website/src/pages/mr/subgraphs/quick-start.mdx +++ b/website/src/pages/mr/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: क्विक स्टार्ट --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. 
आलेख CLI स्थापित करा @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. 
- **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -तुमचा सबग्राफ सुरू करताना काय अपेक्षा करावी याच्या उदाहरणासाठी खालील स्क्रीनशॉट पहा: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. 
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -तुमचा सबग्राफ लिहिल्यानंतर, खालील आदेश चालवा: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. 
The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. 
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.

-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.

#### Publishing with Subgraph Studio

-To publish your subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard.

-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png)
+![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png)

-Select the network to which you would like to publish your subgraph.
+Select the network to which you would like to publish your Subgraph.

#### Publishing from the CLI

-As of version 0.73.0, you can also publish your subgraph with the Graph CLI.
+As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.

Open the `graph-cli`.

@@ -157,32 +157,32 @@ graph publish
```
````

-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

![cli-ui](/img/cli-ui.png)

To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). 
+For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/mr/substreams/developing/dev-container.mdx b/website/src/pages/mr/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/mr/substreams/developing/dev-container.mdx +++ b/website/src/pages/mr/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. 
For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/mr/substreams/developing/sinks.mdx b/website/src/pages/mr/substreams/developing/sinks.mdx index 5bea8dabfb0f..873e20981407 100644 --- a/website/src/pages/mr/substreams/developing/sinks.mdx +++ b/website/src/pages/mr/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/mr/substreams/developing/solana/account-changes.mdx b/website/src/pages/mr/substreams/developing/solana/account-changes.mdx index 6170435942de..e37f80ee352a 100644 --- a/website/src/pages/mr/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/mr/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). 
If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).

> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.

diff --git a/website/src/pages/mr/substreams/developing/solana/transactions.mdx b/website/src/pages/mr/substreams/developing/solana/transactions.mdx
index 42b225167fb7..79dd7c6b24ea 100644
--- a/website/src/pages/mr/substreams/developing/solana/transactions.mdx
+++ b/website/src/pages/mr/substreams/developing/solana/transactions.mdx
@@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi

## Step 3: Load the Data

-To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink.
+To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink.

### सबग्राफ

1.
Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/mr/substreams/introduction.mdx b/website/src/pages/mr/substreams/introduction.mdx index f29771cc4a59..f1625a7f69dc 100644 --- a/website/src/pages/mr/substreams/introduction.mdx +++ b/website/src/pages/mr/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/mr/substreams/publishing.mdx b/website/src/pages/mr/substreams/publishing.mdx index ea2846d412ae..b662fc083c98 100644 --- a/website/src/pages/mr/substreams/publishing.mdx +++ b/website/src/pages/mr/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? 
-A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/mr/supported-networks.mdx b/website/src/pages/mr/supported-networks.mdx index 7ae7ff45350a..ef2c28393033 100644 --- a/website/src/pages/mr/supported-networks.mdx +++ b/website/src/pages/mr/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). 
## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/mr/token-api/_meta-titles.json b/website/src/pages/mr/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/mr/token-api/_meta-titles.json +++ b/website/src/pages/mr/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/mr/token-api/_meta.js b/website/src/pages/mr/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/mr/token-api/_meta.js +++ b/website/src/pages/mr/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/mr/token-api/faq.mdx b/website/src/pages/mr/token-api/faq.mdx new file mode 100644 index 000000000000..d7683aa77768 --- /dev/null +++ b/website/src/pages/mr/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. 
+ +## सामान्य + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. 
+
### Is there a known list of LLMs that work with the API?

Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server.

Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter).

### Where can I find the MCP client?

You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client).

## Advanced Topics

### I'm getting 403/401 errors. What's wrong?

Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.

### Are there rate limits or usage costs?

During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.

### What networks are supported, and how do I specify them?

You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.

### Why do I only see 10 results? How can I get more data?

Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed).
For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? 
+
While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`).

### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this?

For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`.

### Is the Token API part of The Graph's GraphQL service?

No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints.

### Do I need to use MCP or tools like Claude, Cline, or Cursor?

No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
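Two of the FAQ answers above (the top-level `data` array and string-encoded amounts) are easy to get wrong when parsing. A minimal client-side sketch; the field names `symbol`, `amount`, and `decimals` are illustrative assumptions, not a documented schema:

```javascript
// Sketch of parsing a Token API response, based on the FAQ answers above:
// results are wrapped in a top-level `data` array, and token amounts arrive
// as strings to avoid precision loss. Field names here are assumptions.
const response = {
  data: [{ symbol: "WETH", amount: "123456789012345678901", decimals: 18 }],
};

// Always index into `data`, even when expecting a single item.
const results = response.data;

// Convert string amounts with BigInt before doing arithmetic; they can
// exceed JavaScript's safe integer range (Number.MAX_SAFE_INTEGER).
const raw = BigInt(results[0].amount);
const whole = raw / 10n ** BigInt(results[0].decimals);
console.log(whole.toString()); // "123"
```

Note that BigInt division truncates toward zero; keep the raw string (or the BigInt) around if you need the fractional part for display.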
diff --git a/website/src/pages/mr/token-api/mcp/claude.mdx b/website/src/pages/mr/token-api/mcp/claude.mdx index 0da8f2be031d..bd3781333707 100644 --- a/website/src/pages/mr/token-api/mcp/claude.mdx +++ b/website/src/pages/mr/token-api/mcp/claude.mdx @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## कॉन्फिगरेशन Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/mr/token-api/mcp/cline.mdx b/website/src/pages/mr/token-api/mcp/cline.mdx index ab54c0c8f6f0..970df7997b52 100644 --- a/website/src/pages/mr/token-api/mcp/cline.mdx +++ b/website/src/pages/mr/token-api/mcp/cline.mdx @@ -10,9 +10,9 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. - The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## कॉन्फिगरेशन Create or edit your `cline_mcp_settings.json` file. 
diff --git a/website/src/pages/mr/token-api/mcp/cursor.mdx b/website/src/pages/mr/token-api/mcp/cursor.mdx index 658108d1337b..a243820cf998 100644 --- a/website/src/pages/mr/token-api/mcp/cursor.mdx +++ b/website/src/pages/mr/token-api/mcp/cursor.mdx @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## कॉन्फिगरेशन Create or edit your `~/.cursor/mcp.json` file. diff --git a/website/src/pages/mr/token-api/quick-start.mdx b/website/src/pages/mr/token-api/quick-start.mdx index 4653c3d41ac6..427bd0f2a59b 100644 --- a/website/src/pages/mr/token-api/quick-start.mdx +++ b/website/src/pages/mr/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: क्विक स्टार्ट --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/nl/about.mdx b/website/src/pages/nl/about.mdx index ab5a9033cdac..7fde3b3d507d 100644 --- a/website/src/pages/nl/about.mdx +++ b/website/src/pages/nl/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. 
The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. 
![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The flow follows these steps: 1. A dapp adds data to Ethereum through a transaction on a smart contract. 2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## Next Steps -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. 
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/nl/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/nl/archived/arbitrum/arbitrum-faq.mdx index ee8b300ccb87..0e19e7062073 100644 --- a/website/src/pages/nl/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/nl/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Veiligheid overgenomen van Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph gemeenschap heeft vorig jaar besloten om door te gaan met Arbitrum na de uitkomst van [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussie. 
@@ -39,7 +39,7 @@ Om gebruik te maken van The Graph op L2, gebruik deze keuzeschakelaar om te wiss ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Als een subgraph ontwikkelaar, data consument, Indexer, Curator, of Delegator, wat moet ik nu doen? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Alles is grondig getest, en een eventualiteiten plan is gemaakt en klaargezet voor een veilige en naadloze transitie. Details kunnen [hier](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20) gevonden worden. -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-faq.mdx index 2c7df434e45c..846ddd61273d 100644 --- a/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con De L2 Transfer Tools gebruiken Arbitrum's eigen mechanismen op berichten te sturen van L1 naar L2. 
Dit mechanisme heet een "retryable ticket" en is gebruikt door alle eigen token bruggen, inclusief de Arbitrum GRT brug. Je kunt meer lezen over retryable tickets in de [Arbiturm docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Wanneer je jouw activa (subgraph, inzet, delegatie of curatie) overdraagt naar L2, wordt er een bericht via de Arbitrum GRT-brug gestuurd dat een herhaalbaar ticket in L2 aanmaakt. De overdrachtstool bevat een bepaalde hoeveelheid ETH in de transactie, die gebruikt wordt om 1) te betalen voor de creatie van de ticket en 2) te betalen voor de gas voor de uitvoer van de ticket in L2. Omdat de gasprijzen kunnen variëren in de tijd tot het ticket gereed is om in L2 uit te voeren, is het mogelijk dat deze automatische uitvoerpoging mislukt. Als dat gebeurt, zal de Arbitrum-brug het herhaalbare ticket tot 7 dagen lang actief houden, en iedereen kan proberen het ticket te "inlossen" (wat een portemonnee met wat ETH dat naar Arbitrum is overgebracht, vereist). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Dit is wat we de "Bevestigen"-stap noemen in alle overdrachtstools - deze zal in de meeste gevallen automatisch worden uitgevoerd, omdat de automatische uitvoering meestal succesvol is, maar het is belangrijk dat je terugkeert om te controleren of het is gelukt. 
Als het niet lukt en er zijn geen succesvolle herhaalpogingen in 7 dagen, zal de Arbitrum-brug het ticket verwerpen, en je activa (subgraph, inzet, delegatie of curatie) zullen verloren gaan en kunnen niet worden hersteld. De kernontwikkelaars van The Graph hebben een bewakingssysteem om deze situaties te detecteren en proberen de tickets in te lossen voordat het te laat is, maar uiteindelijk ben jij verantwoordelijk om ervoor te zorgen dat je overdracht op tijd is voltooid. Als je problemen hebt met het bevestigen van je transactie, neem dan contact op via [dit formulier](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) en de kernontwikkelaars zullen er zijn om je te helpen. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### Ik ben mijn delegatie/inzet/curatie overdracht begonnen en ik ben niet zeker of deze door is gekomen naar L2, hoe kan ik bevestigen dat deze correct is overgedragen? @@ -36,43 +36,43 @@ Als je de L1 transactie-hash hebt (die je kunt vinden door naar de recente trans ## Subgraph Overdracht -### Hoe verplaats ik mijn subgraphs? 
+### How do I transfer my Subgraph? -Om je subgraph te verplaatsen, moet je de volgende stappen volgen: +To transfer your Subgraph, you will need to complete the following steps: 1. Start de overdracht op het Ethereum mainnet 2. Wacht 20 minuten op bevestiging -3. Bevestig subgraph overdracht op Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Maak het publiceren van subrgaph op Arbitrum af +4. Finish publishing Subgraph on Arbitrum 5. Update Query URL (aanbevolen) -\*Let op dat je de overdracht binnen 7 dagen moet bevestigen, anders kan je subgraph verloren gaan. In de meeste gevallen zal deze stap automatisch verlopen, maar een handmatige bevestiging kan nodig zijn als er een gasprijsstijging is op Arbitrum. Als er tijdens dit proces problemen zijn, zijn er bronnen beschikbaar om te helpen: neem contact op met de ondersteuning via support@thegraph.com of op [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Waarvandaan moet ik mijn overdracht vanaf starten? -Je kan je overdracht starten vanaf de [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) of elke subgraph details pagina. Klik de "Transfer Subgraph" knop in de subgraph details pagina om de overdracht te starten. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. 
-### Hoe lang moet ik wachten to mijn subrgaph overgedragen is +### How long do I need to wait until my Subgraph is transferred De overdracht duurt ongeveer 20 minuten. De Arbitrum brug werkt momenteel op de achtergrond om de brug overdracht automatisch te laten voltooien. In sommige gevallen kunnen gaskosten pieken en zul je de overdracht opnieuw moeten bevestigen. -### Is mijn subgraph nog te ontdekken nadat ik het naar L2 overgedragen heb? +### Will my Subgraph still be discoverable after I transfer it to L2? -Jouw subgraph zal alleen te ontdekken zijn op het netwerk waarnaar deze gepubliceerd is. Bijvoorbeeld, als jouw subgraph gepubliceerd is op Arbitrum One, dan kan je deze alleen vinden via de Explorer op Arbitrum One en zul je deze niet kunnen vinden op Ethereum. Zorg ervoor dat je Arbitrum One hebt geselecteerd in de netwerkschakelaar bovenaan de pagina om er zeker van te zijn dat je op het juiste netwerk bent.  Na de overdracht zal de L1 subgraph als verouderd worden weergegeven. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Moet mijn subgraph gepubliceerd zijn om deze te kunnen overdragen? +### Does my Subgraph need to be published to transfer it? -Om gebruik te maken van de subgraph transfer tool, moet jouw subgraph al gepubliceerd zijn op het Ethereum mainnet en moet het enige curatie-signalen hebben die eigendom zijn van de wallet die de subgraph bezit. Als jouw subgraph nog niet is gepubliceerd, wordt het aanbevolen om het direct op Arbitrum One te publiceren - de bijbehorende gas fees zullen aanzienlijk lager zijn. 
Als je een gepubliceerde subgraph wilt overdragen maar het eigenaarsaccount heeft nog geen enkel curatie-signalen, kun je een klein bedrag signaleren (bv.: 1 GRT) vanaf dat account; zorg ervoor dat je "auto-migrating" signalen kiest. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Wat gebeurt er met de Ethereum mainnet versie van mijn subgraph nadat ik overdraag naar Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Nadat je je subgraph naar Arbitrum hebt overgezet, zal de versie op het Ethereum mainnet als verouderd worden beschouwd. We raden aan om je query URL binnen 48 uur bij te werken. Er is echter een overgangsperiode waardoor je mainnet URL nog steeds werkt, zodat ondersteuning voor externe dapps kan worden bijgewerkt. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Nadat ik overgedragen heb, moet ik opnieuw publiceren op Arbitrum? @@ -80,21 +80,21 @@ Na de overdracht periode van 20 minuten, zul je de overdracht moeten bevestigen ### Zal mijn eindpunt downtime ervaren tijdens het opnieuw publiceren? 
-Het is onwaarschijnlijk, maar mogelijk om een korte downtime te ervaren afhankelijk van welke Indexers de subgraph op L1 ondersteunen en of zij blijven indexen totdat de subgraph volledig ondersteund wordt op L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Is het publiceren en versiebeheer hetzelfde op L2 als Ethereum mainnet? -Ja. Selecteer Arbiturm One als jou gepubliceerde netwerk tijdens het publiceren in Subrgaph Studio. In de studio, de laatste endpoint die beschikbaar is zal wijzen naar de meest recentelijk bijgewerkte versie van de subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Zal mijn subgraphs curatie mee verplaatsen met mijn subgraph? +### Will my Subgraph's curation move with my Subgraph? -Als je gekozen hebt voor auto-migrating signal, dan zal 100% van je eigen curatie mee verplaatsen met jouw subgraph naar Arbitrum One. Alle curatie signalen van de subgraph zullen worden omgezet naar GRT tijdens de overdracht en alle GRT die corresponderen met jouw curatie signaal zullen worden gebruikt om signalen te minten op de L2 subgraph. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Andere curators kunnen kiezen of ze hun deel van GRT kunnen opnemen, of overdragen naar L2 om signalen te minten op dezelfde subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. 
-### Kan ik nadat ik mijn subgraph overgedragen heb deze weer terug overdragen naar Ethereum mainnet? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Wanneer overgedragen, zal jouw Ethereum mainnet versie van deze subgraph als verouderd worden beschouwd. Als je terug wilt gaan naar het mainnet, zul je deze opnieuw moeten implementeren en publiceren op het mainnet. Echter, het wordt sterk afgeraden om terug naar het Ethereum mainnet over te dragen gezien index beloningen uiteindelijk op Arbitrum One zullen worden verdeeld. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Waarom heb ik gebrugd ETH nodig om mijn transactie te voltooien? @@ -206,19 +206,19 @@ Om je curatie over te dragen, moet je de volgende stappen volgen: \*indien nodig - bv. als je een contract adres gebruikt hebt. -### Hoe weet ik of de subgraph die ik cureer verplaatst is naar L2? +### How will I know if the Subgraph I curated has moved to L2? -Bij het bekijken van de details pagina van de subgraph zal er een banner verschijnen om je te laten weten dat deze subgraph is overgedragen. Je kunt de instructies volgen om je curatie over te zetten. Deze informatie is ook te vinden op de detailspagina van elke subgraph die is overgezet. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Wat als ik niet mijn curatie wil overdragen naar L2? -Wanneer een subgraph is verouderd, heb je de optie om je signaal terug te trekken. 
Op dezelfde manier, als een subgraph naar L2 is verhuisd, kun je ervoor kiezen om je signaal op het Ethereum-mainnet terug te trekken of het signaal naar L2 te sturen. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Hoe weet ik of mijn curatie succesvol is overgedragen? Signaal details zullen toegankelijk zijn via Explorer ongeveer 20 minuten nadat de L2 transfer tool is gestart. -### Kan ik mijn curatie overdragen op meer dan een subgraph per keer? +### Can I transfer my curation on more than one Subgraph at a time? Op dit moment is er geen bulk overdracht optie. @@ -266,7 +266,7 @@ Het duurt ongeveer 20 minuten voordat de L2-overdrachtstool je inzet heeft overg ### Moet ik indexeren op Arbitrum voordat ik mijn inzet overdraag? -Je kunt je inzet effectief overdragen voordat je indexing opzet, maar je zult geen beloningen kunnen claimen op L2 totdat je toewijst aan subgraphs op L2, ze indexeert en POI's presenteert. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Kunnen Delegators hun delegatie overdragen voordat ik mijn index inzet overdraag? diff --git a/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-guide.mdx index 67a7011010e7..d8828c547837 100644 --- a/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph heeft het eenvoudig gemaakt om naar L2 op Arbitrum One over te stappen Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). 
The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Hoe zet je je subgraph over naar Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Voordelen van het overzetten van uw subgraphs +## Benefits of transferring your Subgraphs De community en ontwikkelaars van The Graph hebben [zich voorbereid](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) op de transitie naar Arbitrum gedurende het afgelopen jaar. Arbitrum, een layer 2 of "L2" blockchain, erft de beveiliging van Ethereum maar biedt aanzienlijk lagere gas fees. -Wanneer je je subgraph publiceert of bijwerkt naar the Graph Network, interacteer je met smart contracts op het protocol en dit vereist het betalen van gas met ETH. Door je subgraphs naar Arbitrum te verplaatsen, zullen eventuele toekomstige updates aan de subgraph veel lagere gas fees vereisen. De lagere kosten, en het feit dat de curatie bonding curves op L2 vlak zijn, maken het ook makkelijker voor andere curatoren om te cureren op uw subgraph, waardoor de beloningen voor indexeerders op uw subgraph toenemen. Deze omgeving met lagere kosten maakt het ook goedkoper voor indexeerders om de subgraph te indexeren en query's te beantwoorden. Indexeringsbeloningen zullen op Arbitrum toenemen en op Ethereum mainnet afnemen in de komden maanden, dus meer en meer indexeerders zullen hun GRT overzetten en hun operaties op L2 opzetten. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. 
This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Begrijpen wat er gebeurt met signalen, de L1 subgraph en query URL's +## Understanding what happens with signal, your L1 Subgraph and query URLs -Het overzetten van een subgraph naar Arbitrum gebruikt de Arbitrum GRT brug, die op zijn beurt de natuurlijke Arbitrum brug gebruikt, om de subgraph naar L2 te sturen. De "transfer" zal de subgraph op mainnet verwijderen en de informatie versturen om de subgraph op L2 opnieuw te creëren met de bridge. Het zal ook de gesignaleerde GRT van de eigenaar van de subgraph bevatten, wat meer dan nul moet zijn voor de brug om de overdracht te accepteren. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Wanneer je kiest om de subgraph over te dragen, zal dit alle curatie van de subgraph omzetten in GRT. Dit staat gelijk aan het "degraderen" van de subgraph op mainnet. De GRT die overeenkomt met je curatie zal samen met de subgraph naar L2 worden gestuurd, waar ze zullen worden gebruikt om signaal namens u te munten. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. 
-Andere curatoren kunnen kiezen of ze hun fractie van GRT willen opnemen, of het ook naar L2 willen overzetten om signaal op dezelfde subgraph te munten. Als een eigenaar van een subgraph hun subgraph niet naar L2 overzet en handmatig verwijderd via een contract call, dan zullen curatoren worden genotificeerd en zullen ze in staat zijn om hun curatie op te nemen. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Zodra de subgraph is overgedragen, aangezien alle curatie is omgezet in GRT, zullen indexeerders geen beloningen meer ontvangen voor het indexeren van de subgraph. Er zullen echter indexeerders zijn die 1) overgedragen subgraphs 24 uur blijven ondersteunen, en 2) onmiddelijk beginnen met het indexeren van de subgraph op L2. Aangezien deze indexeerders de subgraph al hebben geïndexeerd, zou er geen noodzaak moeten zijn om te wachten tot de subgraph is gesynchroniseerd, en het zal mogelijk zijn om de L2 subgraph bijna onmiddelijk te queryen. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Query's naar de L2 subgraph moeten worden gedaan naar een andere URL (op `arbitrum-gateway.thegraph.com`), maar het L1 URL zal minimaal 48 uur blijven werken. 
Daarna zal de L1 gateway query's doorsturen naar de L2 gateway (voor enige tijd), maar dit zal latentie toevoegen dus het wordt aanbevolen om al uw query's zo snel mogelijk naar de nieuwe URL over te schakelen. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Jouw L2 wallet kiezen -Wanneer je je subgraph op mainnet publiceerde, gebruikte je een verbonden wallet om de subgraph te creëren, en deze wallet bezit de NFT die deze subgraph vertegenwoordigt en dit zorgt er voor dat je updates kunt publiceren. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Bij het overzetten van de subgraph naar Arbitrum, kunt u een andere wallet kiezen die deze subgraph NFT op L2 zal bezitten. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Als je een "reguliere" wallet gebruikt zoals MetaMask (een Externally Owned Account of EOA, d.w.z. een wallet die geen smart contract is), dan is dit optioneel en wordt het aanbevolen om dezelfde wallet te gebruiken als in L1. -Als je een smart contract wallet gebruikt, zoals een multisig (bijv. een Safe) dan is het kiezen van een ander L2 wallet adres verplicht, aangezien het waarschijnlijk is dat de multisig alleen op mainnet bestaat en je geen transacties op Arbitrum kunt maken met deze wallet. Als je een smart contract wallet of multisig wilt blijven gebruiken, maak dan een nieuwe wallet aan op Arbitrum en gebruik het adres ervan als de L2 eigenaar van jouw subgraph. +If you're using a smart contract wallet, like a multisig (e.g. 
a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Het is erg belangrijk om een wallet adres te gebruiken dat u controleert, en dat transacties op Arbitrum kan maken. Anders zal de subgraph verloren gaan en kan niet worden hersteld.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Voorbereiden op de overdracht: ETH verplaatsen van L1 naar L2 -Het overzetten van de subgraph houdt in dat je een transactie verstuurt via de brug, en vervolgens een andere transactie uitvoert op Arbitrum. De eerste transactie gebruikt ETH op mainnet, en bevat wat ETH om te betalen voor gas wanneer het op L2 wordt ontvangen. Echter, als dit onvoldoende is, zul je de transactie opnieuw moeten proberen en betalen voor het gas direct op L2 (dit is "Stap 3: De overdracht bevestigen" hieronder). Deze stap **moet worden uitgevoerd binnen 7 dagen na het starten van de overdracht**. Bovendien, de tweede transactie ("Stap 4: De overdracht op L2 afronden") zal direct op Arbitrum worden gedaan. Om deze redenen, zul je wat ETH nodig hebben op een Arbitrum wallet. Als je een multisig of smart contract wallet gebruikt, zal de ETH in de reguliere (EOA) wallet moeten zijn die je gebruikt om de transacties uit te voeren, niet op de multisig wallet zelf. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. 
However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. Je kunt ETH kopen op sommige exchanges en direct naar Arbitrum opnemen, of je kunt de Arbitrum bridge gebruiken om ETH van een mainnet wallet naar L2 te sturen: [bridge.arbitrum.io](http://bridge.arbitrum.io). Aangezien de gasprijzen op Arbitrum lager zijn, zou u slechts een kleine hoeveelheid nodig moeten hebben. Het wordt aanbevolen om te beginnen met een lage drempel (e.g. 0.1 ETH) voor uw transactie om te worden goedgekeurd. 
-## Het vinden van de Transfer Tool voor subgraphs +## Finding the Subgraph Transfer Tool -Je kunt de L2 Transfer Tool vinden als je naar de pagina van je subgraph kijkt in de Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -Het is ook beschikbaar in de Explorer als je verbonden bent met de wallet die een subgraph bezit en op de pagina van die subgraph in de Explorer kijkt: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Door op de knop 'Transfer to L2' te klikken, wordt de Transfer Tool geopend waar ## Stap 1: Het transfer proces starten -Voordat je met het transfer proces begint, moet je beslissen welk adres de subgraph op L2 zal bezitten (zie "Je L2 portemonnee kiezen" hierboven), en het wordt sterk aanbevolen om al wat ETH voor gas op Arbitrum te hebben (zie "Voorbereiden op de overdracht: ETH verplaatsen van L1 naar L2" hierboven). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Let ook op dat het overzetten van de subgraph vereist dat je een hoeveelheid signaal groter dan nul op de subgraph hebt met dezelfde account die de subgraph bezit; als je nog geen signaal op de subgraph hebt, moet je een klein beetje curatie toevoegen (een kleine hoeveelheid zoals 1 GRT zou voldoende zijn). +Also please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph, you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Na het openen van de Transfer Tool, kun je het adres van de L2 wallet invoeren in het veld "Receiving wallet address" - **zorg ervoor dat je het juiste adres hier invoert**. Door op 'Transfer Subgraph' te klikken, wordt je gevraagd de transactie op je wallet uit te voeren (let op dat er wel wat ETH in je wallet zit om te betalen voor L2 gas); dit zal de transfer initiëren en je L1 subgraph verwijderen (zie "Begrijpen wat er gebeurt met signalen, de L1 subgraph en query URL's" hierboven voor meer details over wat er achter de schermen gebeurt). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -Als je deze stap uitvoert, **zorg ervoor dat je doorgaat tot het voltooien van stap 3 in minder dan 7 dagen, of de subgraph en je signaal GRT zullen verloren gaan.** Dit komt door hoe L1-L2 berichtgeving werkt op Arbitrum: berichten die via de bridge worden verzonden, zijn "retry-able tickets" die binnen 7 dagen uitgevoerd moeten worden, en de initiële uitvoering zou een nieuwe poging nodig kunnen hebben als er pieken zijn in de prijs voor gas op Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. 
![Start the transfer to L2](/img/startTransferL2.png) -## Stap 2: Wachten tot de transfer van de subgraph naar L2 voltooid is +## Step 2: Waiting for the Subgraph to get to L2 -Nadat je de transfer gestart bent, moet het bericht dat je L1-subgraph naar L2 stuurt, via de Arbitrum brug worden doorgestuurd. Dit duurt ongeveer 20 minuten (de brug wacht tot het mainnet block dat de transactie bevat "veilig" is van potentiële chain reorganisaties). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Zodra deze wachttijd voorbij is, zal Arbitrum proberen de transfer automatisch uit te voeren op de L2 contracten. @@ -80,7 +80,7 @@ Zodra deze wachttijd voorbij is, zal Arbitrum proberen de transfer automatisch u ## Stap 3: De transfer bevestigen -In de meeste gevallen zal deze stap automatisch worden uitgevoerd aangezien de L2 gas kosten die bij stap 1 zijn inbegrepen, voldoende zouden moeten zijn om de transactie die de subgraph op de Arbitrum contracten ontvangt, uit te voeren. In sommige gevallen kan het echter zo zijn dat een piek in de gasprijzen op Arbitrum ervoor zorgt dat deze automatische uitvoering mislukt. In dat geval zal het "ticket" dat je subgraph naar L2 stuurt, in behandeling blijven en is nodig het binnen 7 dagen nogmaals te proberen. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. 
Als dit het geval is, moet je verbinding maken met een L2 wallet die wat ETH op Arbitrum heeft, je walletnetwerk naar Arbitrum overschakelen en op "Bevestig Transfer" klikken om de transactie opnieuw te proberen. @@ -88,33 +88,33 @@ Als dit het geval is, moet je verbinding maken met een L2 wallet die wat ETH op ## Stap 4: De transfer op L2 afronden -Na de vorige stappen zijn je subgraph en GRT ontvangen op Arbitrum, maar de subgraph is nog niet gepubliceerd. Je moet verbinding maken met de L2 wallet die je hebt gekozen als ontvangende wallet, je walletnetwerk naar Arbitrum overschakelen en op "Publiceer Subgraph" klikken +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Dit zal de subgraph publiceren zodat Indexeerders die op Arbitrum actief zijn, deze kunnen indexeren. Het zal ook curatie signaal munten met de GRT die van L1 zijn overgedragen. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Stap 5: De query-URL bijwerken -Je subgraph is succesvol overgedragen naar Arbitrum! Om query's naar de subgraph te sturen, kun je deze nieuwe URL gebruiken: +Your Subgraph has been successfully transferred to Arbitrum!
To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Let op dat de subgraph ID op Arbitum anders zal zijn dan degene die je op mainnet had, maar je kunt deze altijd vinden op de Explorer of in de Studio. Zoals hierboven vermeld (zie "Begrijpen wat er gebeurt met signalen, de L1 subgraph en query URL's") zal de oude L1-URL nog een korte tijd worden ondersteund, maar je zou zo snel mogelijk al je query's naar het nieuwe adres moeten overschakelen zodra de subgraph op L2 is gesynchroniseerd. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## Hoe je je curatie signaal naar Arbitrum (L2) overzet -## Begrijpen wat er gebeurt met curatie bij subgraph transfers naar L2 +## Understanding what happens to curation on Subgraph transfers to L2 -Wanneer de eigenaar van een subgraph een subgraph naar Arbitrum verplaatst, wordt al het signaal van de subgraph tegelijkertijd omgezet in GRT. Dit is van toepassing op "automatisch gemigreerd" signaal, dus signaal dat niet specifiek is voor een subgraph versie, maar automatisch op de nieuwste versie van de subgraph signaleerd. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -De conversie van signaal naar GRT is hetzelfde als wat zou gebeuren als de eigenaar van de subgraph de subgraph van L1 zou verwijderen.
Wanneer de subgraph wordt verwijderd of verplaatst wordt naar L2, wordt al het curatie signaal tegelijkertijd "verbrand" (met behulp van de curation bonding curve) en wordt de GRT vastgehouden door het GNS smart contract (dat is het contract dat subgraph upgrades en automatisch gemigreerd signaal afhandeld). Elke Curator op die subgraph heeft daarom recht op die GRT naar rato van het aantal aandelen dat ze voor de subgraph hadden. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph on L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Een deel van de GRT, dat behoort tot de eigenaar van de subgraph, wordt samen met de subgraph naar L2 gestuurd. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -Op dit punt zal de gesignaleerde GRT niet langer query kosten verzamelen, dus curatoren kunnen kiezen om hun GRT op te nemen of het naar dezelfde subgraph op L2 over te dragen, waar het gebruikt kan worden om nieuw curatie signaal te creëren. Er is geen haast bij, aangezien de GRT voor onbepaalde tijd kan worden bewaard en iedereen krijgt een hoeveelheid naar rato van hun aandelen, ongeacht wanneer ze het doen. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
## Jouw L2 wallet kiezen @@ -130,9 +130,9 @@ Als je een smart contract wallet gebruikt, zoals een multisig (bijv. een Safe) d Voordat je de transfer start, moet je beslissen welk wallet adres de curatie op L2 zal bezitten (zie "De L2 wallet kiezen" hierboven) en wordt het aanbevolen om al wat ETH voor gas op Arbitrum te hebben voor het geval je de uitvoering van het bericht op L2 opnieuw moet uitvoeren. Je kunt ETH kopen op sommige beurzen en deze rechtstreeks naar je Arbitrum wallet sturen, of je kunt de Arbitrum bridge gebruiken om ETH van een mainnet wallet naar L2 te sturen: [bridge.arbitrum.io](http://bridge.arbitrum.io) - aangezien de gasprijzen op Arbitrum zo laag zijn, heb je waarschijnlijk maar een kleine hoeveelheid nodig, 0.01 ETH is waarschijnlijk meer dan genoeg. -Als een subgraph waar je curatie signaal op hebt naar L2 is verstuurd, zie je een bericht op de Explorer die je vertelt dat je curatie hebt op een subgraph die een transfer heeft gemaakt naar L2. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -Wanneer je naar de subgraph pagina kijkt, kun je ervoor kiezen om de curatie op te nemen of over te dragen naar L2. Door op "Transfer Signal to Arbitrum" te klikken, worden de Transfer Tools geopend. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Als dit het geval is, moet je verbinding maken met een L2 wallet die wat ETH op ## Jouw curatie opnemen op L1 -Als je je GRT liever niet naar L2 stuurt, of als je de GRT handmatig over de brug wilt sturen, kun je je gecureerde GRT op L1 opnemen. Kies op de banner op de subgraph pagina "Withdraw Signal" en bevestig de transactie; de GRT wordt naar uw Curator adres gestuurd.
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/nl/archived/sunrise.mdx b/website/src/pages/nl/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/nl/archived/sunrise.mdx +++ b/website/src/pages/nl/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. 
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. 
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
diff --git a/website/src/pages/nl/global.json b/website/src/pages/nl/global.json index c90c8a637061..cbe24cf340a5 100644 --- a/website/src/pages/nl/global.json +++ b/website/src/pages/nl/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Description", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Description", + "liveResponse": "Live Response", + "example": "Example" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/nl/index.json b/website/src/pages/nl/index.json index bf2fa6bdc70b..200a19192e1c 100644 --- a/website/src/pages/nl/index.json +++ b/website/src/pages/nl/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -44,7 +44,7 @@ "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Documentatie", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." 
}, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/nl/indexing/chain-integration-overview.mdx b/website/src/pages/nl/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/nl/indexing/chain-integration-overview.mdx +++ b/website/src/pages/nl/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/nl/indexing/new-chain-integration.mdx b/website/src/pages/nl/indexing/new-chain-integration.mdx index e45c4b411010..670e06c752c3 100644 --- a/website/src/pages/nl/indexing/new-chain-integration.mdx +++ b/website/src/pages/nl/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. 
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. 
(It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/).
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/nl/indexing/overview.mdx b/website/src/pages/nl/indexing/overview.mdx index f797c80855e5..89c13c8ab279 100644 --- a/website/src/pages/nl/indexing/overview.mdx +++ b/website/src/pages/nl/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexeers zijn node-operators in The Graph Netwerk die Graph Tokens (GRT) inzett GRT dat in het protocol wordt ingezet, is onderheven aan een ontdooiperiode en kan worden geslashed als Indexers schadelijke acties ondernemen, onjuiste data aan applicaties leveren of als ze onjuist indexeren. Indexers verdienen ook beloningen voor gedelegeerde inzet van Delegators om te contributeren aan het netwerk. -Indexeerders selecteren subgraphs om te indexeren op basis van het curatiesignaal van de subgraph, waar Curatoren GRT inzetten om aan te geven welke subgraphs van hoge kwaliteit zijn en prioriteit moeten krijgen. Consumenten (bijv. applicaties) kunnen ook parameters instellen voor welke Indexeerders queries voor hun subgraphs verwerken en voorkeuren instellen voor de prijs van querykosten. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. 
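As a toy illustration of how indexing rewards are apportioned, issuance is split across Subgraphs by each one's share of total curation signal, then split across Indexers by their share of allocated stake on that Subgraph, as this FAQ describes. The sketch below uses entirely made-up numbers and is not protocol code:

```python
# Illustrative sketch only: apportion a reward pool first by each
# Subgraph's share of total curation signal, then by each Indexer's
# share of allocated stake on that Subgraph. All figures are hypothetical.

def split_rewards(pool_grt, signal_by_subgraph, stake_by_indexer):
    total_signal = sum(signal_by_subgraph.values())
    rewards = {}
    for subgraph, signal in signal_by_subgraph.items():
        # This Subgraph's slice of the pool, proportional to its signal.
        subgraph_pool = pool_grt * signal / total_signal
        stakes = stake_by_indexer[subgraph]
        total_stake = sum(stakes.values())
        for indexer, stake in stakes.items():
            # Each Indexer's slice, proportional to allocated stake.
            rewards[indexer] = rewards.get(indexer, 0.0) + subgraph_pool * stake / total_stake
    return rewards

rewards = split_rewards(
    1000.0,                                 # hypothetical reward pool in GRT
    {"sg-a": 75.0, "sg-b": 25.0},           # hypothetical curation signal per Subgraph
    {"sg-a": {"indexer-1": 60.0, "indexer-2": 40.0},
     "sg-b": {"indexer-1": 100.0}},
)
print(rewards)  # indexer-1: 450.0 + 250.0 = 700.0, indexer-2: 300.0
```

Note the example omits the eligibility condition from the FAQ: in the real protocol an allocation only earns rewards if it is closed with a valid POI.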
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. 
A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. 
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexer's interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
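The threshold behavior just described can be sketched as follows. This is a hypothetical helper, not the indexer-agent's actual implementation; only the rule field names (`minStake`, `minSignal`, `maxSignal`, `minAverageQueryFees`) come from the documentation, and a deployment is chosen when any non-null threshold is met, mirroring the "above (or below) any of the thresholds" wording:

```python
# Hypothetical sketch of the documented threshold check: a deployment is
# chosen for indexing when any non-null threshold on the rule matches the
# values fetched from the network for that deployment.

def matches_rule(rule, network_values):
    checks = [
        ("minStake", lambda v, t: v >= t),              # stake above minimum
        ("minSignal", lambda v, t: v >= t),             # signal above minimum
        ("maxSignal", lambda v, t: v <= t),             # signal below maximum
        ("minAverageQueryFees", lambda v, t: v >= t),   # fees above minimum
    ]
    for field, satisfied in checks:
        threshold = rule.get(field)
        if threshold is not None and satisfied(network_values[field], threshold):
            return True
    return False

# The global rule from the text: minStake of 5 GRT.
global_rule = {"decisionBasis": "rules", "minStake": 5}
deployment = {"minStake": 12, "minSignal": 0, "maxSignal": 0, "minAverageQueryFees": 0}
print(matches_rule(global_rule, deployment))  # True: 12 GRT allocated exceeds the 5 GRT threshold
```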
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks. 
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
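The parts-per-million delegation parameters from the `setDelegationParameters(950000, 600000, 500)` example earlier on this page can be sanity-checked with a small sketch. The arithmetic below is illustrative only, with hypothetical GRT amounts:

```python
# Illustrative arithmetic only: delegation parameters are expressed in parts
# per million (ppm). 950000 ppm sends 95% of query rebates to the Indexer,
# and 600000 ppm sends 60% of indexing rewards to the Indexer; Delegators
# receive the remainder in each case.

PPM = 1_000_000

def split(amount_grt, cut_ppm):
    indexer_share = amount_grt * cut_ppm / PPM
    return indexer_share, amount_grt - indexer_share

query_fees = split(200.0, 950_000)   # hypothetical 200 GRT of query rebates
indexing = split(1000.0, 600_000)    # hypothetical 1000 GRT of indexing rewards
print(query_fees)  # (190.0, 10.0) -> 95% Indexer / 5% Delegators
print(indexing)    # (600.0, 400.0) -> 60% Indexer / 40% Delegators
```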
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/nl/indexing/supported-network-requirements.mdx b/website/src/pages/nl/indexing/supported-network-requirements.mdx index 9bfbc8d0fefd..72b60947104e 100644 --- a/website/src/pages/nl/indexing/supported-network-requirements.mdx +++ b/website/src/pages/nl/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/nl/indexing/tap.mdx b/website/src/pages/nl/indexing/tap.mdx index 3bab672ab211..477534d63201 100644 --- a/website/src/pages/nl/indexing/tap.mdx +++ b/website/src/pages/nl/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Overview -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/nl/indexing/tooling/graph-node.mdx b/website/src/pages/nl/indexing/tooling/graph-node.mdx index 0250f14a3d08..edde8a157fd3 100644 --- a/website/src/pages/nl/indexing/tooling/graph-node.mdx +++ b/website/src/pages/nl/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
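As an illustration of the extra RPC functionality described above, a Subgraph with `callHandlers` relies on the provider answering `trace_filter` requests of roughly this shape (a sketch only; the block range and address are placeholders, not values from this guide):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "trace_filter",
  "params": [
    {
      "fromBlock": "0x100000",
      "toBlock": "0x100064",
      "toAddress": ["0x0000000000000000000000000000000000000000"]
    }
  ]
}
```

A provider that rejects this method (e.g. with a "method not found" error) cannot serve such Subgraphs, which is why trace-capable nodes are called out as a requirement.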
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -79,8 +79,8 @@ When it is running Graph Node exposes the following ports: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ When it is running Graph Node exposes the following ports: ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. 
When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and replicas can also be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). 
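The bullets above map onto the `chains` section of `config.toml`. A hedged sketch (node names, labels, shard names and URLs are placeholders; consult the Graph Node docs for the authoritative format):

```toml
[chains]
# Node responsible for block ingestion, matched by node_id.
ingestor = "block_ingestor_node"

# One table per network; multiple providers allow load splitting,
# and `features` marks archive/trace-capable nodes so Graph Node
# can prefer cheaper providers when a workload allows it.
[chains.mainnet]
shard = "primary"
provider = [
  { label = "mainnet-archive", url = "http://archive-node:8545", features = ["archive", "traces"] },
  { label = "mainnet-full", url = "http://full-node:8545", features = [] },
]
```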
@@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. 
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
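For example, a status query against that endpoint might look like the following (a sketch only; the deployment ID is a placeholder, and field names should be verified against the full schema):

```graphql
{
  indexingStatuses(subgraphs: ["Qm...yourDeploymentId"]) {
    subgraph
    health
    synced
    fatalError {
      message
      deterministic
    }
    chains {
      network
      latestBlock { number }
      chainHeadBlock { number }
    }
  }
}
```

Comparing `latestBlock` against `chainHeadBlock` is a quick way to gauge how far behind the chain head a deployment is.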
-#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. 
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. 
If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
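The removal flow described above reduces to a single command; a sketch under the assumption that `graphman` is on the path and pointed at your node's configuration (the config path and deployment hash are placeholders):

```bash
# Irreversibly deletes the deployment and all of its indexed data.
graphman --config /etc/graph-node/config.toml drop QmYourDeploymentHash
```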
diff --git a/website/src/pages/nl/indexing/tooling/graphcast.mdx b/website/src/pages/nl/indexing/tooling/graphcast.mdx index cbc12c17f95b..9a712c6dd64a 100644 --- a/website/src/pages/nl/indexing/tooling/graphcast.mdx +++ b/website/src/pages/nl/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Leer Meer diff --git a/website/src/pages/nl/resources/benefits.mdx b/website/src/pages/nl/resources/benefits.mdx index c02a029cb137..238e055693bd 100644 --- a/website/src/pages/nl/resources/benefits.mdx +++ b/website/src/pages/nl/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Signaal cureren op een subgraph is een optionele eenmalige, kostenneutrale actie (bijv. $1000 aan signaal kan worden gecureerd op een subgraph en later worden opgenomen - met het potentieel om rendementen te verdienen tijdens het proces). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/nl/resources/glossary.mdx b/website/src/pages/nl/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/nl/resources/glossary.mdx +++ b/website/src/pages/nl/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/nl/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/nl/resources/migration-guides/assemblyscript-migration-guide.mdx index 85f6903a6c69..aead2514ff51 100644 --- a/website/src/pages/nl/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/nl/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Features @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## How to upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to do an early if statement with a return in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing. ### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first.
```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/nl/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/nl/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/nl/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/nl/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. -> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated: if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## Migration CLI tool diff --git a/website/src/pages/nl/resources/roles/curating.mdx b/website/src/pages/nl/resources/roles/curating.mdx index 99c74778c9bd..a2f4fff13893 100644 --- a/website/src/pages/nl/resources/roles/curating.mdx +++ b/website/src/pages/nl/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Cureren --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. 
In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Hoe werkt het Signaleren -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Een curator kan ervoor kiezen om een signaal af te geven voor een specifieke subgraph versie, of ze kunnen ervoor kiezen om hun signaal automatisch te laten migreren naar de nieuwste versie van de subgraph. Beide strategieën hebben voordelen en nadelen. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. 
Automatische migratie van je signalering naar de nieuwste subgraphversie kan waardevol zijn om ervoor te zorgen dat je querykosten blijft ontvangen. Elke keer dat je signaleert, wordt een curatiebelasting van 1% in rekening gebracht. Je betaalt ook een curatiebelasting van 0,5% bij elke migratie. Subgraphontwikkelaars worden ontmoedigd om vaak nieuwe versies te publiceren - ze moeten een curatiebelasting van 0,5% betalen voor alle automatisch gemigreerde curatie-aandelen. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. 
+However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risico's 1. De querymarkt is nog jong bij het Graph Netwerk en er bestaat een risico dat je %APY lager kan zijn dan je verwacht door opkomende marktdynamiek. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Een subgraph kan stuk gaan door een bug. Een subgraph die stuk is gegenereerd geen querykosten. Als gevolg hiervan moet je wachten tot de ontwikkelaar de bug repareert en een nieuwe versie implementeert. - - Als je bent geabonneerd op de nieuwste versie van een subgraph, worden je curatieaandelen automatisch gemigreerd naar die nieuwe versie. Er is een curatiebelasting van 0,5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. 
This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Veelgestelde Vragen over Curatie ### Welk percentage van de querykosten verdienen curatoren? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### Hoe bepaal ik welke subgraphs van hoge kwaliteit zijn om op te signaleren? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. 
It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### Wat zijn de kosten voor het updaten van een subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. 
Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, i.e., 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### Hoe vaak kan ik mijn subgraph updaten? +### 4. How often can I update my Subgraph? -Het wordt aanbevolen om je subgraphs niet te vaak bij te werken. Zie de bovenstaande vraag voor meer details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### Kan ik mijn curatieaandelen verkopen? diff --git a/website/src/pages/nl/resources/subgraph-studio-faq.mdx b/website/src/pages/nl/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/nl/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/nl/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig.
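The two migration paths described in the curation FAQ above (a 1% tax when you signal on a version yourself, 0.5% when shares auto-migrate) can be sketched as simple arithmetic. This is a hedged illustration only: the rates come from the docs, but the function names are invented helpers, not protocol contract calls.

```typescript
// Curation tax rates from the FAQ above. Illustrative helpers only —
// these are not real protocol APIs.
const MANUAL_SIGNAL_TAX = 0.01 // 1% when signalling on a Subgraph version yourself
const AUTO_MIGRATE_TAX = 0.005 // 0.5% when shares auto-migrate to the newest version

// GRT effectively signalled after the tax is burned.
function signalAfterManualTax(grt: number): number {
  return grt * (1 - MANUAL_SIGNAL_TAX)
}

function signalAfterAutoMigration(grt: number): number {
  return grt * (1 - AUTO_MIGRATE_TAX)
}
```

For 1,000 GRT, the two paths leave roughly 990 and 995 GRT signalled, respectively, which is why subscribing to the newest version is the cheaper option.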
You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, like any other on the network. diff --git a/website/src/pages/nl/resources/tokenomics.mdx b/website/src/pages/nl/resources/tokenomics.mdx index 4a9b42ca6e0d..dac3383a28e7 100644 --- a/website/src/pages/nl/resources/tokenomics.mdx +++ b/website/src/pages/nl/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics.
Here’s ## Overview -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Backbone of blockchain data @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. 
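The 15k GRT example above is simple proportional arithmetic. A minimal sketch, assuming a flat effective rate — the function name is illustrative, and real delegation rewards also vary with the Indexer's query volume, performance, and reward cut:

```typescript
// Rough estimate only: actual rewards depend on the Indexer's performance,
// query fees served, and the effective reward cut offered to Delegators.
function estimatedAnnualReward(delegatedGrt: number, effectiveRate: number): number {
  return delegatedGrt * effectiveRate
}
```

15,000 GRT delegated at a 10% effective rate yields roughly 1,500 GRT per year, matching the example in the text.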
Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. 
They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. 
That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. 
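The 16x delegation cap described above can be sketched as a one-line clamp. This is an illustrative helper under the stated rule, not a protocol API:

```typescript
const MAX_DELEGATION_RATIO = 16

// Delegation beyond 16x the Indexer's self-stake is accepted but cannot be
// used until the Indexer raises their self-stake.
function usableDelegation(selfStakeGrt: number, delegatedGrt: number): number {
  return Math.min(delegatedGrt, selfStakeGrt * MAX_DELEGATION_RATIO)
}
```

An Indexer self-staking the 100,000 GRT minimum can therefore put at most 1,600,000 GRT of delegation to work; anything above that sits idle until self-stake increases.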
Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/nl/sps/introduction.mdx b/website/src/pages/nl/sps/introduction.mdx index b11c99dfb8e5..92d8618165dd 100644 --- a/website/src/pages/nl/sps/introduction.mdx +++ b/website/src/pages/nl/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
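Taking the ~3% issuance and ~1% burn figures above at face value, net supply growth is roughly 2% per year. A toy projection, assuming both rates stay constant (in reality the burn rate varies with network activity):

```typescript
// Toy model: constant annual issuance and burn rates, compounded yearly.
// The rates are approximations taken from the tokenomics text above.
function projectedSupply(
  initialSupply: number,
  issuanceRate: number,
  burnRate: number,
  years: number,
): number {
  let supply = initialSupply
  for (let i = 0; i < years; i++) {
    supply *= 1 + issuanceRate - burnRate
  }
  return supply
}
```

Starting from 10 billion GRT with 3% issuance and 1% burn, this gives about 10.2 billion after one year.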
### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/nl/sps/sps-faq.mdx b/website/src/pages/nl/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/nl/sps/sps-faq.mdx +++ b/website/src/pages/nl/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. 
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph.
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? 
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. 
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules, linking them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/nl/sps/triggers.mdx b/website/src/pages/nl/sps/triggers.mdx index 816d42cb5f12..66687aa21889 100644 --- a/website/src/pages/nl/sps/triggers.mdx +++ b/website/src/pages/nl/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). 
### Additional Resources diff --git a/website/src/pages/nl/sps/tutorial.mdx b/website/src/pages/nl/sps/tutorial.mdx index 9d568f422d31..fe78e2e9908f 100644 --- a/website/src/pages/nl/sps/tutorial.mdx +++ b/website/src/pages/nl/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Begin @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/nl/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/nl/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/nl/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. 
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
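The declared `eth_calls` mechanism discussed above is configured in the Subgraph manifest. A minimal sketch of such a declaration, assuming a hypothetical `Pool` contract with a `getPoolInfo` view function (the label before the colon is an arbitrary name for the cached result):

```yaml
eventHandlers:
  - event: Transfer(address indexed, address indexed, uint256)
    handler: handleTransfer
    # graph-node executes this call before invoking the handler and caches
    # the result in memory, so the handler's bind-and-call is served from cache.
    calls:
      poolInfo: Pool[event.address].getPoolInfo(event.params.to)
```

This declaration only takes effect on specVersion >= 1.2.0; on older manifests the handler's call falls back to a normal RPC request.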
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/nl/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/nl/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/nl/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
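With the `Post`/`Comment` schema above, the derived relationship can be traversed from both sides in a single query; a sketch using the example's entity names:

```graphql
{
  posts(first: 10) {
    id
    comments {
      id # derived: resolved through Comment.post, no array stored on Post
    }
  }
  comments(first: 10) {
    id
    post {
      id # reverse lookup from a Comment back to its Post
    }
  }
}
```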
diff --git a/website/src/pages/nl/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/nl/subgraphs/best-practices/grafting-hotfix.mdx index d514e1633c75..674cf6b87c62 100644 --- a/website/src/pages/nl/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/nl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/nl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/nl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
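The fixed-width ID layout behind `concatI32()` can be sketched in plain TypeScript. This is only an illustration of the byte layout, not the graph-ts implementation (which defines its own encoding); the helper name is borrowed for clarity:

```typescript
// Sketch: build a Bytes-style ID from a 32-byte transaction hash plus a
// 4-byte log index, instead of a concatenated hex string.
function concatI32(hash: Uint8Array, logIndex: number): Uint8Array {
  const out = new Uint8Array(hash.length + 4);
  out.set(hash, 0);
  // Append the log index as 4 bytes (big-endian here for illustration).
  new DataView(out.buffer).setInt32(hash.length, logIndex, false);
  return out;
}

const txHash = new Uint8Array(32).fill(0xab); // placeholder transaction hash
const id = concatI32(txHash, 7);
console.log(id.length); // 36: fixed width, no string formatting or parsing
```

The resulting ID is always 36 bytes, which is what makes `Bytes` IDs cheaper to index and compare than `hash.toHex() + "-" + logIndex.toString()` strings.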
diff --git a/website/src/pages/nl/subgraphs/best-practices/pruning.mdx b/website/src/pages/nl/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/nl/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <Number of blocks to retain>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/nl/subgraphs/best-practices/timeseries.mdx b/website/src/pages/nl/subgraphs/best-practices/timeseries.mdx index cacdc44711fe..9732199531a8 100644 --- a/website/src/pages/nl/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/nl/subgraphs/billing.mdx b/website/src/pages/nl/subgraphs/billing.mdx index c9f380bb022c..ec654ca63f55 100644 --- a/website/src/pages/nl/subgraphs/billing.mdx +++ b/website/src/pages/nl/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/nl/subgraphs/developing/creating/advanced.mdx b/website/src/pages/nl/subgraphs/developing/creating/advanced.mdx index ee9918f5f254..8dbc48253034 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Overview -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
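The sequential-versus-parallel arithmetic in the scenario above (total time is the sum of call durations when sequential, but the maximum when parallel) can be sketched with a toy model. This is illustrative only, not graph-node code; the 3, 2 and 4 second durations are the hypothetical ones from the example:

```typescript
// Toy model of the declared eth_call speedup: three call durations (seconds)
// for the transactions, balance, and token-holdings calls in the example.
const callDurations: number[] = [3, 2, 4];

// Sequential execution waits for each call in turn.
const sequentialTime = callDurations.reduce((total, d) => total + d, 0);

// Parallel execution is bounded only by the slowest call.
const parallelTime = Math.max(...callDurations);

console.log(`sequential: ${sequentialTime}s, parallel: ${parallelTime}s`);
// → sequential: 9s, parallel: 4s
```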
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/nl/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/nl/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2ac894695fe1..cd81dc118f28 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
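A manifest fragment of that shape, using the `NewGravatar` and `UpdatedGravatar` events from the example Subgraph this page references — a sketch only, not a complete `subgraph.yaml`:

```yaml
mapping:
  kind: ethereum/events
  apiVersion: 0.0.9
  language: wasm/assemblyscript
  entities:
    - Gravatar
  abis:
    - name: Gravity
      file: ./abis/Gravity.json
  eventHandlers:
    # Each handler name must match an exported function in the mapping file.
    - event: NewGravatar(uint256,address,string,string)
      handler: handleNewGravatar
    - event: UpdatedGravatar(uint256,address,string,string)
      handler: handleUpdatedGravatar
  file: ./src/mapping.ts
```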
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx index 35bb04826c98..5be2530c4d6b 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Version | Release notes | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. 
Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. 
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them.
The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
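A minimal sketch of such a `context` block (the data source name and values are hypothetical; each key declares a `type`/`data` pair, and `BigInt` data is quoted as the list above requires):

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: mainnet
    context:
      chainName:
        type: String
        data: 'mainnet'
      indexingEnabled:
        type: Bool
        data: true
      startWeight:
        type: BigInt
        data: '12345678901234567890'
```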
diff --git a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..65e8e3d4a8a3 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
diff --git a/website/src/pages/nl/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/nl/subgraphs/developing/creating/install-the-cli.mdx index 8bf0b4dfca9f..004d0f94c99e 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Creëer een Subgraph ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/nl/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/nl/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..7e0f889447c5 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
#### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore in a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Languages supported diff --git a/website/src/pages/nl/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/nl/subgraphs/developing/creating/starting-your-subgraph.mdx index 4823231d9a40..180a343470b1 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +## SpecVersion Releases + +| Version | Release notes | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/nl/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/nl/subgraphs/developing/creating/subgraph-manifest.mdx index a42a50973690..78e4a3a55e7d 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available to query.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Defining a Call Handler @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Supported Filters @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
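The dispatch rules above can be modeled in a few lines of plain TypeScript. This is only an illustration of the documented semantics (Graph Node's real scheduling is internal to the indexer, and the names below are made up for the sketch): an unfiltered block handler fires for every block, while a `call`-filtered one fires only when the block contains at least one call to the data source contract.

```typescript
// Illustrative model of block-handler dispatch; not Graph Node internals.
interface Block {
  number: number
  calledContracts: string[] // addresses of contracts called in this block
}

// A handler with `filter.kind: call` runs only if the block contains at
// least one call to the data source contract.
function callFilteredHandlerRuns(block: Block, dataSourceAddress: string): boolean {
  return block.calledContracts.includes(dataSourceAddress)
}

// A handler with no filter runs for every block.
function unfilteredHandlerRuns(_block: Block): boolean {
  return true
}
```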
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/nl/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/nl/subgraphs/developing/creating/unit-testing-framework.mdx index 2133c1d4b5c9..e56e1109bc04 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 
👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead, we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as` ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/nl/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/nl/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/nl/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/nl/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
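As a companion to the Mustache approach, the `networks.json` file consumed by the `--network` option maps each network name to per-data-source overrides. A minimal sketch (the `Gravity` data source name and the addresses are placeholders):

```json
{
  "mainnet": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000000",
      "startBlock": 6627917
    }
  },
  "sepolia": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000000",
      "startBlock": 0
    }
  }
}
```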
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever.
However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/nl/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/nl/subgraphs/developing/deploying/using-subgraph-studio.mdx index 04fca3fb140a..370e428284cc 100644 --- a/website/src/pages/nl/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/nl/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
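Returning to the indexing-status query above: the `chainHeadBlock` / `latestBlock` comparison it describes can be sketched client-side like this (a hypothetical helper, not part of any Graph tooling; block numbers are treated as strings, as the index-node endpoint serializes them):

```typescript
// Hypothetical helper: given the chainHeadBlock and latestBlock fields from
// an indexingStatuses response, report how far the Subgraph lags the chain head.
interface BlockPointer {
  number: string
}

function blocksBehind(chainHead: BlockPointer, latest: BlockPointer): number {
  // Both numbers arrive as strings; convert before subtracting.
  return Number(chainHead.number) - Number(latest.number)
}

// Example: chain head at block 21000000, Subgraph at block 20999990
console.log(blocksBehind({ number: '21000000' }, { number: '20999990' })) // → 10
```

A lag that keeps growing across polls suggests the Subgraph is falling behind rather than merely catching up.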
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Must not use any of the following features: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/nl/subgraphs/developing/developer-faq.mdx b/website/src/pages/nl/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/nl/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/nl/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/nl/subgraphs/developing/introduction.mdx b/website/src/pages/nl/subgraphs/developing/introduction.mdx index 615b6cec4c9c..06bc2b76104d 100644 --- a/website/src/pages/nl/subgraphs/developing/introduction.mdx +++ b/website/src/pages/nl/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
diff --git a/website/src/pages/nl/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/nl/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/nl/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/nl/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/nl/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/nl/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/nl/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/nl/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/nl/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/nl/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/nl/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/nl/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/nl/subgraphs/developing/subgraphs.mdx b/website/src/pages/nl/subgraphs/developing/subgraphs.mdx index 951ec74234d1..b5a75a88e94f 100644 --- a/website/src/pages/nl/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/nl/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/nl/subgraphs/explorer.mdx b/website/src/pages/nl/subgraphs/explorer.mdx index be848f2d0201..3df0b99d43ca 100644 --- a/website/src/pages/nl/subgraphs/explorer.mdx +++ b/website/src/pages/nl/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Verkenner --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Het toevoegen/weghalen van signaal op een subgraph +- Signal/Un-signal on Subgraphs - Details zoals grafieken, huidige implementatie-ID en andere metadata -- Schakel tussen versies om eerdere iteraties van de subgraph te verkennen -- Query subgraphs via GraphQL -- Subgraphs testen in de playground -- Bekijk de indexeerders die indexeren op een bepaalde subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraphstatistieken (allocaties, curatoren, etc.) -- Bekijk de entiteit die de subgraph heeft gepubliceerd +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - het maximale bedrag aan gedelegeerde inzet dat de Indexer productief kan accepteren. Een teveel aan gedelegeerde inzet kan niet worden gebruikt voor allocaties of beloningsberekeningen. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curatoren -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve.
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**.
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraph Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Dit gedeelte bevat ook details over uw netto indexeringsbeloningen en netto querykosten. 
U zult de volgende metrics zien: @@ -223,13 +223,13 @@ Houd er rekening mee dat deze grafiek horizontaal scrollbaar is, dus als u helem ### Curating Tab -Op de Curating Tab vind je alle subgraphs waarop je signaleert (dit stelt je in staat om querykosten te ontvangen). Singaleren stelt Curatoren in staat om aan Indexeerders te laten zien welke subgraphs waardevol en betrouwbaar zijn, wat aangeeft dat ze geïndexeerd moeten worden. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. Binnen deze tab vind je een overzicht van: -- Alle subgraphs waarop je cureert met signaaldetails -- Totale aandelen per subgraph -- Querybeloningen per subgraph +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Gegevens van de bijwerkdatum ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/nl/subgraphs/guides/_meta.js b/website/src/pages/nl/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/nl/subgraphs/guides/_meta.js +++ b/website/src/pages/nl/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/nl/subgraphs/guides/arweave.mdx b/website/src/pages/nl/subgraphs/guides/arweave.mdx index 08e6c4257268..e957c2d61226 100644 --- a/website/src/pages/nl/subgraphs/guides/arweave.mdx +++ b/website/src/pages/nl/subgraphs/guides/arweave.mdx @@ -1,50 +1,50 @@ --- -title: Building Subgraphs on Arweave +title: Bouwen van Subgraphs op Arweave --- > Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! 
-In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +In deze gids leer je hoe je Subgraphs bouwt en implementeert om de Arweave blockchain te indexeren. -## What is Arweave? +## Wat is Arweave? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +Het Arweave protocol stelt ontwikkelaars in staat om gegevens permanent op te slaan; dat is het voornaamste verschil tussen Arweave en IPFS, waar IPFS deze functie mist, en bestanden die op Arweave zijn opgeslagen, kunnen niet worden gewijzigd of verwijderd. -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check: +Arweave heeft al talloze bibliotheken gebouwd voor het integreren van het protocol in verschillende programmeertalen. Voor meer informatie kun je kijken op: - [Arwiki](https://arwiki.wiki/#/en/main) - [Arweave Resources](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## Wat zijn Arweave Subgraphs? The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). [Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. -## Building an Arweave Subgraph +## Bouwen van een Arweave Subgraph -To be able to build and deploy Arweave Subgraphs, you need two packages: +Om Arweave Subgraphs te kunnen bouwen en implementeren, heb je twee pakketten nodig: 1.
`@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. 2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. -## Subgraph's components +## Subgraph's componenten There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +Definieert gegevensbronnen die van belang zijn en hoe deze verwerkt moeten worden. Arweave is een nieuw type gegevensbron. ### 2. Schema - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +Hier definieer je welke gegevens je wilt kunnen opvragen na het indexeren van je Subgraph met behulp van GraphQL. Dit lijkt eigenlijk op een model voor een API, waarbij het model de structuur van een verzoek definieert. The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +Dit is de logica die bepaalt hoe gegevens moeten worden opgehaald en opgeslagen wanneer iemand interactie heeft met de gegevensbronnen waarnaar je luistert. De gegevens worden vertaald en opgeslagen op basis van het schema dat je hebt opgesteld.
During Subgraph development there are two key commands: @@ -84,17 +84,17 @@ dataSources: - Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Arweave-databronnen introduceren een optioneel `source.owner`-veld, dat de publieke sleutel van een Arweave-wallet is -Arweave data sources support two types of handlers: +Arweave-databronnen ondersteunen twee typen handlers: - `blockHandlers` - Run on every new Arweave block. No source.owner is required. - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` > The source.owner can be the owner's address, or their Public Key. - +> > Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. ## Schema Definition diff --git a/website/src/pages/nl/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/nl/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..ab5076c5ebf4 100644 --- a/website/src/pages/nl/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/nl/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains.
+## Overview -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,47 +18,59 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. 
-List chains added with: +To list the added chains, run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress @@ -66,11 +82,11 @@ or cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data?
Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/nl/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/nl/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..bf62dc4dde30 --- /dev/null +++ b/website/src/pages/nl/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Improve your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+ +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e.
you can’t use normal event, call, or block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs. + +## Begin + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above.
+ +### Step 3. Deploy Block Size Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
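To make the mechanics above concrete, here is a minimal sketch of how a composed Subgraph's manifest can declare a source Subgraph as a data source. Field names follow the graph-node v0.37.0 release notes linked in the prerequisites; the deployment ID, file paths, entity names, and handler name are illustrative placeholders, not values taken from the example repository:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  # A `subgraph`-kind data source: entities saved by the source Subgraph
  # act as triggers for the handlers below.
  - kind: subgraph
    name: BlockTime
    network: mainnet
    source:
      # Deployment ID of the source Subgraph (placeholder). Re-deploying
      # the source changes this ID, so update it here afterwards.
      address: 'QmSourceSubgraphDeploymentId'
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - BlockStats
      handlers:
        # Invoked for each `Block` entity the source Subgraph stores.
        - handler: handleBlock
          entity: Block
```

Since this is a config fragment rather than runnable code, treat it as a starting point and verify the field names against the release notes before deploying.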
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/nl/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/nl/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..9a4b037cafbc 100644 --- a/website/src/pages/nl/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/nl/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment diff --git a/website/src/pages/nl/subgraphs/querying/best-practices.mdx b/website/src/pages/nl/subgraphs/querying/best-practices.mdx index ff5f381e2993..ab02b27cbc03 100644 --- a/website/src/pages/nl/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/nl/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. 
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/nl/subgraphs/querying/from-an-application.mdx b/website/src/pages/nl/subgraphs/querying/from-an-application.mdx index 27ee9b282f9a..bf6f8f1a5817 100644 --- a/website/src/pages/nl/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/nl/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Querying from an Application +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
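The single-query batching practice above (query the plural entity once with `where: {id_in: [...]}` rather than issuing one request per record) can be sketched as follows. The `tokens` entity and its fields are hypothetical names for illustration:

```python
# Build one batched GraphQL query for N records instead of N separate queries.
# The `tokens` entity name and its `id`/`symbol` fields are hypothetical.

def batched_query(ids):
    id_list = ", ".join(f'"{i}"' for i in ids)
    return "{ tokens(where: {id_in: [" + id_list + "]}) { id symbol } }"

query = batched_query(["0xaa", "0xbb", "0xcc"])
print(query)
```

One round trip fetches all three records, which is both cheaper for the client and friendlier to the indexer than three sequential requests.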
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d

### Subgraph Studio Endpoint

-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:

```
https://api.studio.thegraph.com/query///
```

@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///

### The Graph Network Endpoint

-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:

```
https://gateway.thegraph.com/api//subgraphs/id/
```

-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
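Any plain HTTP client can talk to such an endpoint: a GraphQL query is just a JSON `POST` body. A minimal stdlib sketch — the endpoint is a placeholder, and the request is built but deliberately not sent, so the example stays self-contained:

```python
# Build (but do not send) a POST request against a Subgraph query endpoint.
# The endpoint URL is a placeholder; a real one embeds your API key and ID.
import json
import urllib.request

ENDPOINT = "https://gateway.thegraph.com/api/my-api-key/subgraphs/id/ExampleSubgraphID"

def build_request(endpoint, query, variables=None):
    payload = json.dumps({"query": query, "variables": variables or {}})
    return urllib.request.Request(
        endpoint,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",  # gateway endpoints expect POST for GraphQL queries
    )

req = build_request(ENDPOINT, "{ _meta { block { number } } }")
print(req.get_method())
```

Sending it is one call to `urllib.request.urlopen(req)`; client libraries such as `graph-client` layer retries, typing, and pagination on top of exactly this kind of request.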
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Stap 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Stap 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Stap 1 diff --git a/website/src/pages/nl/subgraphs/querying/graph-client/README.md b/website/src/pages/nl/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/nl/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/nl/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/nl/subgraphs/querying/graphql-api.mdx b/website/src/pages/nl/subgraphs/querying/graphql-api.mdx index b3003ece651a..e10201771989 100644 --- a/website/src/pages/nl/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/nl/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
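The `highestValue` fetch strategy described earlier can be sketched as: issue the same query against several indexers and keep the response whose `_meta.block.number` is highest. Here the parallel requests are stubbed out with ready-made responses so the sketch stays self-contained:

```python
# Sketch of a `highestValue`-style strategy: given responses to the same query
# from different indexers, keep the one indexed to the highest block.
# Real clients (e.g. graph-client) perform the fan-out requests in parallel.

def pick_highest_value(responses):
    return max(responses, key=lambda r: r["data"]["_meta"]["block"]["number"])

responses = [
    {"data": {"_meta": {"block": {"number": 100}}, "tokens": ["stale"]}},
    {"data": {"_meta": {"block": {"number": 105}}, "tokens": ["fresh"]}},
]
best = pick_highest_value(responses)
print(best["data"]["_meta"]["block"]["number"])  # → 105
```

This trades extra query volume for freshness: you pay for every fan-out request but always read the most synced copy.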
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. 
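The `_change_block` filter described earlier lends itself to a simple polling loop: remember the last block you processed and ask only for entities changed at or after it. The `tokens` entity name below is hypothetical:

```python
# Poll-for-changes sketch using the `_change_block(number_gte: ...)` filter:
# only entities touched at or after `last_block` are requested, so repeated
# polls stay cheap. The `tokens` entity is a hypothetical example.

def changed_since(last_block):
    return (
        "{ tokens(where: {_change_block: {number_gte: %d}}) { id } }"
        % last_block
    )

query = changed_since(14486056)
print(query)
```

After each poll, update `last_block` from the response's `_meta.block.number` so the next query picks up exactly where this one left off.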
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. 
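A client can treat the `_meta` object described above as a health check before trusting query results — for example, rejecting data when the Subgraph has hit indexing errors or lags too far behind a known chain head. A minimal sketch over a sample `_meta` response (the lag threshold is an arbitrary assumption):

```python
# Sanity-check a `_meta` response: refuse results if the Subgraph has
# indexing errors, or if its latest indexed block trails the chain head
# by more than an acceptable lag (threshold chosen arbitrarily here).

def is_healthy(meta, chain_head, max_lag=10):
    if meta["hasIndexingErrors"]:
        return False
    return chain_head - meta["block"]["number"] <= max_lag

meta = {"hasIndexingErrors": False, "block": {"number": 19000000}}
print(is_healthy(meta, chain_head=19000005))  # → True
print(is_healthy(meta, chain_head=19000050))  # → False
```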
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/nl/subgraphs/querying/introduction.mdx b/website/src/pages/nl/subgraphs/querying/introduction.mdx index 36ea85c37877..2c9c553293fa 100644 --- a/website/src/pages/nl/subgraphs/querying/introduction.mdx +++ b/website/src/pages/nl/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/nl/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/nl/subgraphs/querying/managing-api-keys.mdx index 6964b1a7ad9b..aed3d10422e1 100644 --- a/website/src/pages/nl/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/nl/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/nl/subgraphs/querying/python.mdx b/website/src/pages/nl/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/nl/subgraphs/querying/python.mdx +++ b/website/src/pages/nl/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/nl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/nl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/nl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/nl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this requires manually updating the query code every time a new version of the Subgraph is published.

Example endpoint that uses Deployment ID:

@@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID:

## Subgraph ID

-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.

-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.

Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
diff --git a/website/src/pages/nl/subgraphs/quick-start.mdx b/website/src/pages/nl/subgraphs/quick-start.mdx
index 746891a192bb..7efec0891fa6 100644
--- a/website/src/pages/nl/subgraphs/quick-start.mdx
+++ b/website/src/pages/nl/subgraphs/quick-start.mdx
@@ -2,7 +2,7 @@ title: Snelle Start
---

-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
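The two ID types above differ only in which path segment the gateway URL uses, so a small helper makes the trade-off explicit: pin one immutable version via Deployment ID, or follow the latest version via Subgraph ID. The IDs below are placeholders, and the `/deployments/id/` path for Deployment-ID queries is an assumption based on the gateway's URL conventions:

```python
# Contrast the two endpoint forms. API key and IDs are placeholder values;
# the `/deployments/id/` path segment is an assumed gateway convention.

BASE = "https://gateway.thegraph.com/api/{key}"

def by_deployment(api_key, deployment_id):
    # Pins queries to one immutable Subgraph version.
    return BASE.format(key=api_key) + "/deployments/id/" + deployment_id

def by_subgraph(api_key, subgraph_id):
    # Follows the latest published version (may lag while a new version syncs).
    return BASE.format(key=api_key) + "/subgraphs/id/" + subgraph_id

print(by_subgraph("my-api-key", "ExampleSubgraphID"))
```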
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-See the following screenshot for an example for what to expect when initializing your subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

![Subgraph command](/img/CLI-Example.png)

-### 4. Edit your subgraph
+### 4. Edit your Subgraph

-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.

-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the Subgraph, you will mainly work with three files:

-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph.
- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema.

-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).

-### 5. Deploy your subgraph
+### 5. Deploy your Subgraph

> Remember, deploying is not the same as publishing.

-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network.
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. 
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/nl/substreams/developing/dev-container.mdx b/website/src/pages/nl/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/nl/substreams/developing/dev-container.mdx +++ b/website/src/pages/nl/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/nl/substreams/developing/sinks.mdx b/website/src/pages/nl/substreams/developing/sinks.mdx index 5f6f9de21326..48c246201e8f 100644 --- a/website/src/pages/nl/substreams/developing/sinks.mdx +++ b/website/src/pages/nl/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/nl/substreams/developing/solana/account-changes.mdx b/website/src/pages/nl/substreams/developing/solana/account-changes.mdx index a282278c7d91..8c821acaee3f 100644 --- a/website/src/pages/nl/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/nl/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). 
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/nl/substreams/developing/solana/transactions.mdx b/website/src/pages/nl/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/nl/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/nl/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. 
### SQL diff --git a/website/src/pages/nl/substreams/introduction.mdx b/website/src/pages/nl/substreams/introduction.mdx index e11174ee07c8..0bd1ea21c9f6 100644 --- a/website/src/pages/nl/substreams/introduction.mdx +++ b/website/src/pages/nl/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/nl/substreams/publishing.mdx b/website/src/pages/nl/substreams/publishing.mdx index 3d1a3863c882..3d93e6f9376f 100644 --- a/website/src/pages/nl/substreams/publishing.mdx +++ b/website/src/pages/nl/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! 
You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/nl/supported-networks.mdx b/website/src/pages/nl/supported-networks.mdx index db62d74d039d..9ba4b8d0ab99 100644 --- a/website/src/pages/nl/supported-networks.mdx +++ b/website/src/pages/nl/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. 
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/nl/token-api/_meta-titles.json b/website/src/pages/nl/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/nl/token-api/_meta-titles.json +++ b/website/src/pages/nl/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/nl/token-api/_meta.js b/website/src/pages/nl/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/nl/token-api/_meta.js +++ b/website/src/pages/nl/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/nl/token-api/faq.mdx b/website/src/pages/nl/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/nl/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? 
+ +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer <token>` header with the correct, non-expired token. 
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. 
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer <token>`. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/nl/token-api/mcp/claude.mdx b/website/src/pages/nl/token-api/mcp/claude.mdx index 0da8f2be031d..12a036b6fc24 100644 --- a/website/src/pages/nl/token-api/mcp/claude.mdx +++ b/website/src/pages/nl/token-api/mcp/claude.mdx @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/nl/token-api/mcp/cline.mdx b/website/src/pages/nl/token-api/mcp/cline.mdx index ab54c0c8f6f0..ef98e45939fe 100644 --- a/website/src/pages/nl/token-api/mcp/cline.mdx +++ b/website/src/pages/nl/token-api/mcp/cline.mdx @@ -10,7 +10,7 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) ## Configuration diff --git a/website/src/pages/nl/token-api/quick-start.mdx b/website/src/pages/nl/token-api/quick-start.mdx index 4653c3d41ac6..b1b07812ba97 100644 --- a/website/src/pages/nl/token-api/quick-start.mdx +++ b/website/src/pages/nl/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Snelle Start --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/pl/about.mdx b/website/src/pages/pl/about.mdx index 199bc6a77400..abfc28d9390b 100644 --- a/website/src/pages/pl/about.mdx +++ b/website/src/pages/pl/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. 
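The paragraph above notes that indexed Subgraphs are queried with a standard GraphQL API. As a minimal sketch of what that looks like from a client, the request body can be built as follows; the `tokens` entity and its fields are hypothetical placeholders, since real names come from a given Subgraph's schema:

```typescript
// Hypothetical entity/field names; a real query must match the Subgraph's schema.
const query: string = `{
  tokens(first: 5) {
    id
    symbol
  }
}`;

// This JSON body would be POSTed to the Subgraph's query URL, e.g.:
// await fetch(queryUrl, { method: "POST", headers: { "Content-Type": "application/json" }, body });
const body: string = JSON.stringify({ query });
console.log(body);
```

The `fetch` call is shown only in a comment because the query URL is Subgraph-specific; the runnable part just demonstrates the payload shape.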
Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. 
-The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![Grafika wyjaśniająca sposób w jaki protokół The Graph wykorzystuje węzeł Graph Node by obsługiwać zapytania dla konsumentów danych](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Proces ten przebiega według poniższych kroków: 1. Aplikacja dApp dodaje dane do sieci Ethereum za pomocą transakcji w smart kontrakcie. 2. Inteligentny kontrakt emituje jedno lub więcej zdarzeń podczas przetwarzania transakcji. -3. Graph Node nieprzerwanie skanuje sieć Ethereum w poszukiwaniu nowych bloków i danych dla Twojego subgraphu, które mogą one zawierać. -4. Graph Node znajduje zdarzenia Ethereum dla Twojego subgraphu w tych blokach i uruchamia dostarczone przez Ciebie procedury mapowania. Mapowanie to moduł WASM, który tworzy lub aktualizuje jednostki danych przechowywane przez węzeł Graph Node w odpowiedzi na zdarzenia Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Aplikacja dApp wysyła zapytanie do węzła Graph Node o dane zindeksowane na blockchainie, korzystając z [punktu końcowego GraphQL](https://graphql.org/learn/). Węzeł Graph Node przekształca zapytania GraphQL na zapytania do swojego podstawowego magazynu danych w celu pobrania tych danych, wykorzystując zdolności indeksowania magazynu. Aplikacja dApp wyświetla te dane w interfejsie użytkownika dla użytkowników końcowych, którzy używają go do tworzenia nowych transakcji w sieci Ethereum. Cykl się powtarza. 
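The mapping behavior described in step 4 can be sketched as a plain-TypeScript analogue. Real mappings are AssemblyScript modules compiled to WASM that write to Graph Node's store; the event shape and the in-memory `Map` below are simplified stand-ins:

```typescript
// Simplified stand-in for a Subgraph mapping handler (see step 4 above).
// Real handlers are AssemblyScript compiled to WASM; the store here is an in-memory Map.
interface TransferEvent {
  from: string;
  to: string;
  amount: bigint;
}

const store = new Map<string, { id: string; balance: bigint }>();

// Creates or updates the recipient's entity in response to an event.
function handleTransfer(event: TransferEvent): void {
  const existing = store.get(event.to);
  const balance = (existing?.balance ?? 0n) + event.amount;
  store.set(event.to, { id: event.to, balance });
}

handleTransfer({ from: "0xabc", to: "0xdef", amount: 100n });
handleTransfer({ from: "0xabc", to: "0xdef", amount: 50n });
console.log(store.get("0xdef")?.balance); // 150n
```

In a real Subgraph, the handler would use the entity classes generated by `graph codegen` and persist them with `.save()`; this sketch only mirrors that create-or-update pattern.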
## Kolejne kroki -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/pl/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/pl/archived/arbitrum/arbitrum-faq.mdx index 8e3f51fe99c9..8322010a2d88 100644 --- a/website/src/pages/pl/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/pl/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Bezpieczeństwo jako spuścizna sieci Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. 
Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. W zeszłym roku społeczność The Graph postanowiła pójść o krok do przodu z Arbitrum po wynikach dyskusji [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ By w pełni wykorzystać wszystkie zalety używania protokołu The Graph na L2 w ![Przejście do listy zawierającej Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Co powinien wiedzieć na ten temat subgraf developer, konsument danych, indekser, kurator lub delegator? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Wszystko zostało dokładnie przetestowane i przygotowano plan awaryjny, aby zapewnić bezpieczne i płynne przeniesienie. Szczegóły można znaleźć [tutaj](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? 
diff --git a/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-faq.mdx index c7f851bd8d87..50b904d5ef38 100644 --- a/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con Narzędzia przesyłania L2 używają natywnego mechanizmu Arbitrum do wysyłania wiadomości z L1 do L2. Mechanizm ten nazywany jest "ponowny bilet" i jest używany przez wszystkie natywne mosty tokenowe, w tym most Arbitrum GRT. Więcej informacji na temat "ponownych biletów" można znaleźć w [dokumentacji Arbitrum](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Kiedy przenosisz swoje aktywa (subgraph, stake, delegowanie lub kuratorstwo) do L2, wiadomość jest wysyłana przez most Arbitrum GRT, który tworzy bilet z możliwością ponownej próby w L2. Narzędzie transferu zawiera pewną wartość ETH w transakcji, która jest wykorzystywana do 1) zapłaty za utworzenie biletu i 2) zapłaty za gaz do wykonania biletu w L2. Ponieważ jednak ceny gazu mogą się różnić w czasie do momentu, gdy bilet będzie gotowy do zrealizowania w L2, możliwe jest, że ta próba automatycznego wykonania zakończy się niepowodzeniem. Gdy tak się stanie, most Arbitrum utrzyma ten bilet aktywnym przez maksymalnie 7 dni, i każdy może ponowić próbę "zrealizowania" biletu (co wymaga portfela z pewną ilością ETH pzesłanego do Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. 
However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Nazywamy to etapem "Potwierdzenia" we wszystkich narzędziach do przesyłania - w większości przypadków będzie on wykonywany automatycznie, ponieważ najczęściej kończy się sukcesem, ale ważne jest, aby sprawdzić i upewnić się, że się powiódł. Jeśli się nie powiedzie i w ciągu 7 dni nie będzie skutecznych ponownych prób, most Arbitrum odrzuci bilet, a twoje zasoby ( subgraf, stake, delegowanie lub kuratorstwo) zostaną utracone i nie będzie można ich odzyskać. Główni programiści Graph mają system monitorowania, który wykrywa takie sytuacje i próbuje zrealizować bilety, zanim będzie za późno, ale ostatecznie to ty jesteś odpowiedzialny za zapewnienie, że przesyłanie zostanie zakończone na czas. Jeśli masz problemy z potwierdzeniem transakcji, skontaktuj się z nami za pomocą [tego formularza] (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms), a nasi deweloperzy udzielą Ci pomocy. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. 
If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## Subgraph Transfer -### Jak mogę przenieść swój subgraph? +### How do I transfer my Subgraph? -Aby przesłać swój subgraf, należy wykonać następujące kroki: +To transfer your Subgraph, you will need to complete the following steps: 1. Zainicjuj przesyłanie w sieci głównej Ethereum 2. Poczekaj 20 minut na potwierdzenie -3. Potwierdź przesyłanie subgrafu na Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Zakończ publikowanie subgrafu na Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Zaktualizuj adres URL zapytania (zalecane) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Skąd powinienem zainicjować przesyłanie?
-Przesyłanie można zainicjować ze strony [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) lub dowolnej strony zawierającej szczegóły subgrafu. Kliknij przycisk "Prześlij subgraf " na tej stronie, aby zainicjować proces przesyłania. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Jak długo muszę czekać, aż mój subgraf zostanie przesłany +### How long do I need to wait until my Subgraph is transferred Przesyłanie trwa około 20 minut. Most Arbitrum działa w tle, automatycznie kończąc przesyłanie danych. W niektórych przypadkach koszty gazu mogą wzrosnąć i konieczne będzie ponowne potwierdzenie transakcji. -### Czy mój subgraf będzie nadal wykrywalny po przesłaniu go do L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Twój subgraf będzie można znaleźć tylko w sieci, w której został opublikowany. Na przykład, jeśli subgraf znajduje się w Arbitrum One, można go znaleźć tylko w Eksploratorze w Arbitrum One i nie będzie można go znaleźć w Ethereum. Upewnij się, że wybrałeś Arbitrum One w przełączniku sieci u góry strony i że jesteś we właściwej sieci. Po przesłaniu subgraf L1 będzie oznaczony jako nieaktualny. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Czy mój subgraf musi zostać opublikowany, aby móc go przesłać? +### Does my Subgraph need to be published to transfer it? 
-Aby skorzystać z narzędzia do przesyłania subgrafów, musi on być już opublikowany w sieci głównej Ethereum i musi mieć jakiś sygnał kuratorski należący do portfela, który jest właścicielem subgrafu. Jeśli subgraf nie został opublikowany, zaleca się po prostu opublikowanie go bezpośrednio na Arbitrum One - związane z tym opłaty za gaz będą znacznie niższe. Jeśli chcesz przesłać opublikowany subgraf, ale konto właściciela nie ma na nim żadnego sygnału, możesz zasygnalizować niewielką kwotę (np. 1 GRT) z tego konta; upewnij się, że wybrałeś sygnał "automatycznej migracji". +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Co stanie się z wersją mojego subgrafu w sieci głównej Ethereum po przesłaniu go do Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Po przesłaniu subgrafu do Arbitrum, wersja głównej sieci Ethereum zostanie wycofana. Zalecamy zaktualizowanie adresu URL zapytania w ciągu 48 godzin. Istnieje jednak okres prolongaty, dzięki któremu adres URL sieci głównej będzie dalej funkcjonował, tak aby można było zaktualizować obsługę innych aplikacji. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Czy po przesłaniu muszę również ponownie opublikować na Arbitrum? 
@@ -80,21 +80,21 @@ Po upływie 20-minutowego okna przesyłania konieczne będzie jego potwierdzenie ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Czy publikowanie i wersjonowanie jest takie samo w L2 jak w sieci głównej Ethereum? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Czy kurator mojego subgrafu będzie się przemieszczał wraz z moim subgrafem? +### Will my Subgraph's curation move with my Subgraph? -Jeśli wybrałeś automatyczną migrację sygnału, 100% twojego własnego kuratorstwa zostanie przeniesione wraz z subgrafem do Arbitrum One. Cały sygnał kuratorski subgrafu zostanie przekonwertowany na GRT w momencie transferu, a GRT odpowiadający sygnałowi kuratorskiemu zostanie użyty do zmintowania sygnału na subgrafie L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Inni kuratorzy mogą zdecydować, czy wycofać swoją część GRT, czy też przesłać ją do L2 w celu zmintowania sygnału na tym samym subgrafie. 
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Czy mogę przenieść swój subgraf z powrotem do głównej sieci Ethereum po jego przesłaniu? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Po przesłaniu, wersja tego subgrafu w sieci głównej Ethereum zostanie wycofana. Jeśli chcesz ją przywrócić do sieci głównej, musisz ją ponownie wdrożyć i opublikować. Jednak przeniesienie z powrotem do sieci głównej Ethereum nie jest zalecane, ponieważ nagrody za indeksowanie zostaną całkowicie rozdzielone na Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Dlaczego potrzebuję bridgowanego ETH do przesłania? @@ -206,19 +206,19 @@ Aby przesłać swoje kuratorstwo, należy wykonać następujące kroki: \*Jeżeli będzie wymagane - np. w przypadku korzystania z adresu kontraktu. -### Skąd będę wiedzieć, czy subgraf, którego jestem kuratorem, został przeniesiony do L2? +### How will I know if the Subgraph I curated has moved to L2? -Podczas przeglądania strony ze szczegółami subgrafu pojawi się baner informujący, że subgraf został przeniesiony. Możesz postępować zgodnie z wyświetlanymi instrukcjami, aby przesłać swoje kuratorstwo. Informacje te można również znaleźć na stronie ze szczegółami subgrafu każdego z tych, które zostały przeniesione. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Co jeśli nie chcę przenosić swojego kuratorstwa do L2? 
-Gdy subgraf jest nieaktualny, masz możliwość wycofania swojego sygnału. Podobnie, jeśli subgraf został przeniesiony do L2, możesz wycofać swój sygnał w sieci głównej Ethereum lub wysłać sygnał do L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Skąd mam wiedzieć, że moje kuratorstwo zostało pomyślnie przesłane? Szczegóły sygnału będą dostępne za pośrednictwem Eksploratora po upływie ok. 20 minut od uruchomienia narzędzia do przesyłania L2. -### Czy mogę przesłać swoje kuratorstwo do więcej niż jednego subgrafu na raz? +### Can I transfer my curation on more than one Subgraph at a time? Obecnie nie ma opcji zbiorczego przesyłania. @@ -266,7 +266,7 @@ Przesyłanie stake'a przez narzędzie do przesyłania L2 zajmie około 20 minut. ### Czy muszę indeksować na Arbitrum, zanim przekażę swój stake? -Możesz skutecznie przesłać swój stake przed skonfigurowaniem indeksowania, lecz nie będziesz w stanie odebrać żadnych nagród na L2, dopóki nie alokujesz do subgrafów na L2, nie zindeksujesz ich i nie podasz POI. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Czy delegaci mogą przenieść swoje delegacje, zanim ja przeniosę swój indeksujący stake? diff --git a/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-guide.mdx index 2e4e4050450e..91e2f52b8525 100644 --- a/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ Graph ułatwił przeniesienie danych do L2 na Arbitrum One. 
Dla każdego uczestn Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Jak przenieść swój subgraph do Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefits of transferring your subgraphs +## Benefits of transferring your Subgraphs Społeczność i deweloperzy Graph [przygotowywali się](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) do przejścia na Arbitrum w ciągu ostatniego roku. Arbitrum, blockchain warstwy 2 lub "L2", dziedziczy bezpieczeństwo po Ethereum, ale zapewnia znacznie niższe opłaty za gaz. -When you publish or upgrade your subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your subgraphs to Arbitrum, any future updates to your subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your subgraph, increasing the rewards for Indexers on your subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. 
This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Understanding what happens with signal, your L1 subgraph and query URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -When you choose to transfer the subgraph, this will convert all of the subgraph's curation signal to GRT. This is equivalent to "deprecating" the subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the subgraph, where they will be used to mint signal on your behalf. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. 
If a subgraph owner does not transfer their subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -As soon as the subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the subgraph. However, there will be Indexers that will 1) keep serving transferred subgraphs for 24 hours, and 2) immediately start indexing the subgraph on L2. Since these Indexers already have the subgraph indexed, there should be no need to wait for the subgraph to sync, and it will be possible to query the L2 subgraph almost immediately. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries to the L2 subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. 
After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choosing your L2 wallet -When you published your subgraph on mainnet, you used a connected wallet to create the subgraph, and this wallet owns the NFT that represents this subgraph and allows you to publish updates. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -When transferring the subgraph to Arbitrum, you can choose a different wallet that will own this subgraph NFT on L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1. -If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. 
-**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the subgraph will be lost and cannot be recovered.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparing for the transfer: bridging some ETH -Transferring the subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. 
If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (e.g. 0.01 ETH) for your transaction to be approved. -## Finding the subgraph Transfer Tool +## Finding the Subgraph Transfer Tool -You can find the L2 Transfer Tool when you're looking at your subgraph's page on Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -It is also available on Explorer if you're connected with the wallet that owns a subgraph and on that subgraph's page on Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicking on the Transfer to L2 button will open the transfer tool where you can ## Step 1: Starting the transfer -Before starting the transfer, you must decide which address will own the subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
-Also please note transferring the subgraph requires having a nonzero amount of signal on the subgraph with the same account that owns the subgraph; if you haven't signaled on the subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 subgraph (see "Understanding what happens with signal, your L1 subgraph and query URLs" above for more details on what goes on behind the scenes). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). 
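The two Step 1 requirements just described (nonzero curation signal held by the owner account, and a correct L2 receiving wallet address) can be sketched as a pre-flight check. This is illustrative logic only, not part of the actual transfer tool; the function name and return shape are assumptions.

```typescript
// Illustrative pre-flight check mirroring the Step 1 requirements above.
// Not a real API of Subgraph Studio or the L2 Transfer Tool.
function canStartTransfer(
  ownerSignalGRT: number,
  receivingAddress: string
): { ok: boolean; reason?: string } {
  // The owner wallet must have signaled a nonzero amount on the Subgraph.
  if (ownerSignalGRT <= 0) {
    return { ok: false, reason: "add a bit of curation first (e.g. 1 GRT)" };
  }
  // The receiving wallet must be a valid 20-byte hex address on Arbitrum.
  if (!/^0x[0-9a-fA-F]{40}$/.test(receivingAddress)) {
    return { ok: false, reason: "receiving wallet address is not a valid address" };
  }
  return { ok: true };
}
```

Checking the receiving address carefully matters here because, as the guide stresses, a Subgraph sent to an address you do not control on Arbitrum cannot be recovered.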
-If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## Step 2: Waiting for the subgraph to get to L2 +## Step 2: Waiting for the Subgraph to get to L2 -After you start the transfer, the message that sends your L1 subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. @@ -80,7 +80,7 @@ Once this wait time is over, Arbitrum will attempt to auto-execute the transfer ## Step 3: Confirming the transfer -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the subgraph on the Arbitrum contracts. 
In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your subgraph to L2 will be pending and require a retry within 7 days. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. @@ -88,33 +88,33 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Step 4: Finishing the transfer on L2 -At this point, your subgraph and GRT have been received on Arbitrum, but the subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -This will publish the subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. 
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Step 5: Updating the query URL -Your subgraph has been successfully transferred to Arbitrum! To query the subgraph, the new URL will be : +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note that the subgraph ID on Arbitrum will be a different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the subgraph has been synced on L2. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## How to transfer your curation to Arbitrum (L2) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e.
signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. 
There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Choosing your L2 wallet @@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Withdrawing your curation on L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/pl/archived/sunrise.mdx b/website/src/pages/pl/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/pl/archived/sunrise.mdx +++ b/website/src/pages/pl/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, and did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. 
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
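Since developers can query their published Subgraphs right away, a minimal sketch of assembling such a query request may be useful. The endpoint shape comes from the gateway URL shown in the transfer-tools page above; the API key, Subgraph ID, and function name below are illustrative placeholders, not values from these docs:

```python
import json

# Gateway endpoint shape from the L2 transfer-tools page above; key and ID are placeholders.
GATEWAY_BASE = "https://arbitrum-gateway.thegraph.com/api"

def gateway_query_request(api_key: str, subgraph_id: str, query: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for a GraphQL POST against a published Subgraph."""
    url = f"{GATEWAY_BASE}/{api_key}/subgraphs/id/{subgraph_id}"
    body = json.dumps({"query": query}).encode("utf-8")
    return url, body

# `_meta { block { number } }` is a standard graph-node meta field, handy as a liveness check.
url, body = gateway_query_request("my-api-key", "my-l2-subgraph-id", "{ _meta { block { number } } }")
# Send `body` to `url` with any HTTP client as a POST with a
# `Content-Type: application/json` header; the response carries `data`/`errors`
# per the usual GraphQL convention.
```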
diff --git a/website/src/pages/pl/global.json b/website/src/pages/pl/global.json index 9b22568b5199..5c981e17bd1c 100644 --- a/website/src/pages/pl/global.json +++ b/website/src/pages/pl/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgrafy", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Description", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Description", + "liveResponse": "Live Response", + "example": "Example" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/pl/index.json b/website/src/pages/pl/index.json index 2715d757b23a..ca9ba66107b7 100644 --- a/website/src/pages/pl/index.json +++ b/website/src/pages/pl/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgrafy", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -44,7 +44,7 @@ "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Dokumenty", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -67,7 +67,7 @@ "tableHeaders": { "name": "Name", "id": "ID", - "subgraphs": "Subgraphs", + "subgraphs": "Subgrafy", "substreams": "Substreams", "firehose": "Firehose", "tokenapi": "Token API" @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. 
This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/pl/indexing/chain-integration-overview.mdx b/website/src/pages/pl/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/pl/indexing/chain-integration-overview.mdx +++ b/website/src/pages/pl/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/pl/indexing/new-chain-integration.mdx b/website/src/pages/pl/indexing/new-chain-integration.mdx index e45c4b411010..670e06c752c3 100644 --- a/website/src/pages/pl/indexing/new-chain-integration.mdx +++ b/website/src/pages/pl/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. 
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. 
(It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools make it possible to build [Substreams-powered Subgraphs](/substreams/sps/introduction/).
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/pl/indexing/overview.mdx b/website/src/pages/pl/indexing/overview.mdx index 914b04e0bf47..4a980db27f12 100644 --- a/website/src/pages/pl/indexing/overview.mdx +++ b/website/src/pages/pl/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. 
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. 
A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. 
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to `always`, then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/pl/indexing/supported-network-requirements.mdx b/website/src/pages/pl/indexing/supported-network-requirements.mdx index df15ef48d762..3d57daa55709 100644 --- a/website/src/pages/pl/indexing/supported-network-requirements.mdx +++ b/website/src/pages/pl/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/pl/indexing/tap.mdx b/website/src/pages/pl/indexing/tap.mdx index 3bab672ab211..477534d63201 100644 --- a/website/src/pages/pl/indexing/tap.mdx +++ b/website/src/pages/pl/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Overview -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/pl/indexing/tooling/graph-node.mdx b/website/src/pages/pl/indexing/tooling/graph-node.mdx index 0250f14a3d08..edde8a157fd3 100644 --- a/website/src/pages/pl/indexing/tooling/graph-node.mdx +++ b/website/src/pages/pl/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
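To make the EIP-1898 requirement above concrete, a sketch of what an `eth_call` against a specific block looks like on the wire: the block parameter is an object rather than a plain block-number string, and providers that only accept the plain form cannot serve these Subgraphs. The `to`, `data`, and `blockHash` values below are placeholders, not values from any real deployment.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_call",
  "params": [
    { "to": "0x0000000000000000000000000000000000000000", "data": "0x" },
    { "blockHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "requireCanonical": true }
  ]
}
```

EIP-1898 also allows `{ "blockNumber": "0x..." }` as the second parameter; the `blockHash` form is what makes the call fork-safe.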
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -79,8 +79,8 @@ When it is running Graph Node exposes the following ports: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ When it is running Graph Node exposes the following ports: ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. 
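Pulling the deployment-rule fragments above together, an illustrative complete `[deployment]` section of `config.toml` might look like the following. The rule patterns, shard names, and node ids here are assumptions for the sketch, not values from this repository; rules are evaluated in order and the first match wins.

```toml
[deployment]
[[deployment.rule]]
# Route high-priority Subgraphs (by name pattern) to a dedicated shard and node.
match = { name = "(vip|important)/.*" }
shard = "vip"
indexers = [ "index_node_vip_0" ]

[[deployment.rule]]
# There's no 'match', so any Subgraph matches: the catch-all rule.
shards = [ "sharda", "shardb" ]
indexers = [
  "index_node_community_0",
  "index_node_community_1"
]
```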
When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). 
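A minimal sketch of the multi-network, multi-provider `config.toml` configuration described above. The chain names, provider labels, and URLs are illustrative; the `features` array is what lets Graph Node pick a cheaper full node when a workload does not need archive or trace support.

```toml
[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
provider = [
  # Archive node with trace support, for Subgraphs that need eth_calls / callHandlers.
  { label = "mainnet-archive", url = "http://archive-node:8545", features = [ "archive", "traces" ] },
  # Cheaper full node, preferred when the workload allows it.
  { label = "mainnet-full", url = "http://full-node:8545", features = [] }
]

[chains.gnosis]
shard = "primary"
provider = [
  { label = "gnosis-0", url = "http://gnosis-node:8545", features = [ "archive" ] }
]
```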
@@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. 
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
-#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. 
If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
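A hedged sketch of the `graphman drop` invocation described above; the config path and deployment name are placeholders, and the exact flag shape should be verified against the linked graphman documentation before use on a production node.

```shell
# Delete a deployment and all of its indexed data.
# The deployment can be a Subgraph name, an IPFS hash (Qm..), or a namespace (sgdNNN).
graphman --config /etc/graph-node/config.toml drop my-org/my-subgraph
```

Because this is irreversible, it is worth confirming the target with `graphman info` (or the indexing status API) before dropping.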
diff --git a/website/src/pages/pl/indexing/tooling/graphcast.mdx b/website/src/pages/pl/indexing/tooling/graphcast.mdx index 18639dc9acc8..a790c5800c7e 100644 --- a/website/src/pages/pl/indexing/tooling/graphcast.mdx +++ b/website/src/pages/pl/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Obecnie koszt przekazywania informacji innym uczestnikom sieci jest uzależniony SDK Graphcast (Software Development Kit) umożliwia programistom budowanie "Radios", czyli aplikacji opartych na przekazywaniu plotek, które indekserzy mogą uruchamiać w celu spełnienia określonego zadania. Planujemy również stworzyć kilka takich aplikacji Radios (lub udzielać wsparcia innym programistom/zespołom, które chcą w ich budowaniu uczestniczyć) dla następujących przypadków użycia: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Przeprowadzanie aukcji i koordynacja synchronizacji warp subgrafów, substreamów oraz danych Firehose od innych indekserów. -- Raportowanie na temat aktywnej analizy zapytań, w tym wolumenów zapytań do subgrafów, wolumenów opłat itp. -- Raportowanie na temat analizy indeksowania, w tym czasu indeksowania subgrafów, kosztów gazu dla osób obsługujących zapytanie, napotkanych błędów indeksowania itp. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Raportowanie informacji na temat stosu, w tym wersji graph-node, wersji Postgres oraz wersji klienta Ethereum itp. 
### Dowiedz się więcej diff --git a/website/src/pages/pl/resources/benefits.mdx b/website/src/pages/pl/resources/benefits.mdx index d788b11bcd7a..311d327f3fff 100644 --- a/website/src/pages/pl/resources/benefits.mdx +++ b/website/src/pages/pl/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/pl/resources/glossary.mdx b/website/src/pages/pl/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/pl/resources/glossary.mdx +++ b/website/src/pages/pl/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network.
+- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network.

-- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
+- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.

 - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day.

-- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
+- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:

-  1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated.
+  1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated.

-  2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
+  2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.

-- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs.
+- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs.

 - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide.
@@ -56,28 +56,28 @@ title: Glossary

 - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.

-- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT.
+- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT.

 - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT.

 - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network.

-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.

-- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
+- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.

-- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations.
+- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.

 - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way.

-- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol.
+- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol.

 - **Graph CLI**: A command line interface tool for building and deploying to The Graph.

 - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again.

-- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake.
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake.

-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings.
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2).
diff --git a/website/src/pages/pl/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/pl/resources/migration-guides/assemblyscript-migration-guide.mdx
index 85f6903a6c69..aead2514ff51 100644
--- a/website/src/pages/pl/resources/migration-guides/assemblyscript-migration-guide.mdx
+++ b/website/src/pages/pl/resources/migration-guides/assemblyscript-migration-guide.mdx
@@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide
 ---

-Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
+Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉

-That will enable subgraph developers to use newer features of the AS language and standard library.
+That will enable Subgraph developers to use newer features of the AS language and standard library.

 This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂

-> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest.
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest.

 ## Features

@@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `

 ## How to upgrade?

-1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`:
+1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`:

 ```yaml
 ...
@@ -52,7 +52,7 @@ dataSources:
   ...
   mapping:
     ...
-    apiVersion: 0.0.6
+    apiVersion: 0.0.9
     ...
 ```

@@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null
 maybeValue.aMethod()
 ```

-If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler.
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler.

 ### Variable Shadowing

@@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing.

 ### Null Comparisons

-By doing the upgrade on your subgraph, sometimes you might get errors like these:
+By doing the upgrade on your Subgraph, sometimes you might get errors like these:

 ```typescript
 ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y)
 wrapper.n = wrapper.n + x // doesn't give compile time errors as it should
 ```

-We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it.
+We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check before it.
 ```typescript
 let wrapper = new Wrapper(y)
@@ -352,7 +352,7 @@ value.x = 10
 value.y = 'content'
 ```

-It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this:
+It will compile but break at runtime; this happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this:

 ```typescript
 var value = new Type() // initialized
diff --git a/website/src/pages/pl/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/pl/resources/migration-guides/graphql-validations-migration-guide.mdx
index 29fed533ef8c..ebed96df1002 100644
--- a/website/src/pages/pl/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/pl/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide.

 You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries.

-> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
+> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
 ## Migration CLI tool

diff --git a/website/src/pages/pl/resources/roles/curating.mdx b/website/src/pages/pl/resources/roles/curating.mdx
index 1cc05bb7b62f..a228ebfb3267 100644
--- a/website/src/pages/pl/resources/roles/curating.mdx
+++ b/website/src/pages/pl/resources/roles/curating.mdx
@@ -2,37 +2,37 @@ title: Curating
 ---

-Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index.
+Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index.

 ## What Does Signaling Mean for The Graph Network?

-Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed.
+Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know which Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed.

-Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives.
+Curators make The Graph network efficient, and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives.

-Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them.
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them.

-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.

-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.

-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with.
+If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).

-![Explorer subgraphs](/img/explorer-subgraphs.png)
+![Explorer Subgraphs](/img/explorer-subgraphs.png)

 ## How to Signal

-Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)
+Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)

-A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons.
+A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons.

-Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred.
+Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred.
 Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares.

-> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy.
+> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy.

 ## Withdrawing your GRT

@@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time.

 Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax).

-Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled.
+Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled.

-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph.
+However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph.

 ## Risks

 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics.
-2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned.
-3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
-4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version.
-   - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax.
-   - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax.
+2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned.
+3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
+4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version.
+   - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax.
+   - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax.

 ## Curation FAQs

 ### 1. What % of query fees do Curators earn?

-By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
+By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.

-### 2. How do I decide which subgraphs are high quality to signal on?
+### 2. How do I decide which Subgraphs are high quality to signal on?

-Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result:
+Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result:

-- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future
-- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on.
+- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future
+- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on.

-### 3. What’s the cost of updating a subgraph?
+### 3. What’s the cost of updating a Subgraph?

-Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas.
+Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas.

-### 4. How often can I update my subgraph?
+### 4. How often can I update my Subgraph?

-It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details.
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details.

 ### 5. Can I sell my curation shares?

diff --git a/website/src/pages/pl/resources/subgraph-studio-faq.mdx b/website/src/pages/pl/resources/subgraph-studio-faq.mdx
index 8761f7a31bf6..c2d4037bd099 100644
--- a/website/src/pages/pl/resources/subgraph-studio-faq.mdx
+++ b/website/src/pages/pl/resources/subgraph-studio-faq.mdx
@@ -4,7 +4,7 @@ title: Subgraph Studio FAQs

 ## 1. What is Subgraph Studio?

-[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys.
+[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys.

 ## 2. How do I create an API Key?

@@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th

 After creating an API Key, in the Security section, you can define the domains that can query a specific API Key.

-## 5. Can I transfer my subgraph to another owner?
+## 5. Can I transfer my Subgraph to another owner?

-Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
+Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'.

-Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred.
+Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred.

-## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use?
+## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use?

-You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
+You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.

-Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network.
+Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, as any other on the network.
diff --git a/website/src/pages/pl/resources/tokenomics.mdx b/website/src/pages/pl/resources/tokenomics.mdx
index 4a9b42ca6e0d..dac3383a28e7 100644
--- a/website/src/pages/pl/resources/tokenomics.mdx
+++ b/website/src/pages/pl/resources/tokenomics.mdx
@@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s

 ## Overview

-The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
+The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.

 ## Specifics

@@ -24,9 +24,9 @@ There are four primary network participants:

 1. Delegators - Delegate GRT to Indexers & secure the network

-2. Curators - Find the best subgraphs for Indexers
+2. Curators - Find the best Subgraphs for Indexers

-3. Developers - Build & query subgraphs
+3. Developers - Build & query Subgraphs

 4. Indexers - Backbone of blockchain data

@@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth

 ## Delegators (Passively earn GRT)

-Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.

 For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually.

@@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head

 ## Curators (Earn GRT)

-Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed.
+Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed.

-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
+Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.

-Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT.
+Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT.

 ## Developers

-Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
+Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.

-### Creating a subgraph
+### Creating a Subgraph

-Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. 
Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. 
Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/pl/sps/introduction.mdx b/website/src/pages/pl/sps/introduction.mdx index 3e59ddaa10af..8c9483eb8feb 100644 --- a/website/src/pages/pl/sps/introduction.mdx +++ b/website/src/pages/pl/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Wstęp --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. 
**Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/pl/sps/sps-faq.mdx b/website/src/pages/pl/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/pl/sps/sps-faq.mdx +++ b/website/src/pages/pl/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. 
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph.
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? 
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. 
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/pl/sps/triggers.mdx b/website/src/pages/pl/sps/triggers.mdx index 816d42cb5f12..66687aa21889 100644 --- a/website/src/pages/pl/sps/triggers.mdx +++ b/website/src/pages/pl/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). 
### Additional Resources diff --git a/website/src/pages/pl/sps/tutorial.mdx b/website/src/pages/pl/sps/tutorial.mdx index f1126226dbcb..a795de7bb32b 100644 --- a/website/src/pages/pl/sps/tutorial.mdx +++ b/website/src/pages/pl/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Jak zacząć? @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/pl/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/pl/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/pl/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events.
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional, however is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. 
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/pl/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/pl/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/pl/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
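Once the relationship is derived, comments are reachable from the post without `Post` storing an array. A query sketch against the Post/Comment schema above — the `content` field is assumed for illustration:

```graphql
{
  posts(first: 5) {
    id
    # Virtual field resolved via @derivedFrom(field: "post") on Comment
    comments {
      id
      content
    }
  }
}
```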
diff --git a/website/src/pages/pl/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/pl/subgraphs/best-practices/grafting-hotfix.mdx index d514e1633c75..674cf6b87c62 100644 --- a/website/src/pages/pl/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/pl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/pl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/pl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
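Combining both practices from this guide, an event entity can be declared immutable and keyed by `Bytes` in a single schema definition — a sketch, with field names chosen for illustration:

```graphql
type Transfer @entity(immutable: true) {
  # Bytes ID, e.g. built in the mapping with
  # event.transaction.hash.concatI32(event.logIndex.toI32())
  id: Bytes!
  from: Bytes!
  to: Bytes!
  value: BigInt!
}
```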
diff --git a/website/src/pages/pl/subgraphs/best-practices/pruning.mdx b/website/src/pages/pl/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/pl/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <Number of Blocks to Retain>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/pl/subgraphs/best-practices/timeseries.mdx b/website/src/pages/pl/subgraphs/best-practices/timeseries.mdx index cacdc44711fe..9732199531a8 100644 --- a/website/src/pages/pl/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/pl/subgraphs/billing.mdx b/website/src/pages/pl/subgraphs/billing.mdx index 511ac8067271..4dff0690a1ba 100644 --- a/website/src/pages/pl/subgraphs/billing.mdx +++ b/website/src/pages/pl/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/pl/subgraphs/developing/creating/advanced.mdx b/website/src/pages/pl/subgraphs/developing/creating/advanced.mdx index ee9918f5f254..8dbc48253034 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Overview -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build. 
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... 
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/pl/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/pl/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2ac894695fe1..cd81dc118f28 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/api.mdx index 35bb04826c98..5be2530c4d6b 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Version | Release notes | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. 
Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. 
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. 
The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
diff --git a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..65e8e3d4a8a3 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are commonly encountered during Subgraph development. They range in debug difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
diff --git a/website/src/pages/pl/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/pl/subgraphs/developing/creating/install-the-cli.mdx index d4509815a845..112f0952a1e8 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Jak stworzyć subgraf ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/pl/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/pl/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..7e0f889447c5 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
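The guidance above can be sketched as two schema variants; the entity and field names below are illustrative (they echo the `Token`/`TokenBalance` example used elsewhere on this page), and only the derived variant should be used:

```graphql
# Anti-pattern (avoid): the 'many' side stored as an array on Token.
# type Token @entity {
#   id: Bytes!
#   balances: [TokenBalance!]! # stored array; degrades indexing and query performance
# }

# Preferred: each TokenBalance stores a single Token reference,
# and Token derives its collection virtually via @derivedFrom.
type Token @entity {
  id: Bytes!
  balances: [TokenBalance!]! @derivedFrom(field: "token")
}

type TokenBalance @entity {
  id: Bytes!
  amount: Int!
  token: Token! # the stored side of the relationship
}
```

Queries against `balances` look the same in both variants; only the storage layout and its performance characteristics differ.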
#### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Languages supported diff --git a/website/src/pages/pl/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/pl/subgraphs/developing/creating/starting-your-subgraph.mdx index 4823231d9a40..180a343470b1 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +## SpecVersion Releases + +| Version | Release notes | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/pl/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/pl/subgraphs/developing/creating/subgraph-manifest.mdx index a42a50973690..78e4a3a55e7d 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Defining a Call Handler @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Supported Filters @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
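As a consolidated sketch of the filter kinds discussed in this section (handler names here are hypothetical), a data source could declare at most one block handler per filter type:

```yaml
# Illustrative fragment only; handler names are assumptions.
mapping:
  blockHandlers:
    - handler: handleInit
      filter:
        kind: once # runs a single time, before all other handlers
    - handler: handleEveryTenBlocks
      filter:
        kind: polling
        every: 10 # runs once every 10 blocks
    - handler: handleBlockWithCall
      filter:
        kind: call # runs only for blocks containing a call to the data source contract
```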
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By default, `topic0` is equal to the hash of the event signature. Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/pl/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/pl/subgraphs/developing/creating/unit-testing-framework.mdx index 2133c1d4b5c9..e56e1109bc04 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/).
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 
👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/pl/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/pl/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/pl/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/pl/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever.
However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/pl/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/pl/subgraphs/developing/deploying/using-subgraph-studio.mdx index d2023c7b4a09..c21ff6dc2358 100644 --- a/website/src/pages/pl/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/pl/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Must not use any of the following features: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/pl/subgraphs/developing/developer-faq.mdx b/website/src/pages/pl/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/pl/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/pl/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
 Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).
-### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings?
 Not currently, as mappings are written in AssemblyScript.
@@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p
 ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
-Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
+Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
 ### 10. How are templates different from data sources?
-Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
+Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address.
 Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).
-### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
 Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other.
@@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest
 If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique.
-### 15. Can I delete my subgraph?
+### 15. Can I delete my Subgraph?
-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph.
+Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph.
 ## Network Related
@@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul
 Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
-### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync
+### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync
 Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
-### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?
 Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph:
@@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... }
 ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
 ## Miscellaneous
diff --git a/website/src/pages/pl/subgraphs/developing/introduction.mdx b/website/src/pages/pl/subgraphs/developing/introduction.mdx
index 509b25654e82..92b39857a7f1 100644
--- a/website/src/pages/pl/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/pl/subgraphs/developing/introduction.mdx
@@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin
 On The Graph, you can:
-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing Subgraphs.
 ### What is GraphQL?
-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
 ### Developer Actions
-- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
-- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
-- Deploy, publish and signal your subgraphs within The Graph Network.
+- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your Subgraphs within The Graph Network.
-### What are subgraphs?
+### What are Subgraphs?
-A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
-Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
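FAQ question 21 in the hunk above points at querying a Subgraph for its latest indexed block. As a supplementary sketch (not part of the diff itself), graph-node exposes this through the special `_meta` field on every Subgraph endpoint. The gateway URL below is a placeholder with hypothetical `<api-key>`/`<subgraph-id>` segments; the snippet only builds the request body and parses a sample response, so no network call is made:

```python
import json

# Placeholder endpoint -- substitute a real API key and Subgraph ID.
GATEWAY_URL = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"


def latest_block_query() -> str:
    """Return the JSON request body for a `_meta` indexing-status query."""
    query = "{ _meta { block { number } hasIndexingErrors } }"
    return json.dumps({"query": query})


def parse_latest_block(response_body: str) -> int:
    """Extract the latest indexed block number from a GraphQL response."""
    data = json.loads(response_body)
    return data["data"]["_meta"]["block"]["number"]


# A response shaped like a typical graph-node reply (block number is made up):
sample = '{"data": {"_meta": {"block": {"number": 21000000}, "hasIndexingErrors": false}}}'
print(parse_latest_block(sample))  # -> 21000000
```

POSTing `latest_block_query()` to the endpoint with any HTTP client returns the same shape as `sample`, which is how tools poll a Subgraph's sync progress.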
diff --git a/website/src/pages/pl/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/pl/subgraphs/developing/managing/deleting-a-subgraph.mdx
index 5a4ac15e07fd..b8c2330ca49d 100644
--- a/website/src/pages/pl/subgraphs/developing/managing/deleting-a-subgraph.mdx
+++ b/website/src/pages/pl/subgraphs/developing/managing/deleting-a-subgraph.mdx
@@ -2,30 +2,30 @@ title: Deleting a Subgraph
 ---
-Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/).
+Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/).
-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
+> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
 ## Step-by-Step
-1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
+1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
 2. Click on the three-dots to the right of the "publish" button.
-3. Click on the option to "delete this subgraph":
+3. Click on the option to "delete this Subgraph":
 ![Delete-subgraph](/img/Delete-subgraph.png)
-4. Depending on the subgraph's status, you will be prompted with various options.
+4. Depending on the Subgraph's status, you will be prompted with various options.
-   - If the subgraph is not published, simply click “delete” and confirm.
-   - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
+   - If the Subgraph is not published, simply click “delete” and confirm.
+   - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
-> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner.
+> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner.
 ### Important Reminders
-- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
-- Curators will not be able to signal on the subgraph anymore.
-- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
-- Deleted subgraphs will show an error message.
+- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
+- Curators will not be able to signal on the Subgraph anymore.
+- Curators that already signaled on the Subgraph can withdraw their signal at an average share price.
+- Deleted Subgraphs will show an error message.
diff --git a/website/src/pages/pl/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/pl/subgraphs/developing/managing/transferring-a-subgraph.mdx
index 0fc6632cbc40..e80bde3fa6d2 100644
--- a/website/src/pages/pl/subgraphs/developing/managing/transferring-a-subgraph.mdx
+++ b/website/src/pages/pl/subgraphs/developing/managing/transferring-a-subgraph.mdx
@@ -2,18 +2,18 @@ title: Transferring a Subgraph
 ---
-Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
 ## Reminders
-- Whoever owns the NFT controls the subgraph.
-- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
-- You can easily move control of a subgraph to a multi-sig.
-- A community member can create a subgraph on behalf of a DAO.
+- Whoever owns the NFT controls the Subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network.
+- You can easily move control of a Subgraph to a multi-sig.
+- A community member can create a Subgraph on behalf of a DAO.
 ## View Your Subgraph as an NFT
-To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
 ```
 https://opensea.io/your-wallet-address
@@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres
 ## Step-by-Step
-To transfer ownership of a subgraph, do the following:
+To transfer ownership of a Subgraph, do the following:
 1. Use the UI built into Subgraph Studio:
 ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png)
-2. Choose the address that you would like to transfer the subgraph to:
+2. Choose the address that you would like to transfer the Subgraph to:
 ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
diff --git a/website/src/pages/pl/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/pl/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index dca943ad3152..2bc0ec5f514c 100644
--- a/website/src/pages/pl/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/pl/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -1,10 +1,11 @@
 ---
 title: Publishing a Subgraph to the Decentralized Network
+sidebarTitle: Publishing to the Decentralized Network
 ---
-Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
+Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
-When you publish a subgraph to the decentralized network, you make it available for:
+When you publish a Subgraph to the decentralized network, you make it available for:
 - [Curators](/resources/roles/curating/) to begin curating it.
 - [Indexers](/indexing/overview/) to begin indexing it.
@@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/).
 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard
 2. Click on the **Publish** button
-3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
+3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
-All published versions of an existing subgraph can:
+All published versions of an existing Subgraph can:
 - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/).
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published.
+- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published.
-### Updating metadata for a published subgraph
+### Updating metadata for a published Subgraph
-- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
+- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
 - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer.
 - It's important to note that this process will not create a new version since your deployment has not changed.
 ## Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
+As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
 1. Open the `graph-cli`.
 2. Use the following commands: `graph codegen && graph build` then `graph publish`.
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
 ![cli-ui](/img/cli-ui.png)
 ### Customizing your deployment
-You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags:
+You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags:
 ```
 USAGE
@@ -61,33 +62,33 @@ FLAGS
 ```
-## Adding signal to your subgraph
+## Adding signal to your Subgraph
-Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph.
+Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph.
-- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
+- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
-- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
 - Specific supported networks can be checked [here](/supported-networks/).
-> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers.
+> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers.
 >
-> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph.
+> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.
-![Explorer subgraphs](/img/explorer-subgraphs.png)
+![Explorer Subgraphs](/img/explorer-subgraphs.png)
-Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published.
 ![Curation Pool](/img/curate-own-subgraph-tx.png)
-Alternatively, you can add GRT signal to a published subgraph from Graph Explorer.
+Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer.
 ![Signal from Explorer](/img/signal-from-explorer.png)
diff --git a/website/src/pages/pl/subgraphs/developing/subgraphs.mdx b/website/src/pages/pl/subgraphs/developing/subgraphs.mdx
index b81dc8a2d83e..e55dffd8111f 100644
--- a/website/src/pages/pl/subgraphs/developing/subgraphs.mdx
+++ b/website/src/pages/pl/subgraphs/developing/subgraphs.mdx
@@ -4,83 +4,83 @@ title: Subgrafy
 ## What is a Subgraph?
-A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
+A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
 ### Subgraph Capabilities
 - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3.
-- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/).
-- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
+- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/).
+- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
 ## Inside a Subgraph
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
 - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema
-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).
 ## Subgraph Lifecycle
-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:
 ![Subgraph Lifecycle](/img/subgraph-lifecycle.png)
 ## Subgraph Development
-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. [Create a Subgraph](/developing/creating-a-subgraph/)
+2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/)
+3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
+4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
+5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
 ### Build locally
-Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs.
+Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs.
 ### Deploy to Subgraph Studio
-Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
+Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
-- Use its staging environment to index the deployed subgraph and make it available for review.
-- Verify that your subgraph doesn't have any indexing errors and works as expected.
+- Use its staging environment to index the deployed Subgraph and make it available for review.
+- Verify that your Subgraph doesn't have any indexing errors and works as expected.
 ### Publish to the Network
-When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
+When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
-- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers.
-- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
-- Published subgraphs have associated metadata, which provides other network participants with useful context and information.
+- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers.
+- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
+- Published Subgraphs have associated metadata, which provides other network participants with useful context and information.
 ### Add Curation Signal for Indexing
-Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
+Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
 #### What is signal?
-- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume.
+- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
+- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume.
 ### Querying & Application Development
 Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/).
-Learn more about [querying subgraphs](/subgraphs/querying/introduction/).
+Learn more about [querying Subgraphs](/subgraphs/querying/introduction/).
 ### Updating Subgraphs
-To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
+To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
-- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax.
-- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
+- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax.
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying.
 ### Deleting & Transferring Subgraphs
-If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
+If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
diff --git a/website/src/pages/pl/subgraphs/explorer.mdx b/website/src/pages/pl/subgraphs/explorer.mdx
index f29f2a3602d9..499fcede88d3 100644
--- a/website/src/pages/pl/subgraphs/explorer.mdx
+++ b/website/src/pages/pl/subgraphs/explorer.mdx
@@ -2,11 +2,11 @@ title: Graph Explorer
 ---
-Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
 ## Overview
-Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
 ## Inside Explorer
@@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi
 ### Subgraphs Page
-After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
-- Your own finished subgraphs
+- Your own finished Subgraphs
 - Subgraphs published by others
-- The exact subgraph you want (based on the date created, signal amount, or name).
+- The exact Subgraph you want (based on the date created, signal amount, or name).
 ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)
-When you click into a subgraph, you will be able to do the following:
+When you click into a Subgraph, you will be able to do the following:
 - Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
-  - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+  - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
 ![Explorer Image 2](/img/Subgraph-Details.png)
-On each subgraph’s dedicated page, you can do the following:
+On each Subgraph’s dedicated page, you can do the following:
-- Signal/Un-signal on subgraphs
+- Signal/Un-signal on Subgraphs
 - View more details such as charts, current deployment ID, and other metadata
-- Switch versions to explore past iterations of the subgraph
-- Query subgraphs via GraphQL
-- Test subgraphs in the playground
-- View the Indexers that are indexing on a certain subgraph
+- Switch versions to explore past iterations of the Subgraph
+- Query Subgraphs via GraphQL
+- Test Subgraphs in the playground
+- View the Indexers that are indexing on a certain Subgraph
 - Subgraph stats (allocations, Curators, etc)
-- View the entity who published the subgraph
+- View the entity who published the Subgraph
 ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png)
@@ -53,7 +53,7 @@ On this page, you can see the following:
 - Indexers who collected the most query fees
 - Indexers with the highest estimated APR
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
 ### Participants Page
@@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every
 ![Explorer Image 4](/img/Indexer-Pane.png)
-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.

 **Specifics**

@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s
 - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
 - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
 - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
 - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
 - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations.
 - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici

 #### 2. Curators

-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve.
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.

-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
-  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
 - The bonding curve incentivizes Curators to curate the highest quality data sources.

 In the Curator table listed below you can see:

@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ

 A few key details to note:

-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**.
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraphs Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics:

@@ -223,13 +223,13 @@ Keep in mind that this chart is horizontally scrollable, so if you scroll all th

 ### Curating Tab

-In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, signaling that they should be indexed.

 Within this tab, you’ll find an overview of:

-- All the subgraphs you're curating on with signal details
-- Share totals per subgraph
-- Query rewards per subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
 - Updated at date details

 ![Explorer Image 14](/img/Curation-Stats.png)
diff --git a/website/src/pages/pl/subgraphs/guides/_meta.js b/website/src/pages/pl/subgraphs/guides/_meta.js
index 37e18bc51651..a1bb04fb6d3f 100644
--- a/website/src/pages/pl/subgraphs/guides/_meta.js
+++ b/website/src/pages/pl/subgraphs/guides/_meta.js
@@ -1,4 +1,5 @@ export default {
+  'subgraph-composition': '',
   'subgraph-debug-forking': '',
   near: '',
   arweave: '',
diff --git a/website/src/pages/pl/subgraphs/guides/arweave.mdx b/website/src/pages/pl/subgraphs/guides/arweave.mdx
index 08e6c4257268..e59abffa383f 100644
--- a/website/src/pages/pl/subgraphs/guides/arweave.mdx
+++ b/website/src/pages/pl/subgraphs/guides/arweave.mdx
@@ -92,9 +92,9 @@ Arweave data sources support two types of handlers:

 - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner.
Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` > The source.owner can be the owner's address, or their Public Key. - +> > Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. ## Schema Definition diff --git a/website/src/pages/pl/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/pl/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..ab5076c5ebf4 100644 --- a/website/src/pages/pl/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/pl/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Overview -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,47 +18,59 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. 
Install Cana CLI
+
+Use npm to install it globally:

 ```bash
 npm install -g contract-analyzer
 ```

-Set up a blockchain for analysis:
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:

 ```bash
 cana setup
 ```

-Provide the required block explorer API and block explorer endpoint URL details when prompted.
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.

-Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.

-## 🍳 Usage
+### Steps: Using Cana CLI for Smart Contract Analysis

-### 🔹 Chain Selection
+#### 1. Select a Chain

-Cana supports multiple EVM-compatible chains.
+Cana CLI supports multiple EVM-compatible chains.

-List chains added with:
+To list the chains you've added, run this command:

 ```bash
 cana chains
 ```

-Then select a chain with:
+Then select a chain with this command:

 ```bash
 cana chains --switch
 ```

-Once a chain is selected, all subsequent contract analases will continue on that chain.
+Once a chain is selected, all subsequent contract analyses will continue on that chain.

-### 🔹 Basic Contract Analysis
+#### 2. Basic Contract Analysis

-Analyze a contract with:
+Run the following command to analyze a contract:

 ```bash
 cana analyze 0xContractAddress
 ```

 or

 ```bash
 cana -a 0xContractAddress
 ```

-This command displays essential contract information in the terminal using a clear, organized format.
+This command fetches and displays essential contract information in the terminal using a clear, organized format.

-### 🔹 Understanding Output
+#### 3.
Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/pl/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/pl/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..fb8427b04be9 --- /dev/null +++ b/website/src/pages/pl/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. 
Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Wstęp
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Improve your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot themselves use aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs
+
+## Jak zacząć?
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, improving both development and maintenance efficiency.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
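As a sketch of the schema requirements above, the `block` entity shared by the source Subgraphs could be declared like this (a hypothetical example; the field names are illustrative and not taken verbatim from the example repository):

```graphql
# Hypothetical entity for the Block Time source Subgraph (Step 1).
# Composition requires immutable entities, hence `immutable: true`.
type Block @entity(immutable: true) {
  id: Bytes! # e.g. the block hash
  number: BigInt!
  timestamp: BigInt! # when the block was mined
}
```

Using `Bytes` ids with `immutable: true` also follows the immutable-entities best practice linked in the prerequisites.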
diff --git a/website/src/pages/pl/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/pl/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..9a4b037cafbc 100644 --- a/website/src/pages/pl/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/pl/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment diff --git a/website/src/pages/pl/subgraphs/querying/best-practices.mdx b/website/src/pages/pl/subgraphs/querying/best-practices.mdx index ff5f381e2993..ab02b27cbc03 100644 --- a/website/src/pages/pl/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/pl/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. 
--- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/pl/subgraphs/querying/from-an-application.mdx b/website/src/pages/pl/subgraphs/querying/from-an-application.mdx index 56be718d0fb8..48f4b6561ac7 100644 --- a/website/src/pages/pl/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/pl/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Querying from an Application +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Krok 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Krok 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Krok 1 diff --git a/website/src/pages/pl/subgraphs/querying/graph-client/README.md b/website/src/pages/pl/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/pl/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/pl/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/pl/subgraphs/querying/graphql-api.mdx b/website/src/pages/pl/subgraphs/querying/graphql-api.mdx index b3003ece651a..e10201771989 100644 --- a/website/src/pages/pl/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/pl/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
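To illustrate the generated fields described above, assume a hypothetical `Token` entity (the entity name and its `owner` field are illustrative): Graph Node generates a singular `token` field that takes an `id`, and a plural `tokens` field that supports filtering and pagination.

```graphql
# Both fields below are auto-generated from a hypothetical `Token` entity.
query SingleToken {
  token(id: "0xabc") {
    id
    owner
  }
}

query ManyTokens {
  tokens(first: 10, orderBy: id) {
    id
    owner
  }
}
```

Operation names like `SingleToken` are optional for a lone anonymous query; they are only needed here because two operations share one document.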
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. 
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. 
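Putting the metadata fields and the block constraint above together, a pinned metadata query might look like this sketch (the block number is a placeholder):

```graphql
{
  _meta(block: { number: 123456 }) {
    deployment
    hasIndexingErrors
    block {
      hash
      number
      timestamp
    }
  }
}
```

Dropping the `block` argument returns metadata as of the latest indexed block.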
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/pl/subgraphs/querying/introduction.mdx b/website/src/pages/pl/subgraphs/querying/introduction.mdx index e66fe896db2d..fc96956cda46 100644 --- a/website/src/pages/pl/subgraphs/querying/introduction.mdx +++ b/website/src/pages/pl/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/pl/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/pl/subgraphs/querying/managing-api-keys.mdx index 6964b1a7ad9b..aed3d10422e1 100644 --- a/website/src/pages/pl/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/pl/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/pl/subgraphs/querying/python.mdx b/website/src/pages/pl/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/pl/subgraphs/querying/python.mdx +++ b/website/src/pages/pl/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/pl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/pl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/pl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/pl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this results in the need to update the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/pl/subgraphs/quick-start.mdx b/website/src/pages/pl/subgraphs/quick-start.mdx index 6db0b1437e5e..62c69977491f 100644 --- a/website/src/pages/pl/subgraphs/quick-start.mdx +++ b/website/src/pages/pl/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: ' Na start' --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-See the following screenshot for an example for what to expect when initializing your subgraph: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. 
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/pl/substreams/developing/dev-container.mdx b/website/src/pages/pl/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/pl/substreams/developing/dev-container.mdx +++ b/website/src/pages/pl/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/pl/substreams/developing/sinks.mdx b/website/src/pages/pl/substreams/developing/sinks.mdx index 5f6f9de21326..48c246201e8f 100644 --- a/website/src/pages/pl/substreams/developing/sinks.mdx +++ b/website/src/pages/pl/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/pl/substreams/developing/solana/account-changes.mdx b/website/src/pages/pl/substreams/developing/solana/account-changes.mdx index b31eafd2b064..20a3fe7373e5 100644 --- a/website/src/pages/pl/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/pl/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/pl/substreams/developing/solana/transactions.mdx b/website/src/pages/pl/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/pl/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/pl/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. 
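The entities placed in `schema.graphql` in the Subgraph step above might look like this minimal sketch — the `MyTransfer` type and its fields are hypothetical:

```graphql
type MyTransfer @entity {
  id: ID!
  from: Bytes!
  to: Bytes!
  amount: BigInt!
}
```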
### SQL diff --git a/website/src/pages/pl/substreams/introduction.mdx b/website/src/pages/pl/substreams/introduction.mdx index 84fe81909fc8..3f22bea5db7a 100644 --- a/website/src/pages/pl/substreams/introduction.mdx +++ b/website/src/pages/pl/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/pl/substreams/publishing.mdx b/website/src/pages/pl/substreams/publishing.mdx index 3d1a3863c882..3d93e6f9376f 100644 --- a/website/src/pages/pl/substreams/publishing.mdx +++ b/website/src/pages/pl/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! 
You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/pl/supported-networks.mdx b/website/src/pages/pl/supported-networks.mdx index b5e43f4650bc..c49e9c3853b2 100644 --- a/website/src/pages/pl/supported-networks.mdx +++ b/website/src/pages/pl/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. 
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/pl/token-api/_meta-titles.json b/website/src/pages/pl/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/pl/token-api/_meta-titles.json +++ b/website/src/pages/pl/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/pl/token-api/_meta.js b/website/src/pages/pl/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/pl/token-api/_meta.js +++ b/website/src/pages/pl/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/pl/token-api/faq.mdx b/website/src/pages/pl/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/pl/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? 
+ +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. 
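+As a minimal sketch of a correctly authorized request, pulling together the conventions this FAQ describes (the "Bearer" prefix, the `/balances/evm/{address}` path, and the optional `network_id`, `limit`, and `page` parameters) — the token and wallet address below are placeholders, not working values:

```python
import urllib.request

# Placeholder JWT from The Graph Market -- use the generated access token,
# not the raw API key.
ACCESS_TOKEN = "eyJhbGciOi...example"

# Hypothetical wallet address, for illustration only.
address = "0x0000000000000000000000000000000000000000"

# Endpoint path and query parameters as described in this FAQ:
# /balances/evm/{address}, optional network_id, limit (max 500), page (1-indexed).
url = (
    "https://token-api.thegraph.com/balances/evm/" + address
    + "?network_id=mainnet&limit=50&page=1"
)

req = urllib.request.Request(url)
req.add_header("Authorization", "Bearer " + ACCESS_TOKEN)  # "Bearer " prefix is required
req.add_header("Accept", "application/json")  # recommended, though JSON is the default

# urllib.request.urlopen(req) would send the GET; it is omitted here so the
# sketch stays runnable without network access or a real token.
```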
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error.
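+A short parsing sketch of the response conventions covered in this FAQ (the top-level `data` wrapper, the empty-array case, and string-encoded amounts); the sample payload and the `amount` field name are assumptions for illustration, not the actual response schema:

```python
import json

# Sample payload shaped as this FAQ describes: results wrapped in a top-level
# "data" array, with large numeric values encoded as strings. The "amount"
# field name and the payload itself are illustrative assumptions.
raw = '{"data": [{"amount": "123456789012345678901", "decimals": 18}]}'

response = json.loads(raw)
results = response["data"]  # always index into the "data" wrapper

if not results:
    # An empty list means "no matching records", not an error.
    print("no records found")

for item in results:
    # Parse string amounts into arbitrary-precision ints to avoid float
    # precision loss, then use "decimals" to derive a human-readable value.
    amount = int(item["amount"])
    human = amount / 10 ** item["decimals"]
    print(amount, human)
```

+On an empty `data` array the loop simply does not run, matching the "no records, not an error" behavior.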
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/pl/token-api/mcp/claude.mdx b/website/src/pages/pl/token-api/mcp/claude.mdx index 0da8f2be031d..12a036b6fc24 100644 --- a/website/src/pages/pl/token-api/mcp/claude.mdx +++ b/website/src/pages/pl/token-api/mcp/claude.mdx @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/pl/token-api/mcp/cline.mdx b/website/src/pages/pl/token-api/mcp/cline.mdx index ab54c0c8f6f0..ef98e45939fe 100644 --- a/website/src/pages/pl/token-api/mcp/cline.mdx +++ b/website/src/pages/pl/token-api/mcp/cline.mdx @@ -10,7 +10,7 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
-![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png)
+![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png)
## Configuration
diff --git a/website/src/pages/pl/token-api/quick-start.mdx b/website/src/pages/pl/token-api/quick-start.mdx
index 4653c3d41ac6..05884b06caab 100644
--- a/website/src/pages/pl/token-api/quick-start.mdx
+++ b/website/src/pages/pl/token-api/quick-start.mdx
@@ -1,6 +1,6 @@
---
title: Token API Quick Start
-sidebarTitle: Quick Start
+sidebarTitle: 'Na start'
---
![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg)
diff --git a/website/src/pages/pt/about.mdx b/website/src/pages/pt/about.mdx
index 6603713efd91..22d7582d014d 100644
--- a/website/src/pages/pt/about.mdx
+++ b/website/src/pages/pt/about.mdx
@@ -30,25 +30,25 @@ Propriedades de blockchain, como finalidade, reorganizações de chain, ou bloco
## The Graph Providencia uma Solução
-O The Graph resolve este desafio com um protocolo descentralizado que indexa e permite queries eficientes e de alto desempenho de dados de blockchain. Estas APIs ("subgraphs" indexados) podem então ser consultados num query com uma API GraphQL padrão.
+The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
Hoje, há um protocolo descentralizado apoiado pela implementação de código aberto do [Graph Node](https://github.com/graphprotocol/graph-node) que facilita este processo.
### Como o The Graph Funciona
-Indexar dados em blockchain é um processo difícil, mas facilitado pelo The Graph. O The Graph aprende como indexar dados no Ethereum com o uso de subgraphs. Subgraphs são APIs personalizadas construídas com dados de blockchain, que extraem, processam e armazenam dados de uma blockchain para poderem ser consultadas suavemente via GraphQL.
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL.
#### Especificações
-- O The Graph usa descrições de subgraph, conhecidas como "manifests de subgraph" dentro do subgraph.
+- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-- A descrição do subgraph contorna os contratos inteligentes de interesse para o mesmo, os eventos dentro destes contratos para focar, e como mapear dados de evento para dados que o The Graph armazenará no seu banco de dados.
+- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-- Ao criar um subgraph, primeiro é necessário escrever um manifest de subgraph.
+- When creating a Subgraph, you need to write a Subgraph manifest.
-- Após escrever o `subgraph manifest`, é possível usar o Graph CLI para armazenar a definição no IPFS e instruir o Indexador para começar a indexar dados para o subgraph.
+- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-O diagrama abaixo dá informações mais detalhadas sobre o fluxo de dados quando um manifest de subgraph for lançado com transações no Ethereum. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![Um gráfico que explica como o The Graph utiliza Graph Nodes para servir queries para consumidores de dados](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ O fluxo segue estes passos: 1. Um dApp adiciona dados à Ethereum através de uma transação em contrato inteligente. 2. O contrato inteligente emite um ou mais eventos enquanto processa a transação. -3. O Graph Node escaneia continuamente a Ethereum por novos blocos e os dados que podem conter para o seu subgraph. -4. O Graph Node encontra eventos na Ethereum para o seu subgraph nestes blocos e executa os handlers de mapeamento que forneceu. O mapeamento é um módulo WASM que cria ou atualiza as entidades de dados que o Graph Node armazena em resposta a eventos na Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. O dApp consulta o Graph Node para dados indexados da blockchain, através do [endpoint GraphQL](https://graphql.org/learn/) do node. O Graph Node, por sua vez, traduz os queries GraphQL em queries para o seu armazenamento subjacente de dados para poder retirar estes dados, com o uso das capacidades de indexação do armazenamento. O dApp exibe estes dados em uma interface rica para utilizadores finais, que eles usam para emitir novas transações na Ethereum. E o ciclo se repete. ## Próximos Passos -As seguintes secções providenciam um olhar mais íntimo nos subgraphs, na sua publicação e no query de dados. 
+The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Antes de escrever o seu próprio subgraph, é recomendado explorar o [Graph Explorer](https://thegraph.com/explorer) e revir alguns dos subgraphs já publicados. A página de todo subgraph inclui um ambiente de teste em GraphQL que lhe permite consultar os dados dele. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/pt/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/pt/archived/arbitrum/arbitrum-faq.mdx index 0c1ba5b192ef..7932ad2508bd 100644 --- a/website/src/pages/pt/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/pt/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Herdar segurança do Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. 
Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. A comunidade do The Graph prosseguiu com o Arbitrum no ano passado, após o resultado da discussão [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ Para aproveitar o The Graph na L2, use este switcher de dropdown para alternar e ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Como um programador, consumidor de dados, Indexador, Curador ou Delegante, o que devo fazer agora? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ Todos os contratos inteligentes já foram devidamente [auditados](https://github Tudo foi testado exaustivamente, e já está pronto um plano de contingência para garantir uma transição segura e suave. Mais detalhes [aqui](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? 
diff --git a/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-faq.mdx
index d542d643adc4..a821b0e0b588 100644
--- a/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-faq.mdx
+++ b/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-faq.mdx
@@ -24,9 +24,9 @@ A exceção é com carteiras de contrato inteligente como multisigs: estas são
As Ferramentas de Transferência para L2 usam o mecanismo nativo do Arbitrum para enviar mensagens da L1 à L2. Este mecanismo é chamado de "retryable ticket" (bilhete retentável) e é usado por todos os bridges de tokens nativos, incluindo o bridge de GRT do Arbitrum. Leia mais na [documentação do Arbitrum](https://docs.arbitrum.io/arbos/l1-to-l2-messaging).
-Ao transferir os seus ativos (subgraph, stake, delegação ou curadoria) à L2, é enviada uma mensagem através do bridge de GRT do Arbitrum, que cria um retryable ticket na L2. A ferramenta de transferência inclui um valor de ETH na transação, que é usado para pagar 1) pela criação do ticket e 2) pelo gas da execução do ticket na L2. Porém, devido à possível variação dos preços de gas no tempo até a execução do ticket na L2, esta tentativa de execução automática pode falhar. Se isto acontecer, o bridge do Arbitrum tentará manter o retryable ticket ativo por até 7 dias; assim, qualquer pessoa pode tentar novamente o "resgate" do ticket (que requer uma carteira com algum ETH em bridge ao Arbitrum).
+When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge, which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, which is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails.
When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum).
-Este é o passo de "Confirmação" em todas as ferramentas de transferência. Ele será executado automaticamente e com êxito na maioria dos casos, mas é importante verificar que ele foi executado. Se não tiver êxito na primeira execução e nem em quaisquer das novas tentativas dentro de 7 dias, o bridge do Arbitrum descartará o ticket, e os seus ativos (subgraph, stake, delegação ou curadoria) serão perdidos sem volta. Os programadores-núcleo do The Graph têm um sistema de monitoria para detectar estas situações e tentar resgatar os tickets antes que seja tarde, mas no final, a responsabilidade é sua de que a sua transferência complete a tempo. Caso haja problemas ao confirmar a sua transação, contacte-nos com [este formulário](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) e o núcleo tentará lhe ajudar.
+This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.
### Eu comecei a transferir a minha delegação/meu stake/minha curadoria e não tenho certeza se ela chegou à L2, como posso ter certeza de que a mesma foi transferida corretamente?
@@ -36,43 +36,43 @@ Se tiver o hash de transação da L1 (confira as transações recentes na sua ca
## Transferência de Subgraph
-### Como transfiro o meu subgraph?
+### How do I transfer my Subgraph?
-Para transferir o seu subgraph, complete os seguintes passos:
+To transfer your Subgraph, you will need to complete the following steps:
1. Inicie a transferência na mainnet Ethereum
2. Espere 20 minutos pela confirmação
-3. Confirme a transferência do subgraph no Arbitrum\*
+3. Confirm Subgraph transfer on Arbitrum\*
-4. Termine de editar o subgraph no Arbitrum
+4. Finish publishing Subgraph on Arbitrum
5. Atualize o URL de Query (recomendado)
-\*Você deve confirmar a transferência dentro de 7 dias, ou o seu subgraph poderá ser perdido. Na maioria dos casos, este passo será executado automaticamente, mas pode ser necessário confirmar manualmente caso haja um surto no preço de gas no Arbitrum. Caso haja quaisquer dificuldades neste processo, contacte o suporte em support@thegraph.com ou no [Discord](https://discord.gg/graphprotocol).
+\*Note that you must confirm the transfer within 7 days; otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
### De onde devo iniciar a minha transferência?
-Do [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) ou de qualquer página de detalhes de subgraph. Clique no botão "Transfer Subgraph" (Transferir Subgraph) na página de detalhes de subgraph para começar a transferência.
+You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer.
-### Quanto tempo devo esperar até que o meu subgraph seja transferido
+### How long do I need to wait until my Subgraph is transferred?
A transferência leva cerca de 20 minutos. O bridge do Arbitrum trabalha em segundo plano para completar a transferência automaticamente. Às vezes, os custos de gas podem subir demais e a transação deverá ser confirmada novamente.
-### O meu subgraph ainda poderá ser descoberto após ser transferido para a L2?
+### Will my Subgraph still be discoverable after I transfer it to L2?
-O seu subgraph só será descobrível na rede em qual foi editado. Por exemplo, se o seu subgraph estiver no Arbitrum One, então só poderá encontrá-lo no Explorer do Arbitrum One e não no Ethereum. Garanta que o Arbitrum One está selecionado no seletor de rede no topo da página para garantir que está na rede correta.  Após a transferência, o subgraph na L1 aparecerá como depreciado.
+Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.
-### O meu subgraph precisa ser editado para poder ser transferido?
+### Does my Subgraph need to be published to transfer it?
-Para aproveitar a ferramenta de transferência de subgraph, o seu subgraph já deve estar editado na mainnet Ethereum e deve ter algum sinal de curadoria em posse da carteira titular do subgraph.
Se o seu subgraph não estiver editado, edite-o diretamente no Arbitrum One - as taxas de gas associadas serão bem menores. Se quiser transferir um subgraph editado, mas a conta titular não curou qualquer sinal nele, você pode sinalizar uma quantidade pequena (por ex. 1 GRT) daquela conta; escolha o sinal "migração automática". +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### O que acontece com a versão da mainnet Ethereum do meu subgraph após eu transferi-lo ao Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Após transferir o seu subgraph ao Arbitrum, a versão na mainnet Ethereum será depreciada. Recomendamos que atualize o seu URL de query em dentro de 28 horas. Porém, há um período que mantém o seu URL na mainnet em funcionamento, para que qualquer apoio de dapp de terceiros seja atualizado. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Após a transferência, preciso reeditar no Arbitrum? @@ -80,21 +80,21 @@ Após a janela de transferência de 20 minutos, confirme a transferência com um ### O meu endpoint estará fora do ar durante a reedição? 
-É improvável, mas é possível passar por um breve desligamento a depender de quais Indexadores apoiam o subgraph na L1, e de se eles continuarão a indexá-lo até o subgraph ter apoio total na L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Editar e versionar na L2 funcionam da mesma forma que na mainnet Ethereum? -Sim. Selcione o Arbitrum One como a sua rede editada ao editar no Subgraph Studio. No Studio, o último endpoint disponível apontará à versão atualizada mais recente do subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### A curadoria do meu subgraph se mudará com o meu subgraph? +### Will my Subgraph's curation move with my Subgraph? -Caso tenha escolhido o sinal automigratório, 100% da sua própria curadoria se mudará ao Arbitrum One junto com o seu subgraph. Todo o sinal de curadoria do subgraph será convertido em GRT na hora da transferência, e o GRT correspondente ao seu sinal de curadoria será usado para mintar sinais no subgraph na L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Outros Curadores podem escolher se querem sacar a sua fração de GRT, ou também transferi-la à L2 para mintar sinais no mesmo subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Posso devolver o meu subgraph à mainnet Ethereum após a transferência? 
+### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Após a transferência, a versão da mainnet Ethereum deste subgraph será depreciada. Se quiser devolvê-lo à mainnet, será necessário relançá-lo e editá-lo de volta à mainnet. Porém, transferir de volta à mainnet do Ethereum é muito arriscado, já que as recompensas de indexação logo serão distribuidas apenas no Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Por que preciso de ETH em bridge para completar a minha transferência? @@ -206,19 +206,19 @@ Para transferir a sua curadoria, complete os seguintes passos: \*Se necessário - por ex. se você usar um endereço de contrato. -### Como saberei se o subgraph que eu curei foi transferido para a L2? +### How will I know if the Subgraph I curated has moved to L2? -Ao visualizar a página de detalhes do subgraph, um banner notificará-lhe que este subgraph foi transferido. Siga o prompt para transferir a sua curadoria. Esta informação também aparece na página de detalhes de qualquer subgraph transferido. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### E se eu não quiser mudar a minha curadoria para a L2? -Quando um subgraph é depreciado, há a opção de retirar o seu sinal. Desta forma, se um subgraph for movido à L2, dá para escolher retirar o seu sinal na mainnet Ethereum ou enviar o sinal à L2. +When a Subgraph is deprecated you have the option to withdraw your signal. 
Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Como sei se a minha curadoria foi transferida com êxito? Os detalhes do sinal serão acessíveis através do Explorer cerca de 20 minutos após a ativação da ferramenta de transferência à L2. -### Posso transferir a minha curadoria em vários subgraphs de uma vez? +### Can I transfer my curation on more than one Subgraph at a time? Não há opção de transferências em conjunto no momento. @@ -266,7 +266,7 @@ A ferramenta de transferência à L2 finalizará a transferência do seu stake e ### Devo indexar no Arbitrum antes de transferir o meu stake? -Você pode transferir o seu stake antes de preparar a indexação, mas não terá como resgatar recompensas na L2 até alocar para subgraphs na L2, indexá-los, e apresentar POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Os Delegadores podem mudar a sua delegação antes que eu mude o meu stake de indexação? diff --git a/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-guide.mdx index a6a744aeeb19..320c947532a4 100644 --- a/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ O The Graph facilitou muito o processo de se mudar para a L2 no Arbitrum One. Pa Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
-## Como transferir o seu subgraph ao Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefícios de transferir os seus subgraphs +## Benefits of transferring your Subgraphs A comunidade e os programadores centrais do The Graph andaram [preparando](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) as suas mudanças ao Arbitrum ao longo do último ano. O Arbitrum, uma blockchain layer 2, ou "L2", herda a segurança do Ethereum, mas providencia taxas de gas muito menores. -Ao publicar ou atualizar o seu subgraph na Graph Network, você interaje com contratos inteligentes no protocolo, e isto exige o pagamento de gas usando ETH. Ao mover os seus subgraphs ao Arbitrum, quaisquer atualizações futuras ao seu subgraph exigirão taxas de gas muito menores. As taxas menores, e o fato de que bonding curves de curadoria na L2 são planas, também facilitarão a curadoria no seu subgraph para outros Curadores, a fim de aumentar as recompensas para Indexadores no seu subgraph. Este ambiente de custo reduzido também barateia a indexação e o serviço de Indexadores no seu subgraph. As recompensas de indexação também aumentarão no Arbitrum e decairão na mainnet do Ethereum nos próximos meses, então mais e mais Indexadores transferirão o seu stake e preparando as suas operações na L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. 
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Como entender o que acontece com o sinal, o seu subgraph na L1 e URLs de query +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferir um subgraph ao Arbitrum usa a bridge de GRT do Arbitrum, que por sua vez usa a bridge nativa do Arbitrum, para enviar o subgraph à L2. A "transferência" depreciará o subgraph na mainnet e enviará a informação para recriar o subgraph na L2 com o uso da bridge. Ele também incluirá o GRT sinalizado do dono do subgraph, que deve ser maior que zero para que a bridge aceite a transferência. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Ao escolher transferir o subgraph, isto converterá todo o sinal de curadoria do subgraph em GRT. Isto é equivalente à "depreciação" do subgraph na mainnet. O GRT correspondente à sua curadoria será enviado à L2 junto com o subgraph, onde ele será usado para mintar sinais em seu nome. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Outros Curadores podem escolher retirar a sua fração de GRT, ou também transferi-la à L2 para mintar sinais no mesmo subgraph. 
Se um dono de subgraph não transferir o seu subgraph à L2 e depreciá-lo manualmente através de uma chamada de contrato, os Curadores serão notificados, e poderão retirar a sua curadoria. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Assim que o subgraph for transferido, como toda curadoria é convertida em GRT, Indexadores não receberão mais recompensas por indexar o subgraph. Porém, haverão Indexadores que 1) continuarão a servir subgraphs transferidos por 24 horas, e 2) começarão imediatamente a indexar o subgraph na L2. Como estes Indexadores já têm o subgraph indexado, não deve haver necessidade de esperar que o subgraph se sincronize, e será possível consultar o subgraph na L2 quase que imediatamente. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries no subgraph na L2 deverão ser feitas para uma URL diferente (or 'arbitrum-gateway.thegraph'), mas a URL na L1 continuará a trabalhar por no mínimo 48 horas. Após isto, o gateway na L1 encaminhará queries ao gateway na L2 (por um certo tempo), mas isto adicionará latência, então é recomendado trocar todas as suas queries para a nova URL o mais rápido possível. 
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Como escolher a sua carteira na L2 -Ao publicar o seu subgraph na mainnet, você usou uma carteira conectada para criar o subgraph, e esta carteira é dona do NFT que representa este subgraph e lhe permite publicar atualizações. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Ao transferir o subgraph ao Arbitrum, você pode escolher uma carteira diferente que será dona deste NFT de subgraph na L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Se você usar uma carteira "regular" como o MetaMask (uma Conta de Titularidade Externa, ou EOA, por ex. uma carteira que não é um contrato inteligente), então isto é opcional, e é recomendado manter o mesmo endereço titular que o da L1. -Se você usar uma carteira de contrato inteligente, como uma multisig (por ex. uma Safe), então escolher um endereço de carteira diferente na L2 é obrigatório, pois as chances são altas desta conta só existir na mainnet, e você não poderá fazer transações no Arbitrum enquanto usar esta carteira. Se quiser continuar a usar uma carteira de contrato inteligente ou multisig, crie uma nova carteira no Arbitrum e use o seu endereço lá como o dono do seu subgraph na L2. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. 
If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**É muito importante usar um endereço de carteira que você controle, e possa fazer transações no Arbitrum. Caso contrário, o subgraph será perdido e não poderá ser recuperado.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparações para a transferência: bridging de ETH -Transferir o subgraph envolve o envio de uma transação através da bridge, e depois, a execução de outra transação no Arbitrum. A primeira transação usa ETH na mainnet, e inclui um pouco de ETH para pagar por gas quando a mensagem for recebida na L2. Porém, se este gas for insuficiente, você deverá tentar executar a transação novamente e pagar o gas diretamente na L2 (este é o terceiro passo: "Confirmação da transação" abaixo). Este passo **deve ser executado até 7 dias depois do início da transação**. Além disto, a segunda transação ("4º passo: Finalização da transferência na L2") será feita diretamente no Arbitrum. Por estas razões, você precisará de um pouco de ETH em uma carteira Arbitrum. Se usar uma multisig ou uma conta de contrato inteligente, o ETH deverá estar na carteira regular (EOA) que você usar para executar as transações, e não na própria carteira multisig. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. 
Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. Você pode comprar ETH em algumas exchanges e retirá-la diretamente no Arbitrum, ou você pode usar a bridge do Arbitrum para enviar ETH de uma carteira na mainnet para a L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Como as taxas de gas no Arbitrum são menores, você só deve precisar de uma quantidade pequena. É recomendado começar em um limite baixo (por ex. 0.01 ETH) para que a sua transação seja aprovada. -## Como encontrar a Ferramenta de Transferência de Subgraphs +## Finding the Subgraph Transfer Tool -A Ferramenta de Transferência para L2 pode ser encontrada ao olhar a página do seu subgraph no Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![ferramenta de transferência](/img/L2-transfer-tool1.png) -Ela também está disponível no Explorer se você se conectar com a carteira dona de um subgraph, e na página daquele subgraph no Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferência para L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicar no botão Transfer to L2 (Transferir para L2) abrirá a ferramenta de tra ## 1º Passo: Como começar a transferência -Antes de começar a transferência, decida qual endereço será dono do subgraph na L2 (ver "Como escolher a sua carteira na L2" acima), e é altamente recomendado ter um pouco de ETH para o gas já em bridge no Arbitrum (ver "Preparações para a transferência: bridging de ETH" acima). 
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Note também que transferir o subgraph exige ter uma quantidade de sinal no subgraph maior que zero, com a mesma conta dona do subgraph; se você não tiver sinalizado no subgraph, você deverá adicionar um pouco de curadoria (uma adição pequena, como 1 GRT, seria o suficiente). +Also please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -Após abrir a Ferramenta de Transferências, você poderá colocar o endereço da carteira na L2 no campo "Receiving wallet address" (endereço da carteira destinatária) - **certifique-se que inseriu o endereço correto**. Clicar em Transfer Subgraph (transferir subgraph) resultará em um pedido para executar a transação na sua carteira (note que um valor em ETH é incluído para pagar pelo gas na L2); isto iniciará a transferência e depreciará o seu subgraph na L1 (veja "Como entender o que acontece com o sinal, o seu subgraph na L1 e URLs de query" acima para mais detalhes sobre o que acontece nos bastidores). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). 
-Ao executar este passo, **garanta que executará o 3º passo em menos de 7 dias, ou o subgraph e o seu GRT de sinalização serão perdidos.** Isto se deve à maneira de como as mensagens L1-L2 funcionam no Arbitrum: mensagens enviadas através da bridge são "bilhetes de tentativas extras" que devem ser executadas dentro de 7 dias, e a execução inicial pode exigir outra tentativa se houver um surto no preço de gas no Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Comece a transferência à L2](/img/startTransferL2.png) -## 2º Passo: A espera do caminho do subgraph até a L2 +## Step 2: Waiting for the Subgraph to get to L2 -Após iniciar a transferência, a mensagem que envia o seu subgraph da L1 para a L2 deve propagar pela bridge do Arbitrum. Isto leva cerca de 20 minutos (a bridge espera que o bloco da mainnet que contém a transação esteja "seguro" de reorganizações potenciais da chain). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Quando esta espera acabar, o Arbitrum tentará executar a transferência automaticamente nos contratos na L2. @@ -80,7 +80,7 @@ Quando esta espera acabar, o Arbitrum tentará executar a transferência automat ## 3º Passo: Como confirmar a transferência -Geralmente, este passo será executado automaticamente, já que o gas na L2 incluído no primeiro passo deverá ser suficiente para executar a transação que recebe o subgraph nos contratos do Arbitrum. 
Porém, em alguns casos, é possível que um surto nos preços de gas do Arbitrum faça com que esta execução automática falhe. Neste caso, o "bilhete" que envia o seu subgraph à L2 estará pendente e exigirá outra tentativa dentro de 7 dias. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. Se este for o caso, você deverá se conectar com uma carteira L2 que tenha um pouco de ETH no Arbitrum, trocar a rede da sua carteira para Arbitrum, e clicar em "Confirmar Transferência" para tentar a transação novamente. @@ -88,33 +88,33 @@ Se este for o caso, você deverá se conectar com uma carteira L2 que tenha um p ## 4º Passo: A finalização da transferência à L2 -Até aqui, o seu subgraph e GRT já foram recebidos no Arbitrum, mas o subgraph ainda não foi publicado. Você deverá se conectar com a carteira L2 que escolheu como a carteira destinatária, trocar a rede da carteira para Arbitrum, e clicar em "Publish Subgraph" (Publicar Subgraph). +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publicação do subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Espera para a publicação do subgraph](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Isto publicará o subgraph de forma que Indexadores operantes no Arbitrum comecem a servi-lo. 
Ele também mintará sinais de curadoria com o GRT que foi transferido da L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## 5º passo: Atualização da URL de query -Parabéns, o seu subgraph foi transferido ao Arbitrum com êxito! Para consultar o subgraph, a nova URL será: +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note que a ID do subgraph no Arbitrum será diferente daquela que você tinha na mainnet, mas você pode sempre encontrá-la no Explorer ou no Studio. Como mencionado acima (ver "Como entender o que acontece com o sinal, o seu subgraph na L1 e URLs de query"), a URL antiga na L1 será apoiada por um período curto, mas você deve trocar as suas queries para o novo endereço assim que o subgraph for sincronizado na L2. +Note that the Subgraph ID on Arbitrum will be different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## Como transferir a sua curadoria ao Arbitrum (L2) -## Como entender o que acontece com a curadoria ao transferir um subgraph à L2 +## Understanding what happens to curation on Subgraph transfers to L2 -Quando o dono de um subgraph transfere um subgraph ao Arbitrum, todo o sinal do subgraph é convertido em GRT ao mesmo tempo. Isto se aplica a sinais "migrados automaticamente", por ex. sinais que não forem específicos a uma versão de um subgraph ou publicação, mas que segue a versão mais recente de um subgraph. 
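The new query URL above follows a fixed pattern, so it can be assembled programmatically when updating a dapp. A minimal sketch, assuming placeholder values for the API key and L2 subgraph ID (both are hypothetical, not real identifiers):

```python
# Minimal sketch: build the new Arbitrum gateway query URL from the
# format documented above. Both arguments are hypothetical placeholders.
def l2_query_url(api_key: str, l2_subgraph_id: str) -> str:
    return (
        "https://arbitrum-gateway.thegraph.com"
        f"/api/{api_key}/subgraphs/id/{l2_subgraph_id}"
    )

# e.g. l2_query_url("my-api-key", "my-l2-subgraph-id")
```

Swapping this helper's output in for the old L1 URL is all a client needs to change, since the query body itself is unaffected by the transfer.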
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Esta conversão do sinal ao GRT é a mesma que aconteceria se o dono de um subgraph depreciasse o subgraph na L1. Quando o subgraph é depreciado ou transferido, todo o sinal de curadoria é "queimado" em simultâneo (com o uso da bonding curve de curadoria) e o GRT resultante fica em posse do contrato inteligente GNS (sendo o contrato que cuida de atualizações de subgraph e sinais migrados automaticamente). Cada Curador naquele subgraph então tem um direito àquele GRT, proporcional à quantidade de ações que tinham no subgraph. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Uma fração deste GRT correspondente ao dono do subgraph é enviado à L2 junto com o subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -Neste ponto, o GRT curado não acumulará mais taxas de query, então Curadores podem escolher sacar o seu GRT ou transferi-lo ao mesmo subgraph na L2, onde ele pode ser usado para mintar novos sinais de curadoria. Não há pressa para fazer isto, já que o GRT pode ser possuído por tempo indeterminado, e todos conseguem uma quantidade proporcional às suas ações, irrespectivo de quando a fizerem. 
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Como escolher a sua carteira na L2 @@ -130,9 +130,9 @@ Se você usar uma carteira de contrato inteligente, como uma multisig (por ex. u Antes de iniciar a transferência, você deve decidir qual endereço será titular da curadoria na L2 (ver "Como escolher a sua carteira na L2" acima), e é recomendado ter um pouco de ETH para o gas já em bridge no Arbitrum, caso seja necessário tentar a execução da mensagem na L2 novamente. Você pode comprar ETH em algumas exchanges e retirá-lo diretamente no Arbitrum, ou você pode usar a bridge do Arbitrum para enviar ETH de uma carteira na mainnet à L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - como as taxas de gas no Arbitrum são menores, você só deve precisar de uma quantidade pequena; por ex. 0.01 ETH deve ser mais que o suficiente. -Se um subgraph para o qual você cura já foi transferido para a L2, você verá uma mensagem no Explorer lhe dizendo que você curará para um subgraph transferido. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -Ao olhar a página do subgraph, você pode escolher retirar ou transferir a curadoria. Clicar em "Transfer Signal to Arbitrum" (transferir sinal ao Arbitrum) abrirá a ferramenta de transferência. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. 
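The "proportional to their shares" claim described in this section can be illustrated with a small sketch. This is not protocol code — the curator names and amounts are made up, and the GNS contract's actual accounting lives on-chain — but the arithmetic is the same:

```python
# Illustrative only: when a Subgraph is transferred or deprecated, all
# curation signal is burned and the resulting GRT is held by the GNS
# contract; each Curator's claim is proportional to the shares they held.
def curator_claims(total_grt: float, shares: dict) -> dict:
    total_shares = sum(shares.values())
    return {curator: total_grt * s / total_shares for curator, s in shares.items()}

claims = curator_claims(1000.0, {"alice": 30.0, "bob": 70.0})
# alice can claim 300.0 GRT, bob can claim 700.0 GRT
```

Because the split depends only on the share ratio, a Curator receives the same proportion whether they withdraw immediately or much later, which is why the text notes there is no rush.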
![Transferir sinall](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Se este for o caso, você deverá se conectar com uma carteira L2 que tenha um p ## Como retirar a sua curadoria na L1 -Se preferir não enviar o seu GRT à L2, ou preferir fazer um bridge do GRT de forma manual, você pode retirar o seu GRT curado na L1. No banner da página do subgraph, escolha "Withdraw Signal" (Retirar Sinal) e confirme a transação; o GRT será enviado ao seu endereço de Curador. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/pt/archived/sunrise.mdx b/website/src/pages/pt/archived/sunrise.mdx index f7e7a0faf5f5..280639c4a9e5 100644 --- a/website/src/pages/pt/archived/sunrise.mdx +++ b/website/src/pages/pt/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## O Que Foi o Nascer do Sol dos Dados Descentralizados? -O Nascer do Sol dos Dados Descentralizados foi uma iniciativa liderada pela Edge & Node, com a meta de garantir que os programadores de subgraphs fizessem uma atualização suave para a rede descentralizada do The Graph. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -Este plano teve base em desenvolvimentos anteriores do ecossistema do The Graph, e incluiu um Indexador de atualização para servir queries em subgraphs recém-editados. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### O que aconteceu com o serviço hospedado? -Os endpoints de query do serviço hospedado não estão mais disponíveis, e programadores não podem mais editar subgraphs novos no serviço hospedado. 
+The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -Durante o processo de atualização, donos de subgraphs no serviço hospedado puderam atualizar os seus subgraphs até a Graph Network. Além disto, programadores podiam resgatar subgraphs atualizados automaticamente. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### O Subgraph Studio foi atingido por esta atualização? Não, o Subgraph Studio não foi impactado pelo Nascer do Sol. Os subgraphs estavam disponíveis imediatamente para queries, movidos pelo Indexador de atualização, que usa a mesma infraestrutura do serviço hospedado. -### Por que subgraphs eram publicados ao Arbitrum, eles começaram a indexar uma rede diferente? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## Sobre o Indexador de Atualização > O Indexador de Atualização está atualmente ativo. 
-O Indexador de atualização foi construído para melhorar a experiência de atualizar subgraphs do serviço hospedado à Graph Network e apoiar novas versões de subgraphs existentes que ainda não foram indexados. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### O que o Indexador de atualização faz? -- Ele inicializa chains que ainda não tenham recompensas de indexação na Graph Network, e garante que um Indexador esteja disponível para servir queries o mais rápido possível após a publicação de um subgraph. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexadores que operam um Indexador de atualização o fazem como um serviço público, para apoiar novos subgraphs e chains adicionais que não tenham recompensas de indexação antes da aprovação do Graph Council. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Porque a Edge & Node executa o Indexador de atualização? -A Edge & Node operou historicamente o serviço hospedado, e como resultado, já sincronizou os dados de subgraphs do serviço hospedado. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### O que o Indexador de atualização significa para Indexadores existentes? Chains que antes só eram apoiadas no serviço hospedado foram disponibilizadas para programadores na Graph Network, inicialmente, sem recompensas de indexação. 
-Porém, esta ação liberou taxas de query para qualquer Indexador interessado e aumentou o número de subgraphs publicados na Graph Network. Como resultado, Indexadores têm mais oportunidades para indexar e servir estes subgraphs em troca de taxas de query, antes mesmo da ativação de recompensas de indexação para uma chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -O Indexador de atualização também fornece à comunidade de Indexadores informações sobre a demanda em potencial para subgraphs e novas chains na Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### O que isto significa para Delegantes? -O Indexador de atualização oferece uma forte oportunidade para Delegantes. Como ele permitiu que mais subgraphs fossem atualizados do serviço hospedado até a Graph Network, os Delegantes podem se beneficiar do aumento na atividade da rede. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### O Indexador de atualização concorreu com Indexadores existentes para recompensas? -Não, o Indexador de atualização só aloca a quantidade mínima por subgraph e não coleta recompensas de indexação. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -Ele opera numa base de "necessidade" e serve como uma reserva até que uma cota de qualidade de serviço seja alcançada por, no mínimo, três outros Indexadores na rede para chains e subgraphs respetivos. 
+It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### Como isto afeta os programadores de subgraph? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### Como o Indexador de atualizações beneficia consumidores de dados? @@ -71,10 +71,10 @@ O Indexador de atualização ativa, na rede, chains que antes só tinham apoio n O Indexador de atualização precifica queries no preço do mercado, para não influenciar o mercado de taxas de queries. -### Quando o Indexador de atualização parará de apoiar um subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -O Indexador de atualização apoia um subgraph até que, no mínimo, 3 outros indexadores sirvam queries feitas nele com êxito e consistência. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Além disto, o Indexador de atualização para de apoiar um subgraph se ele não tiver sido consultado nos últimos 30 dias. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. 
-Outros Indexadores são incentivados a apoiar subgraphs com o volume de query atual. O volume de query ao Indexador de atualização deve se aproximar de zero, já que ele tem um tamanho de alocação pequeno e outros Indexadores devem ser escolhidos por queries antes disso. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/src/pages/pt/contracts.json b/website/src/pages/pt/contracts.json index 134799f3dd0f..b660b0df679c 100644 --- a/website/src/pages/pt/contracts.json +++ b/website/src/pages/pt/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "Contrato", "address": "Address" } diff --git a/website/src/pages/pt/global.json b/website/src/pages/pt/global.json index dfa39b21d79b..4521a1053837 100644 --- a/website/src/pages/pt/global.json +++ b/website/src/pages/pt/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "Navegação principal", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "Exibir navegação", + "hide": "Ocultar navegação", "subgraphs": "Subgraphs", "substreams": "Substreams", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", + "sps": "Subgraphs movidos por Substreams", + "tokenApi": "Token API", + "indexing": "Indexação", "resources": "Recursos", - "archived": "Archived" + "archived": "Arquivados" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "Última atualização", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "Tempo de leitura", + "minutes": "minutos" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "Página anterior", + "next": "Próxima página", + "edit": "Editar no 
GitHub", + "onThisPage": "Nesta página", + "tableOfContents": "Índice", + "linkToThisSection": "Link para esta secção" }, "content": { - "note": "Note", - "video": "Video" + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, + "video": "Vídeo" + }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Parâmetros de Query", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Descrição", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Descrição", + "liveResponse": "Live Response", + "example": "Exemplo" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "Ops! 
Esta página foi pro espaço...", + "subtitle": "Confira se o endereço está certo, ou clique o atalho abaixo para explorar o nosso sítio.", + "back": "Página Inicial" } } diff --git a/website/src/pages/pt/index.json b/website/src/pages/pt/index.json index 15df67073c16..0fe9ac551a34 100644 --- a/website/src/pages/pt/index.json +++ b/website/src/pages/pt/index.json @@ -1,52 +1,52 @@ { "title": "Início", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", + "title": "Documentação do The Graph", + "description": "Comece o seu projeto web3 com as ferramentas para extrair, transformar e carregar os dados da blockchain.", + "cta1": "Como o The Graph funciona", "cta2": "Construa o seu primeiro subgraph" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "Escolha uma solução adequada às suas necessidades — interaja com os dados da blockchain da sua maneira.", "subgraphs": { "title": "Subgraphs", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "Extraia, processe, e solicite queries de dados da blockchain com APIs abertas.", + "cta": "Programe um subgraph" }, "substreams": { "title": "Substreams", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "Solicite e consuma dados de blockchain com execução paralela.", + "cta": "Programe com Substreams" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "Subgraphs movidos por Substreams", + "description": "Boost your subgraph's efficiency and 
scalability by using Substreams.", + "cta": "Monte um subgraph movido pelo Substreams" }, "graphNode": { "title": "Graph Node", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "description": "Indexe dados de blockchain e sirva via queries da GraphQL.", + "cta": "Monte um Graph Node local" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "Extraia dados de blockchain em arquivos simples para melhorar tempos de sincronização e capacidades de streaming de dados.", + "cta": "Comece com o Firehose" } }, "supportedNetworks": { "title": "Redes Apoiadas", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "Tipo", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Documentação", "shortName": "Short Name", - "guides": "Guides", + "guides": "Guias", "search": "Search networks", "showTestnets": "Show Testnets", "loading": "Loading...", @@ -54,9 +54,9 @@ "infoText": "Boost your developer experience by enabling The Graph's indexing network.", "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph tem apoio a {0}. Para adicionar uma nova rede, {1}", + "networks": "redes", + "completeThisForm": "complete este formulário" }, "emptySearch": { "title": "No networks found", @@ -65,7 +65,7 @@ "showTestnets": "Show testnets" }, "tableHeaders": { - "name": "Name", + "name": "Nome", "id": "ID", "subgraphs": "Subgraphs", "substreams": "Substreams", @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." 
}, "billing": { - "title": "Billing", + "title": "Cobranças", "description": "Optimize costs and manage billing efficiently." } }, @@ -123,53 +123,53 @@ "title": "Guias", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "Encontre Dados no Graph Explorer", + "description": "Aproveite centenas de subgraphs públicos para obter dados existentes de blockchain." }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Edite um Subgraph", + "description": "Adicione o seu subgraph à rede descentralizada." }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "Edite Substreams", + "description": "Implante o seu pacote do Substreams ao Registo do Substreams." }, "queryingBestPractices": { "title": "Etiqueta de Query", - "description": "Optimize your subgraph queries for faster, better results." + "description": "Otimize os seus queries de subgraph para obter resultados melhores e mais rápidos." }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "Séries de Tempo e Agregações Otimizadas", + "description": "Simplifique o seu subgraph para aumentar a sua eficiência." }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "Gestão de chaves de API", + "description": "Crie, administre, e proteja chaves de API para os seus subgraphs com facilidade." }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." 
+ "title": "Faça uma transferência para o The Graph", + "description": "Migre o seu subgraph suavemente de qualquer plataforma para o The Graph." } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "Tutoriais de Vídeo", + "watchOnYouTube": "Assista no YouTube", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "The Graph Explicado em 1 Minuto", + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "O Que É Delegar?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Como Indexar na Solana com um Subgraph Movido por Substreams", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." 
} }, "time": { - "reading": "Reading time", - "duration": "Duration", + "reading": "Tempo de leitura", + "duration": "Duração", "minutes": "min" } } diff --git a/website/src/pages/pt/indexing/_meta-titles.json b/website/src/pages/pt/indexing/_meta-titles.json index 42f4de188fd4..cd4243ace5e6 100644 --- a/website/src/pages/pt/indexing/_meta-titles.json +++ b/website/src/pages/pt/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Ferramentas do Indexador" } diff --git a/website/src/pages/pt/indexing/new-chain-integration.mdx b/website/src/pages/pt/indexing/new-chain-integration.mdx index 388561fac3d7..bcaf712fb2cb 100644 --- a/website/src/pages/pt/indexing/new-chain-integration.mdx +++ b/website/src/pages/pt/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: Integração de Chains Novas --- -Chains podem trazer apoio a subgraphs para os seus ecossistemas ao iniciar uma nova integração de `graph-node`. Subgraphs são ferramentas poderosas de indexação que abrem infinitas possibilidades a programadores. O Graph Node já indexa dados das chains listadas aqui. Caso tenha interesse numa nova integração, há 2 estratégias para ela: +Chains podem trazer apoio a subgraphs para os seus ecossistemas, ao iniciar uma nova integração de `graph-node`. Subgraphs são ferramentas poderosas de indexação que abrem infinitas possibilidades a programadores. O Graph Node já indexa dados das chains listadas aqui. Caso tenha interesse numa nova integração, há 2 estratégias para ela: 1. **EVM JSON-RPC** 2. **Firehose**: Todas as soluções de integração do Firehose incluem Substreams, um motor de transmissão de grande escala com base no Firehose com apoio nativo ao `graph-node`, o que permite transformações paralelizadas. @@ -55,7 +55,7 @@ Enquanto ambos o JSON-RPC e o Firehose são próprios para subgraphs, um Firehos ## Como Configurar um Graph Node -Configurar um Graph Node é tão fácil quanto preparar o seu ambiente local. 
Quando o seu ambiente local estiver pronto, será possível testar a integração com a edição local de um subgraph. +Configurar um Graph Node é tão fácil quanto preparar o seu ambiente local. Quando o seu ambiente local estiver pronto, será possível testar a integração com a implantação local de um subgraph. 1. [Clone o Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configurar um Graph Node é tão fácil quanto preparar o seu ambiente local. Qu ## Subgraphs movidos por Substreams -Para integrações do Substreams ou Firehose movidas ao StreamingFast, são inclusos: apoio básico a módulos do Substreams (por exemplo: transações, logs, e eventos de contrato inteligente decodificados); e ferramentas de geração de código do Substreams. Estas ferramentas permitem a habilidade de ativar [subgraphs movidos pelo Substreams](/substreams/sps/introduction/). Siga o [Passo-a-Passo](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) e execute `substreams codegen subgraph` para sentir um gostinho das ferramentas. +Para integrações do Substreams ou Firehose movidas pelo StreamingFast, são inclusos: apoio básico a módulos do Substreams (por exemplo: transações, logs, e eventos de contrato inteligente decodificados); e ferramentas de geração de código do Substreams. Estas ferramentas permitem a habilidade de ativar [subgraphs movidos pelo Substreams](/substreams/sps/introduction/). Siga o [Passo-a-Passo](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) e execute `substreams codegen subgraph` para sentir um gostinho das ferramentas. 
diff --git a/website/src/pages/pt/indexing/overview.mdx b/website/src/pages/pt/indexing/overview.mdx index adf55ea75a43..02343b809f03 100644 --- a/website/src/pages/pt/indexing/overview.mdx +++ b/website/src/pages/pt/indexing/overview.mdx @@ -9,39 +9,39 @@ O GRT em staking no protocolo é sujeito a um período de degelo, e pode passar Indexadores selecionam subgraphs para indexar com base no sinal de curadoria do subgraph, onde Curadores depositam GRT em staking para indicar quais subgraphs são de qualidade alta e devem ser priorizados. Consumidores (por ex., aplicativos) também podem configurar parâmetros para os quais Indexadores processam queries para seus subgraphs, além de configurar preferências para o preço das taxas de query. -## FAQ +## Perguntas Frequentes -### What is the minimum stake required to be an Indexer on the network? +### Qual o stake mínimo exigido para ser um Indexador na rede? -The minimum stake for an Indexer is currently set to 100K GRT. +O stake mínimo atual para um Indexador é de 100 mil GRT. -### What are the revenue streams for an Indexer? +### Quais são as fontes de renda para um Indexador? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**Rebates de taxas de query** — Pagamentos por serviço de queries na rede. Estes pagamentos são mediados por canais de estado entre um Indexador e um gateway. Cada pedido de query de um gateway contém um pagamento e a resposta correspondente: uma prova de validade de resultado de query. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Recompensas de indexação** — são distribuídas a Indexadores que indexam lançamentos de subgraph para a rede. 
São geradas através de uma inflação de 3% para todo o protocolo. -### How are indexing rewards distributed? +### Como são distribuídas as recompensas de indexação? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +As recompensas de indexação vêm da inflação do protocolo, que é configurada em 3% da emissão anual. Elas são distribuídas em subgraphs, com base na proporção de todos os sinais de curadoria em cada um, e depois distribuídos proporcionalmente a Indexadores baseado no stake que alocaram naquele subgraph. **Para ter direito a recompensas, uma alocação deve ser fechada com uma prova de indexação válida (POI) que atende aos padrões determinados pela carta de arbitragem.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +A comunidade criou várias ferramentas para calcular recompensas, organizadas na [coleção de guias da comunidade](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). Há também uma lista atualizada de ferramentas nos canais #Delegators e #Indexers no [servidor do Discord](https://discord.gg/graphprotocol). 
No próximo link, temos um [otimizador de alocações recomendadas](https://github.com/graphprotocol/allocation-optimizer) integrado com o stack de software de indexador. -### What is a proof of indexing (POI)? +### O que é uma prova de indexação (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs (Provas de indexação) são usadas na rede para verificar que um Indexador está a indexar os subgraphs nos quais eles alocaram. Uma POI para o primeiro bloco da epoch atual deve ser enviada ao fechar uma alocação, para que aquela alocação seja elegível a recompensas de indexação. Uma POI para um bloco serve como resumo para todas as transações de armazenamento de entidade para uma implantação específica de subgraph, até, e incluindo, aquele bloco. -### When are indexing rewards distributed? +### Quando são distribuídas as recompensas de indexação? -Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +As alocações acumulam recompensas continuamente, enquanto permanecerem ativas e alocadas dentro de 28 epochs. As recompensas são coletadas pelos Indexadores, e distribuídas sempre que suas alocações são fechadas. 
Isto acontece ou manualmente, quando o Indexador quer fechá-las à força; ou após 28 epochs, quando um Delegante pode fechar a alocação para o Indexador, mas isto não rende recompensas. A vida máxima de uma alocação é de 28 epochs (no momento, um epoch dura cerca de 24 horas). -### Can pending indexing rewards be monitored? +### É possível monitorar recompensas de indexação pendentes? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +O contrato RewardsManager tem uma função de apenas-leitura — [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) — que pode ser usada para verificar as recompensas pendentes para uma alocação específica. -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +Muitos dos painéis feitos pela comunidade incluem valores pendentes de recompensas, que podem facilmente ser conferidos de forma manual ao seguir os seguintes passos: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. 
Faça um query do [subgraph da mainnet](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) para buscar as IDs de todas as alocações ativas: ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Use o Etherscan para chamar o `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +- Navegue, na [interface do Etherscan, para o contrato Rewards](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Para chamar o `getRewards()`: + - Abra o dropdown **9. getRewards**. + - Preencha o campo da **allocationID**. + - Clique no botão **Query**. -### What are disputes and where can I view them? +### O que são disputas e onde posso visualizá-las? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +As consultas em query e alocações de Indexadores podem ser disputadas no The Graph durante o período de disputa. O período de disputa varia a depender do tipo de disputa. Consultas/atestações têm uma janela de disputa de 7 epochs, enquanto alocações duram até 56 epochs. 
Após o vencimento destes períodos, não se pode abrir disputas contra alocações ou consultas. Quando uma disputa é aberta, um depósito mínimo de 10.000 GRT é exigido pelos Pescadores, que será trancado até ser finalizada a disputa e servida uma resolução. Pescadores são quaisquer participantes de rede que abrem disputas. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +Há **três** possíveis resultados para disputas, assim como para o depósito dos Pescadores. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- Se a disputa for rejeitada, o GRT depositado pelo Pescador será queimado, e o Indexador disputado não será penalizado. +- Se a disputa terminar em empate, o depósito do Pescador será retornado, e o Indexador disputado não será penalizado. +- Se a disputa for aceite, o GRT depositado pelo Pescador será retornado, o Indexador disputado será penalizado, e o(s) Pescador(es) ganhará(ão) 50% do GRT cortado. -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +As disputas podem ser visualizadas na interface na página de perfil de um Indexador, sob a aba `Disputes` (Disputas). -### What are query fee rebates and when are they distributed? +### O que são rebates de taxas de consulta e quando eles são distribuídos? -Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). 
The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. +As taxas de query são coletadas pelo gateway e distribuídas aos Indexadores de acordo com a função de rebate exponencial (veja o GIP [aqui](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). A tal função é proposta como uma maneira de garantir que indexadores alcancem o melhor resultado ao servir queries fielmente. Ela funciona com o incentivo de Indexadores para alocarem uma grande quantia de stake (que pode ser cortada por errar ao servir um query) relativa à quantidade de taxas de query que possam coletar. -Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +Quando uma alocação é fechada, os rebates podem ser reivindicados pelo Indexador. Após serem resgatados, os rebates de taxa de consulta são distribuídos ao Indexador e aos seus Delegantes com base na porção de taxas de query e na função de rebate exponencial. -### What is query fee cut and indexing reward cut? +### O que são porção de taxa de query e porção de recompensa de indexação? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. 
+Os valores `queryFeeCut` e `indexingRewardCut` são parâmetros de delegação que o Indexador pode configurar junto com o `cooldownBlocks` para controlar a distribuição de GRT entre o Indexador e os seus Delegantes. Veja os últimos passos no [Staking no Protocolo](/indexing/overview/#stake-in-the-protocol) para instruções sobre como configurar os parâmetros de delegação. -- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** — a % de rebates de taxas de query a ser distribuída ao Indexador. Se isto for configurado em 95%, o Indexador receberá 95% das taxas de query ganhas quando uma alocação for fechada, com os outros 5% destinados aos Delegantes. -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. +- **indexingRewardCut** — a % de recompensas de indexação a ser distribuída ao Indexador. Se isto for configurado em 95%, o Indexador receberá 95% do pool de recompensas de indexação ao fechamento de uma alocação e os Delegantes dividirão os outros 5%. -### How do Indexers know which subgraphs to index? +### Como os Indexadores podem saber quais subgraphs indexar? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+Os Indexadores podem se diferenciar ao aplicar técnicas avançadas para tomar decisões de indexação de subgraphs, mas para dar uma ideia geral, vamos discutir várias métricas importantes usadas para avaliar subgraphs na rede:
-- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up.
+- **Sinal de curadoria** — A proporção do sinal de curadoria na rede aplicado a um subgraph particular mede bem o interesse nesse subgraph, especialmente durante a fase de inicialização, quando o volume de queries começa a subir.
-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Taxas de query coletadas** — Os dados históricos para o volume de taxas de query coletadas para um subgraph específico indicam bem a demanda futura.
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **Quantidade em staking** — Ao monitorar o comportamento de outros Indexadores ou inspecionar proporções do stake total alocado a subgraphs específicos, um Indexador pode monitorar o lado da oferta para queries de subgraph, para assim identificar subgraphs nos quais a rede mostra confiança ou subgraphs que podem necessitar de mais oferta.
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs sem recompensas de indexação** - Alguns subgraphs não geram recompensas de indexação, principalmente porque eles usam recursos não apoiados, como o IPFS, ou porque consultam outra rede fora da mainnet. Se um subgraph não estiver a gerar recompensas de indexação, o Indexador será notificado a respeito. -### What are the hardware requirements? +### Quais são os requisitos de hardware? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Pequeno** — O suficiente para começar a indexar vários subgraphs. Provavelmente precisará de expansões. +- **Normal** — Setup normal. Este é o usado nos exemplos de manifests de implantação de k8s/terraform. +- **Médio** — Indexador de Produção. Apoia 100 subgraphs e de 200 a 500 solicitações por segundo. +- **Grande** — Preparado para indexar todos os subgraphs usados atualmente e servir solicitações para o tráfego relacionado. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| Configuração | Postgres
(CPUs) | Postgres
(memória em GBs) | Postgres
(disco em TBs) | VMs
(CPUs) | VMs
(memória em GBs) |
| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Pequeno | 4 | 8 | 1 | 4 | 16 |
+| Normal | 8 | 30 | 1 | 12 | 48 |
+| Médio | 16 | 64 | 2 | 32 | 64 |
+| Grande | 72 | 468 | 3.5 | 48 | 184 |
-### What are some basic security precautions an Indexer should take?
+### Há alguma precaução básica de segurança que um Indexador deve tomar?
-- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions.
+- **Carteira de operador** — Configurar uma carteira de operador é importante, pois permite a um Indexador manter a separação entre as suas chaves que controlam o stake e aquelas no controle das operações diárias. Mais informações em [Staking no Protocolo](/indexing/overview/#stake-in-the-protocol).
-- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.
+- **Firewall** — Somente o serviço de Indexador precisa ser exposto publicamente, e deve-se prestar atenção especial ao trancamento das portas de administração e do acesso ao banco de dados: o endpoint JSON-RPC do Graph Node (porta padrão: 8030), o endpoint da API de gerenciamento do Indexador (porta padrão: 18000), e o endpoint do banco de dados Postgres (porta padrão: 5432) não devem ser expostos.
-## Infrastructure
+## Infraestrutura
-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+O núcleo da infraestrutura de um Indexador é o Graph Node, que monitora as redes indexadas, extrai e carrega dados conforme a definição de um subgraph, e os serve como uma [API GraphQL](/about/#how-the-graph-works). O Graph Node deve estar conectado a endpoints que expõem dados de cada rede indexada; a um node IPFS para obter dados; a um banco de dados PostgreSQL para o seu armazenamento; e a componentes de Indexador que facilitem as suas interações com a rede.
-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **Banco de dados PostgreSQL** — O armazenamento principal para o Graph Node, onde dados de subgraph são armazenados. O serviço e o agente indexador também usam o banco de dados para armazenar dados de canal de estado, modelos de custo, regras de indexação, e ações de alocação.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Endpoint de dados** — Para redes compatíveis com EVMs, o Graph Node deve estar conectado a um endpoint que expõe uma API JSON-RPC compatível com EVMs. Isto pode ser um único cliente, ou um setup mais complexo que faz balanceamento de carga entre vários clientes. É importante saber que certos subgraphs exigirão capacidades particulares de clientes, como o modo de arquivo e/ou a API de rastreamento do Parity.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **Node IPFS (versão abaixo de 5)** — Os metadados de lançamento de subgraph são armazenados na rede IPFS. O Graph Node acessa primariamente o node IPFS durante o lançamento do subgraph, para retirar o manifest e todos os arquivos ligados. Indexadores de rede não precisam hospedar seu próprio node IPFS, pois já há um hospedado para a rede em https://ipfs.network.thegraph.com.
-- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
+- **Serviço de Indexador** — Cuida de todas as comunicações externas com a rede requeridas. Compartilha modelos de custo e estados de indexação, passa pedidos de query de gateways para um Graph Node, e administra os pagamentos de query através de canais de estado com o gateway.
-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Agente Indexador** — Facilita as interações de Indexadores on-chain, que incluem cadastros na rede, gestão de lançamentos de Subgraph ao(s) seu(s) Graph Node(s), e gestão de alocações. -- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. +- **Servidor de métricas Prometheus** — O Graph Node e os componentes de Indexador registam as suas métricas ao servidor de métricas. -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +Observe: Para apoiar o escalamento ágil, recomendamos que assuntos de query e de indexação sejam separados entre conjuntos diferentes de nodes: nodes de query e nodes de indexação. -### Ports overview +### Visão geral das portas -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below. +> **Importante:** Cuidado ao expor portas publicamente — as **portas de administração** devem ser trancadas a sete chaves. Isto inclui o endpoint JSON-RPC do Graph Node e os pontos finais de gestão de Indexador detalhados abaixo. #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | Servidor HTTP GraphQL
(para queries de subgraph) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | WS GraphQL
(para inscrições a subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(para gerir implantações) | / | \--admin-port | - | +| 8030 | API de estado de indexação do subgraph | /graphql | \--index-node-port | - | +| 8040 | Métricas Prometheus | /metrics | \--metrics-port | - | -#### Indexer Service +#### Serviço Indexador -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| 7600 | Servidor HTTP GraphQL
(para queries pagos de subgraph) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Métricas Prometheus | /metrics | \--metrics-port | - | -#### Indexer Agent +#### Agente Indexador -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | +| ----- | -------------------------- | ----- | -------------------------- | --------------------------------------- | +| 8000 | API de gestão de Indexador | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Como preparar uma infraestrutura de servidor com o Terraform no Google Cloud -> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. +> Nota: Como alternativa, os Indexadores podem usar o AWS, Microsoft Azure, ou Alibaba. -#### Install prerequisites +#### Pré-requisitos para a instalação - Google Cloud SDK -- Kubectl command line tool +- Ferramenta de linha de comando Kubectl - Terraform -#### Create a Google Cloud Project +#### Como criar um projeto no Google Cloud -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- Clone ou navegue ao [repositório de Indexador](https://github.com/graphprotocol/indexer). -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- Navegue ao diretório `./terraform`, é aqui onde todos os comandos devem ser executados. ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Autentique com o Google Cloud e crie um projeto novo. 
```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Use a página de cobrança do Google Cloud Console para ativar cobranças para o novo projeto. -- Create a Google Cloud configuration. +- Crie uma configuração no Google Cloud. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Ative as APIs necessárias do Google Cloud. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Crie uma conta de serviço. ```sh svc_name= @@ -225,7 +225,7 @@ gcloud iam service-accounts create $svc_name \ --description="Service account for Terraform" \ --display-name="$svc_name" gcloud iam service-accounts list -# Get the email of the service account from the list +# Pegue o email da conta de serviço da lista svc=$(gcloud iam service-accounts list --format='get(email)' --filter="displayName=$svc_name") gcloud iam service-accounts keys create .gcloud-credentials.json \ @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Ative o peering entre o banco de dados e o cluster Kubernetes, que será criado no próximo passo. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,35 +249,35 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- Crie o arquivo de configuração mínimo no terraform (atualize quando necessário). 
```sh indexer= cat > terraform.tfvars < \ -f Dockerfile.indexer-service \ -t indexer-service:latest \ -# Indexer agent +# Agente indexador docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-agent \ -t indexer-agent:latest \ ``` -- Run the components +- Execute os componentes ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). +**NOTA**: Após iniciar os containers, o serviço Indexador deve ser acessível no [http://localhost:7600](http://localhost:7600) e o agente indexador deve expor a API de gestão de Indexador no [http://localhost:18000/](http://localhost:18000/). -#### Using K8s and Terraform +#### Usando K8s e Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section +Veja a seção sobre [preparar infraestruturas de servidor com o Terraform no Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) -#### Usage +#### Uso -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **NOTA**: Todas as variáveis de configuração de runtime (tempo de execução) podem ser aplicadas como parâmetros ao comando na inicialização, ou usando variáveis de ambiente do formato `COMPONENT_NAME_VARIABLE_NAME`(por ex. `INDEXER_AGENT_ETHEREUM`). 
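Para ilustrar a convenção `COMPONENT_NAME_VARIABLE_NAME` descrita na nota acima, segue um esboço meramente ilustrativo, usando o parâmetro `--port` e a variável `INDEXER_SERVICE_PORT` documentados na tabela de portas do Serviço Indexador:

```sh
# Equivalentes: parâmetro de CLI vs variável de ambiente
graph-indexer-service start --port 7600
INDEXER_SERVICE_PORT=7600 graph-indexer-service start
```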
-#### Indexer agent +#### Agente Indexador ```sh graph-indexer-agent start \ @@ -488,7 +488,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Serviço Indexador ```sh SERVER_HOST=localhost \ @@ -516,56 +516,56 @@ graph-indexer-service start \ #### Indexer CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +O Indexer CLI é um plugin para o [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli), acessível no terminal em `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### Gestão de Indexador com o Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +O programa recomendado para interagir com a **API de Gestão de Indexador** é o **Indexer CLI**, uma extensão ao **Graph CLI**. O agente precisa de comandos de um Indexador para poder interagir de forma autônoma com a rede em nome do Indexador. 
Os mecanismos que definem o comportamento de um agente indexador são **gestão de alocações** e **regras de indexação**. No modo automático, um Indexador pode usar **regras de indexação** para aplicar estratégias específicas para a escolha de subgraphs para indexar e servir consultas. Regras são administradas através de uma API GraphQL servida pelo agente, e conhecida como a API de Gestão de Indexador. No modo manual, um Indexador pode criar ações de alocação usando a **fila de ações**, além de aprová-las explicitamente antes de serem executadas. Sob o modo de supervisão, as **regras de indexação** são usadas para popular a **fila de ações** e também exigem aprovação explícita para executar.
-#### Usage
+#### Uso
-The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here.
+O **Indexer CLI** se conecta ao agente indexador, normalmente através do redirecionamento de portas, para que a CLI não precise ser executada no mesmo servidor ou cluster. Para facilitar o seu começo, e para fins de contexto, a CLI será descrita brevemente aqui.
-- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`)
+- `graph indexer connect ` — Conecta à API de gestão de Indexador. Tipicamente, a conexão ao servidor é aberta através do redirecionamento de portas, para que a CLI possa ser operada remotamente com facilidade. (Exemplo: `kubectl port-forward pod/ 8000:8000`)
-- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults.
An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent.
+- `graph indexer rules get [options] [ ...]` — Mostra uma ou mais regras de indexação usando `all` como o `` para mostrar todas as regras, ou `global` para exibir os padrões globais. Um argumento adicional `--merged` pode ser usado para especificar que as regras específicas à implantação sejam fundidas com a regra global. É assim que elas são aplicadas no agente indexador.
-- `graph indexer rules set [options] ...` - Set one or more indexing rules.
+- `graph indexer rules set [options] ...` — Configura uma ou mais regras de indexação.
-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` — Começa a indexar uma implantação de subgraph, se disponível, e configura a sua `decisionBasis` para `always`, para que o agente indexador sempre escolha indexá-la. Caso a regra global seja configurada para `always`, todos os subgraphs disponíveis na rede serão indexados.
-- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.
+- `graph indexer rules stop [options] ` — Para de indexar uma implantação e configura a sua `decisionBasis` em `never`, com o fim de pular esta implantação ao decidir quais implantações indexar.
-- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment.
+- `graph indexer rules maybe [options] ` — Configura a `decisionBasis` de uma implantação para `rules`, para que o agente indexador use regras de indexação para decidir se esta implantação será ou não indexada.
-- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status.
+- `graph indexer actions get [options] ` — Busca uma ou mais ações usando `all`, ou deixando o `action-id` vazio para mostrar todas as ações. Um argumento adicional, `--status`, pode ser usado para imprimir todas as ações de um certo estado.
-- `graph indexer action queue allocate ` - Queue allocation action
+- `graph indexer action queue allocate ` — Enfileira uma ação de alocação
-- `graph indexer action queue reallocate ` - Queue reallocate action
+- `graph indexer action queue reallocate ` — Enfileira uma ação de realocação
-- `graph indexer action queue unallocate ` - Queue unallocate action
+- `graph indexer action queue unallocate ` — Enfileira uma ação de retirada de alocação
-- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator
+- `graph indexer actions cancel [ ...]` — Cancela todas as ações na fila se a id não for especificada; caso contrário, cancela a lista de ids fornecida, separada por espaços
-- `graph indexer actions approve [ ...]` - Approve multiple actions for execution
+- `graph indexer actions approve [ ...]` — Aprova múltiplas ações para execução
-- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately
+- `graph indexer actions execute approve` — Força a execução imediata das ações aprovadas
-All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument.
+Todos os comandos que mostram regras na saída podem escolher entre os formatos de saída suportados (`table`, `yaml`, e `json`) com o argumento `-output`.
-#### Indexing rules
+#### Regras de indexação
-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+As regras de indexação podem ser aplicadas como padrões globais ou para implantações específicas de subgraph com o uso das suas IDs. Os campos `deployment` e `decisionBasis` são obrigatórios, enquanto todos os outros campos são opcionais. Quando uma regra de indexação tem `rules` como a `decisionBasis`, então o agente indexador comparará valores de limiar não-nulos naquela regra com valores retirados da rede para a implantação correspondente. Se a implantação do subgraph tiver valores acima (ou abaixo) de qualquer um dos limiares, ela será escolhida para a indexação.
-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+Por exemplo: se a regra global tem um `minStake` de **5** (GRT), qualquer implantação de subgraph que tiver mais de 5 (GRT) de stake alocado nela será indexada. Regras de limiar incluem `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, e `minAverageQueryFees`.
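Como esboço (valores meramente ilustrativos), uma regra global com limiar de stake pode ser definida e conferida pela CLI:

```sh
# Define um limiar global de stake mínimo (valor ilustrativo)
graph indexer rules set global minStake 5 decisionBasis rules
# Confere a regra global resultante
graph indexer rules get global
```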
-Data model:
+Modelo de dados:
```graphql
type IndexingRule {
@@ -599,7 +599,7 @@ IndexingDecisionBasis {
}
```
-Example usage of indexing rule:
+Exemplo de uso de regra de indexação:
```
graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
@@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
```
-#### Actions queue CLI
+#### CLI de fila de ações
-The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue.
+O indexer-cli fornece um módulo `actions` para trabalhar manualmente com a fila de ações. Ele interage com a fila de ações através da **API GraphQL** hospedada pelo servidor de gestão de indexador.
-The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like:
+O programa de execução de ações só retirará itens da fila para execução se esses tiverem o `ActionStatus = approved`. No fluxo recomendado, as ações são adicionadas à fila com `ActionStatus = queued`; depois, deverão ser aprovadas para serem executadas on-chain. O fluxo geral ficará assim:
-- Action added to the queue by the 3rd party optimizer tool or indexer-cli user
-- Indexer can use the `indexer-cli` to view all queued actions
-- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input.
-- The execution worker regularly polls the queue for approved actions.
It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`.
-- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode.
-- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken.
+- Ação adicionada à fila por ferramenta de otimização de terceiros ou utilizador do indexer-cli
+- O Indexador pode usar o `indexer-cli` para visualizar todas as ações enfileiradas
+- O Indexador (ou outro software) pode aprovar ou cancelar ações na fila usando o `indexer-cli`. Os comandos de aprovação e cancelamento aceitam uma lista de ids de ação como entrada.
+- O programa de execução consulta a fila regularmente para verificar as ações aprovadas. Ele tomará as ações `approved` da fila, tentará executá-las, e atualizará os valores no banco de dados a depender do estado da execução, sendo `success` ou `failed`.
+- Se uma ação tiver êxito, o programa garantirá a presença de uma regra de indexação que diz ao agente como administrar a alocação dali em diante. Isto é conveniente ao executar ações manuais enquanto o agente está no modo `auto` ou `oversight`.
+- O indexador pode monitorar a fila de ações para ver um histórico de execuções de ação e, se necessário, aprovar novamente e atualizar itens de ação caso a sua execução falhe. A fila de ações fornece um histórico de todas as ações enfileiradas e tomadas.
-Data model: +Modelo de dados: ```graphql Type ActionInput { @@ -657,7 +657,7 @@ ActionType { } ``` -Example usage from source: +Exemplo de uso da fonte: ```bash graph indexer actions get all @@ -677,141 +677,142 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +Observe que os tipos de ação suportados para gestão de alocação têm requisitos diferentes de entrada: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` — aloca stakes a uma implantação de subgraph específica - - required action params: + - parâmetros de ação exigidos: - deploymentID - amount -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `Unallocate` — fecha uma alocação, que libera o stake para ser redistribuído em outro lugar - - required action params: + - parâmetros de ação exigidos: - allocationID - deploymentID - - optional action params: + - parâmetros de ação opcionais: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (força o uso do POI providenciado, mesmo se ele não corresponder ao providenciado pelo graph-node) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` — fecha a alocação atomicamente e abre uma alocação nova para a mesma implantação de subgraph - - required action params: + - parâmetros de ação exigidos: - allocationID - deploymentID - amount - - optional action params: + - parâmetros de ação opcionais: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (força o uso do POI providenciado, mesmo se ele não corresponder ao providenciado pelo graph-node) -#### Cost models +#### Modelos de custo -Cost models provide dynamic pricing for queries based on market and query attributes.
The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Modelos de custo servem preços dinâmicos para queries, com base em atributos de mercado e query. O Serviço de Indexador compartilha um modelo de custo com os gateways para cada subgraph para o qual pretende responder a queries. Os gateways, por sua vez, usam o modelo de custo para decidir seleções de Indexador por query e para negociar pagamentos com Indexadores escolhidos. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +A linguagem Agora providencia um formato flexível para a declaração de modelos de custo para queries. Um modelo de preço do Agora é uma sequência de declarações, executadas em ordem, para cada query de alto nível em um query GraphQL. Para cada query de alto nível, a primeira declaração que corresponder a ele determina o preço desse query. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +Uma declaração consiste de um predicado, que é usado para corresponder a buscas GraphQL; e uma expressão de custo que, quando avaliada, produz um custo em GRT decimal. Valores na posição de argumento nomeada em um query podem ser capturados no predicado e usados na expressão.
Valores globais também podem ser definidos e substituídos nos espaços reservados (placeholders) de uma expressão. -Example cost model: +Exemplo de modelo de custo: ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# Esta declaração captura o valor de `skip`, +# usa uma expressão booleana no predicado para corresponder a consultas específicas que usam 'skip' +# e uma expressão de custo para calcular o custo baseado no valor 'skip' e no global SYSTEM_LOAD query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression. -# It uses a Global substituted into the expression to calculate cost +# Este padrão corresponderá a qualquer expressão GraphQL. +# Ele usa um Global substituído na expressão para calcular o custo default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +Exemplo de custo de query usando o modelo acima: -| Query | Price | +| Query | Preço | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### Aplicação do modelo de custo -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +Os modelos de custo são aplicados através do Indexer CLI, que os repassa à API de Gestão do agente de Indexador para armazenamento no banco de dados. O Serviço de Indexador depois irá localizar e servir os modelos de custo para gateways, sempre que eles forem requisitados.
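For illustration, the costing table above can be reproduced with a small hand-written sketch of the two Agora statements. This assumes `SYSTEM_LOAD` is 1 (the value the table implies) and is not the real Agora evaluator; the function names are invented:

```python
SYSTEM_LOAD = 1.0  # assumed global value, implied by the costing table above

def cost_top_level(field, args):
    """Statements are tried in order; the first matching one sets the price."""
    if field == "pairs" and args.get("skip", 0) > 2000:  # when $skip > 2000
        return 0.0001 * args["skip"] * SYSTEM_LOAD
    return 0.1 * SYSTEM_LOAD  # the default statement matches anything else

def cost_query(top_level_fields):
    """A GraphQL query is priced per top-level query, then summed."""
    return sum(cost_top_level(field, args) for field, args in top_level_fields)
```

With this sketch, `cost_query([("pairs", {"skip": 5000})])` yields 0.5 GRT, a bare `tokens` query yields the default 0.1 GRT, and a query containing both sums to 0.6 GRT, matching the table.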
```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## Interações com a rede -### Stake in the protocol +### Stake no protocolo -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. +Os primeiros passos para participar na rede como Indexador consistem em aprovar o protocolo, fazer staking de fundos, e (opcionalmente) preparar um endereço de operador para interações ordinárias do protocolo. -> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> Nota: Para os propósitos destas instruções, o Remix será usado para interação com contratos, mas é possível escolher a sua própria ferramenta ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) e [MyCrypto](https://www.mycrypto.com/account) são algumas outras ferramentas conhecidas). -Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network. +Quando um Indexador faz stake de GRT no protocolo, será possível iniciar os seus [componentes](/indexing/overview/#indexer-components) e começar as suas interações com a rede. -#### Approve tokens +#### Aprovação de tokens -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Abra o [app Remix](https://remix.ethereum.org/) em um navegador -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. 
No `File Explorer`, crie um arquivo chamado **GraphToken.abi** com a [Token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Com `GraphToken.abi` selecionado e aberto no editor, abra a seção `Deploy and Run Transactions` (Implantar e Executar Transações) na interface do Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Na opção **Environment** (ambiente), selecione `Injected Web3`, e sob `Account` (conta), selecione o seu endereço de Indexador. -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. Configure o endereço de contrato de GraphToken — cole o endereço de contrato do GraphToken (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) próximo ao `At Address` e clique no botão `At address` para aplicar. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. Chame a função `approve(spender, amount)` para aprovar o contrato de Staking. Preencha `spender` com o endereço de contrato de Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) e `amount` com os tokens a serem postos em stake (em wei). -#### Stake tokens +#### Staking de tokens -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Abra o [app Remix](https://remix.ethereum.org/) em um navegador -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. No `File Explorer`, crie um arquivo chamado **Staking.abi** com a ABI de staking. -3.
With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Com o `Staking.abi` selecionado e aberto no editor, entre na seção `Deploy and Run Transactions` na interface do Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Na opção **Environment** (ambiente), selecione `Injected Web3`, e sob `Account` (conta), selecione o seu endereço de Indexador. -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. Configure o endereço de contrato de Staking — cole o endereço de contrato do Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) próximo ao `At Address` e clique no botão `At address` para aplicar. -6. Call `stake()` to stake GRT in the protocol. +6. Chame o `stake()` para fazer stake de GRT no protocolo. -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Opcional) Os Indexadores podem aprovar outro endereço para operar sua infraestrutura de Indexador, a fim de poder separar as chaves que controlam os fundos daquelas que realizam ações rotineiras, como alocar em subgraphs e servir queries (pagos). Para configurar o operador, chame o `setOperator()` com o endereço do operador. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`.
The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCut to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks. +8. (Opcional) Para controlar a distribuição de recompensas e atrair Delegantes estrategicamente, os Indexadores podem atualizar os seus parâmetros de delegação ao atualizar o seu indexingRewardCut (partes por milhão); queryFeeCut (partes por milhão); e cooldownBlocks (número de blocos). Para fazer isto, chame o `setDelegationParameters()`. O seguinte exemplo configura o queryFeeCut para distribuir 95% de rebates de query ao Indexador e 5% aos Delegantes; configura o indexingRewardCut para distribuir 60% de recompensas de indexação ao Indexador e 40% aos Delegantes; e configura o período de `cooldownBlocks` para 500 blocos. ``` setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### Configuração de parâmetros de delegação -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. +A função `setDelegationParameters()` no [contrato de staking](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) é essencial para Indexadores; esta permite configurar parâmetros que definem as suas interações com Delegantes, o que influencia a sua capacidade de delegação e divisão de recompensas. -### How to set delegation parameters +### Como configurar parâmetros de delegação -To set the delegation parameters using Graph Explorer interface, follow these steps: +Para configurar os parâmetros de delegação com a interface do Graph Explorer, siga os seguintes passos: -1.
Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. Connect the wallet you have as a signer. -4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. Navegue para o [Graph Explorer](https://thegraph.com/explorer/). +2. Conecte a sua carteira. Escolha a multisig (por ex., Gnosis Safe), e depois, a mainnet. Observe que será necessário repetir este processo para o Arbitrum One. +3. Conecte a carteira que possui como signatário. +4. Navegue até a seção 'Settings' (Configurações) e selecione 'Delegation Parameters' (Parâmetros de Delegação). Estes parâmetros devem ser configurados para alcançar uma parte efetiva dentro do intervalo desejado. Após preencher os campos com valores, a interface calculará automaticamente a parte efetiva. Ajuste estes valores como necessário para obter a percentagem de parte efetiva desejada. +5. Envie a transação à rede. -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> Nota: Esta transação deverá ser confirmada pelos signatários da carteira multisig. -### The life of an allocation +### A vida de uma alocação -After being created by an Indexer a healthy allocation goes through two states. +Após criada por um Indexador, uma alocação sadia passa por dois estados. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**.
A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Ativa** - Quando uma alocação é criada on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)), ela é considerada **ativa**. Uma porção do stake próprio e/ou delegado do Indexador é alocada a uma implantação de subgraph, que lhe permite resgatar recompensas de indexação e servir queries para aquela implantação de subgraph. O agente indexador cria alocações baseadas nas regras do Indexador. -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). +- **Fechada** - Um Indexador pode fechar uma alocação após a passagem de um epoch ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)), ou o seu agente indexador a fechará automaticamente após o **maxAllocationEpochs** (atualmente, 28 dias). Quando uma alocação é fechada com uma prova de indexação válida (POI), as suas recompensas de indexação são distribuídas ao Indexador e aos seus Delegantes ([aprenda mais](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +É ideal que os Indexadores utilizem a funcionalidade de sincronização off-chain para sincronizar implantações de subgraph à chainhead antes de criar a alocação on-chain. Esta ferramenta é mais útil para subgraphs que demorem mais de 28 epochs para sincronizar, ou que tenham chances de falhar de forma não-determinística. diff --git a/website/src/pages/pt/indexing/tap.mdx b/website/src/pages/pt/indexing/tap.mdx index 33f6583ea3c6..197d8757b967 100644 --- a/website/src/pages/pt/indexing/tap.mdx +++ b/website/src/pages/pt/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: Como migrar para o TAP +title: GraphTally Guide --- -Conheça o novo sistema de pagamentos do The Graph: **TAP — Timeline Aggregation Protocol** ("Protocolo de Agregação de Histórico"): um sistema de microtransações rápidas e eficientes, livre de confiança. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Visão geral -O [TAP](https://docs.rs/tap_core/latest/tap_core/index.html) é um programa modular que substituirá o sistema de pagamento Scalar atualmente em uso. Os recursos do TAP incluem: +GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features: - Processamento eficiente de micropagamentos. - Uma camada de consolidações para transações e custos na chain. - Controle total de recibos e pagamentos para Indexadores, garantindo pagamentos por queries. - Pontes de ligação descentralizadas e livres de confiança, melhorando o desempenho do `indexer-service` para grupos de remetentes.
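The aggregation layer listed among those features can be sketched in a few lines; the function and its shape are hypothetical, not the `tap_core` API:

```python
def aggregate_rav(previous_rav_value, receipt_values):
    """Fold a batch of per-query receipt values into a new RAV (Receipt
    Aggregate Voucher) whose value supersedes the previous one; only the
    latest RAV needs to be verified and redeemed on-chain."""
    new_value = previous_rav_value + sum(receipt_values)
    if new_value < previous_rav_value:
        raise ValueError("an updated RAV can never be worth less than the old one")
    return new_value

rav = aggregate_rav(0, [3, 5, 2])  # first RAV covers three receipts
rav = aggregate_rav(rav, [4])      # updating with new receipts grows the value
```

The point of the design is visible even in this toy version: many small receipts collapse into one on-chain verification, which is why the number of transactions drops.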
-## Especificações +### Especificações -O TAP permite que um remetente faça múltiplos pagamentos a um destinatário — os **TAP Receipts** ("Recibos do TAP") — que agrega os pagamentos em um, o **RAV — Receipt Aggregate Voucher** (Prova de Recibos Agregados). Este pagamento agregado pode ser verificado na blockchain, reduzindo o número de transações e simplificando o processo de pagamento. +GraphTally allows a sender to make multiple payments to a receiver, called **Receipts**, which are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. Para cada query, a ponte de ligação enviará um `signed receipt` ("recibo assinado") para armazenar na sua base de dados. Estes queries serão então agregados por um `tap-agent` através de uma solicitação. Depois, você receberá um RAV. Para atualizar um RAV, envie-o com novos recibos para gerar um novo RAV com valor maior. @@ -59,14 +59,14 @@ Tudo será executado automaticamente enquanto `tap-agent` e `indexer-agent` fore | Signatários | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Agregador | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requisitos +### Pré-requisitos -Além dos requisitos típicos para executar um indexador, é necessário um endpoint `tap-escrow-subgraph` para fazer queries de atualizações do TAP. É possível usar o The Graph Network para fazer queries ou se hospedar no seu `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host it yourself on your `graph-node`.
- [Subgraph do TAP do The Graph — Arbitrum Sepolia (para a testnet do The Graph)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) - [Subgraph do TAP do The Graph (para a mainnet do The Graph)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Nota: o `indexer-agent` atualmente não executa o indexamento deste subgraph como faz com o lançamento de subgraphs da rede. Portanto, ele deve ser anexado manualmente. +> Nota: o `indexer-agent` atualmente não executa a indexação deste subgraph como faz com a implantação de subgraphs da rede. Portanto, ela deve ser anexada manualmente. ## Guia de migração @@ -79,7 +79,7 @@ O software necessário está [aqui](https://github.com/graphprotocol/indexer/blo 1. **Agente Indexador** - Siga o [mesmo processo](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Insira o novo argumento `--tap-subgraph-endpoint` para ativar os novos caminhos de código e ativar o resgate de RAVs do TAP. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Serviço Indexador** @@ -99,14 +99,14 @@ O software necessário está [aqui](https://github.com/graphprotocol/indexer/blo Para o mínimo de configuração, veja o exemplo abaixo: ```bash -# Você deve mudar *todos* os valores abaixo para mudar sua configuração. +# Mude *todos* os valores abaixo para combinar com a sua configuração. # -# O abaixo inclui valores globais da Graph Network, como visto aqui: +# A config abaixo inclui valores globais da graph network, conforme aqui: # # -# Fica a dica: se precisar carregar alguns variáveis do ambiente nesta configuração, você -# pode substituí-los com variáveis do ambiente. 
Por exemplo: pode-se substituir -# o abaixo por [PREFIX]_DATABASE_POSTGRESURL, onde PREFIX pode ser `INDEXER_SERVICE` ou `TAP_AGENT`: +# Fica a dica: se precisar carregar alguns valores do ambiente nesta config, você +# pode reescrever com variáveis de ambiente. Por exemplo, dá para trocar o seguinte +# com [PREFIX]_DATABASE_POSTGRESURL, onde PREFIX pode ser `INDEXER_SERVICE` ou `TAP_AGENT`: # # [database] # postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" @@ -116,56 +116,56 @@ indexer_address = "0x1111111111111111111111111111111111111111" operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" [database] -# A URL da base de dados Postgres usada para os componentes do indexador. -# A mesma base de dados usada pelo `indexer-agent`. Espera-se que o `indexer-agent` -# criará as tabelas necessárias. +# A URL do banco de dados Postgres usada para os componentes do indexador; o mesmo +# banco usado pelo `indexer-agent`. Espera-se que o `indexer-agent` crie +# as tabelas necessárias. postgres_url = "postgres://postgres@postgres:5432/postgres" [graph_node] -# URL to your graph-node's query endpoint +# URL para o endpoint de queries do seu graph-node query_url = "" -# URL to your graph-node's status endpoint +# URL para o endpoint de estado do seu graph-node status_url = "" [subgraphs.network] -# URL de query pro subgraph do Graph Network. +# URL de Query para o Subgraph da Graph Network query_url = "" -# Opcional, procure o lançamento no `graph-node` local, se localmente indexado. -# Vale a pena indexar o subgraph localmente. -# NOTA: Usar apenas `query_url` ou `deployment_id` +# Opcional, implantação para buscar no `graph-node` local, se indexada localmente. +# Recomenda-se indexar o Subgraph localmente. +# IMPORTANTE: Só use `query_url` ou `deployment_id` deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph.
+# URL de Query para o Subgraph da Escrow query_url = "" -# Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. -# NOTE: Use `query_url` or `deployment_id` only +# Opcional, implantação para buscar no `graph-node` local, se indexada localmente. +# Recomenda-se indexar o Subgraph localmente. +# IMPORTANTE: Só use `query_url` ou `deployment_id` deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [blockchain] -# ID de chain da rede que está a executar o Graph Network +# A ID de chain da rede a executar o Graph Network chain_id = 1337 -# Endereço de contrato do verificador de prova de agregação de recibos do TAP. +# Endereço de contrato do verificador de RAV (Prova de Recibos Agregados) do TAP. receipts_verifier_address = "0x2222222222222222222222222222222222222222" ######################################## -# Configurações específicas para o tap-agent # +# Configurações específicas ao tap-agent # ######################################## [tap] -# Esta é a quantia de taxas que você está disposto a arriscar. Por exemplo: -# se o remetente parar de enviar RAVs por tempo suficiente e as taxas passarem -# desta quantia, o indexer-service não aceitará mais queries deste remetente -# até que as taxas sejam agregadas. -# NOTA: Use strings para valores decimais, para evitar erros de arredondamento -# Por exemplo: -# max_amount_willing_to_lose_grt = "0,1" +# Esta é a quantia de taxas que você pode arriscar a qualquer momento. Por exemplo, +# se o remetente parar de fornecer RAVs por tempo suficiente e as taxas excederem +# essa quantia, o serviço indexador vai parar de aceitar queries do remetente +# até as taxas serem agregadas.
+# IMPORTANTE: Use strings para valores decimais, para evitar erros de arredondamento +# ex.: +# max_amount_willing_to_lose_grt = "0.1" max_amount_willing_to_lose_grt = 20 [tap.sender_aggregator_endpoints] # Valor-Chave de todos os remetentes e seus endpoints agregadores -# Por exemplo, o abaixo é para a ponte de ligação do testnet Edge & Node. +# Por exemplo, abaixo está o valor para o gateway da testnet do Edge & Node. 0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" ``` Notas: diff --git a/website/src/pages/pt/indexing/tooling/graph-node.mdx b/website/src/pages/pt/indexing/tooling/graph-node.mdx index 370538b94e34..3d8d68eca293 100644 --- a/website/src/pages/pt/indexing/tooling/graph-node.mdx +++ b/website/src/pages/pt/indexing/tooling/graph-node.mdx @@ -2,7 +2,7 @@ title: Graph Node --- -O Node do The Graph (Graph Node) é o componente que indexa subgraphs e disponibiliza os dados resultantes a queries (consultas de dados) através de uma API GraphQL. Assim, ele é central ao stack dos indexers, e é crucial fazer operações corretas com um node Graph para executar um indexer com êxito. +O Graph Node é o componente que indexa subgraphs e disponibiliza os dados resultantes a queries (consultas de dados) através de uma API GraphQL. Assim, ele é central ao stack dos indexers, e é crucial fazer operações corretas com um Graph Node para executar um indexador com êxito. Isto fornece um resumo contextual do Graph Node e algumas das opções mais avançadas disponíveis para indexadores. Para mais instruções e documentação, veja o [repositório do Graph Node](https://github.com/graphprotocol/graph-node). @@ -26,15 +26,15 @@ Enquanto alguns subgraphs exigem apenas um node completo, alguns podem ter recur ### Nodes IPFS -Os metadados de lançamento de subgraph são armazenados na rede IPFS.
O Graph Node acessa primariamente o node IPFS durante o lançamento do subgraph, para retirar o manifest e todos os arquivos ligados. Os indexadores de rede não precisam hospedar seu próprio node IPFS. Um node IPFS para a rede é hospedado em https://ipfs.network.thegraph.com. +Os metadados de implantação de subgraph são armazenados na rede IPFS. O Graph Node acessa primariamente o node IPFS durante a implantação do subgraph, para retirar o manifest e todos os arquivos ligados. Os indexadores de rede não precisam hospedar seu próprio node IPFS. Um node IPFS para a rede é hospedado em https://ipfs.network.thegraph.com. ### Servidor de métricas Prometheus O Graph Node pode, opcionalmente, logar métricas a um servidor de métricas Prometheus para permitir funções de relatórios e monitoramento. -### Getting started from source +### Começando da fonte -#### Install prerequisites +#### Pré-requisitos para a instalação - **Rust** @@ -42,15 +42,15 @@ O Graph Node pode, opcionalmente, logar métricas a um servidor de métricas Pro - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Requisitos Adicionais para utilizadores de Ubuntu** — A execução de um Graph Node no Ubuntu pode exigir pacotes adicionais. ```sh sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### Configuração -1. Start a PostgreSQL database server +1. Inicie um servidor de banco de dados PostgreSQL ```sh initdb -D .postgres @@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Clone o repositório do [Graph Node](https://github.com/graphprotocol/graph-node) e execute `cargo build` para compilar o código-fonte -3. Now that all the dependencies are setup, start the Graph Node: +3.
Agora que todas as dependências estão configuradas, inicialize o Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -77,19 +77,19 @@ Veja um exemplo completo de configuração do Kubernetes no [repositório do ind Durante a execução, o Graph Node expõe as seguintes portas: -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | Servidor HTTP GraphQL
(para queries de subgraph) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | WS GraphQL
(para inscrições a subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(para gerir implantações) | / | \--admin-port | - | +| 8030 | API de estado de indexação do subgraph | /graphql | \--index-node-port | - | +| 8040 | Métricas Prometheus | /metrics | \--metrics-port | - | > **Importante**: Cuidado ao expor portas publicamente — **portas de administração** devem ser trancadas a sete chaves. Isto inclui o endpoint JSON-RPC do Graph Node. ## Configurações avançadas do Graph Node -Basicamente, o Graph Node pode ser operado com uma única instância de Graph Node, um único banco de dados PostgreSQP, e os clientes de rede como exigidos pelos subgraphs a serem indexados. +Basicamente, o Graph Node pode ser operado com uma única instância de Graph Node, um único banco de dados PostgreSQL, e os clientes de rede conforme exigidos pelos subgraphs a serem indexados. Este setup pode ser escalado horizontalmente, com a adição de vários Graph Nodes e bancos de dados para apoiá-los. Utilizadores mais avançados podem tomar vantagem de algumas das capacidades de escala horizontal do Graph Node, assim como algumas das opções de configuração mais avançadas, através do arquivo `config.toml` e as variáveis de ambiente do Graph Node. @@ -114,13 +114,13 @@ A documentação completa do `config.toml` pode ser encontrada nos [documentos d #### Múltiplos Graph Nodes -A indexação de Graph Nodes pode ser escalada horizontalmente, com a execução de várias instâncias de Graph Node para separar indexação de queries em nodes diferentes. Isto é possível só com a execução de Graph Nodes, configurados com um `node_id` diferente na inicialização (por ex. no arquivo Docker Compose), que pode então ser usado no arquivo `config.toml` para especificar [nodes dedicados de query](#dedicated-query-nodes), [ingestores de blocos](#dedicated-block-ingestion") e separar subgraphs entre nódulos com [regras de lançamento](#deployment-rules). 
+A indexação de Graph Nodes pode ser escalada horizontalmente, com a execução de várias instâncias de Graph Node para separar a indexação dos queries em nodes diferentes. Isto é possível só com a execução de Graph Nodes, configurados com um `node_id` diferente na inicialização (por ex. no arquivo Docker Compose), que pode então ser usado no arquivo `config.toml` para especificar [nodes dedicados de query](#dedicated-query-nodes), [ingestores de blocos](#dedicated-block-ingestion) e separar subgraphs entre nódulos com [regras de implantação](#deployment-rules).

> Note que vários Graph Nodes podem ser configurados para usar o mesmo banco de dados — que, por conta própria, pode ser escalado horizontalmente através do sharding.

#### Regras de lançamento

-Levando em conta vários Graph Nodes, é necessário gerir o lançamento de novos subgraphs para que o mesmo subgraph não seja indexado por dois nodes diferentes, o que levaria a colisões. Isto é possível regras de lançamento, que também podem especificar em qual `shard` os dados de um subgraph devem ser armazenados, caso seja usado o sharding de bancos de dados. As regras de lançamento podem combinar com o nome do subgraph e com a rede que o lançamento indexa para fazer uma decisão.
+Levando em conta vários Graph Nodes, é necessário gerir a implantação de novos subgraphs para que o mesmo subgraph não seja indexado por dois nodes diferentes, o que levaria a colisões. Isto é possível com regras de implantação, que também podem especificar em qual `shard` os dados de um subgraph devem ser armazenados, caso seja usado o sharding de bancos de dados. As regras de implantação podem combinar com o nome do subgraph e com a rede que a implantação indexa para fazer uma decisão.
Exemplo de configuração de regra de lançamento:

@@ -132,13 +132,13 @@ shard = "vip"
indexers = [ "index_node_vip_0", "index_node_vip_1" ]

[[deployment.rule]]
match = { network = "kovan" }
-# No shard, so we use the default shard called 'primary'
+# Sem shard, então usamos o shard padrão chamado 'primary'
indexers = [ "index_node_kovan_0" ]

[[deployment.rule]]
match = { network = [ "xdai", "poa-core" ] }
indexers = [ "index_node_other_0" ]

[[deployment.rule]]
-# There's no 'match', so any subgraph matches
+# Não tem 'match', então qualquer subgraph combina
shards = [ "sharda", "shardb" ]
indexers = [
    "index_node_community_0",

@@ -167,7 +167,7 @@ Qualquer node cujo --node-id combina com a expressão regular será programado p

Para a maioria dos casos de uso, um único banco de dados Postgres é suficiente para apoiar uma instância de graph-node. Quando uma instância de graph-node cresce mais que um único banco Postgres, é possível dividir o armazenamento dos dados do graph-node entre múltiplos bancos Postgres. Todos os bancos de dados, juntos, formam o armazenamento da instância do graph-node. Cada banco de dados individual é chamado de shard.

-Os shards servem para dividir lançamentos de subgraph em múltiplos bancos de dados, e podem também ser configurados para usar réplicas a fim de dividir a carga de query entre bancos de dados. Isto inclui a configuração do número de conexões disponíveis do banco que cada `graph-node` deve manter em seu pool de conexão para cada banco, o que fica cada vez mais importante conforme são indexados mais subgraphs.
+Os shards servem para dividir implantações de subgraph em múltiplos bancos de dados, e podem também ser configurados para usar réplicas a fim de dividir a carga de query entre bancos de dados. Isto inclui a configuração do número de conexões disponíveis do banco que cada `graph-node` deve manter em seu pool de conexão para cada banco, o que fica cada vez mais importante conforme são indexados mais subgraphs.
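Reviewer note on the sharding hunk above: it mentions per-shard connection pools, so a minimal `config.toml` store section may help readers of this change. This is a sketch only — the database URLs, shard name `vip`, and pool sizes are hypothetical; the authoritative schema is in the graph-node `config.toml` docs:

```toml
# Sketch: a "primary" shard (required) plus one hypothetical extra shard.
# Each shard has its own Postgres connection string and connection pool size.
[store]
[store.primary]
connection = "postgresql://graph:s3cret@primary-db:5432/graph"
pool_size = 10

[store.vip]
connection = "postgresql://graph:s3cret@vip-db:5432/graph"
pool_size = 50
```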
O sharding torna-se útil quando o seu banco de dados existente não aguenta o peso do Graph Node, e quando não é mais possível aumentar o tamanho do banco.

@@ -225,11 +225,11 @@ Os utilizadores a operar um setup de indexing escalado, com configurações avan

### Como gerir o Graph Node

-Dado um Graph Node (ou Nodes!) em execução, o desafio torna-se gerir subgraphs lançados entre estes nodes. O Graph Node tem uma gama de ferramentas para ajudar a direção de subgraphs.
+Com um Graph Node (ou Nodes!) em execução, o desafio torna-se gerir subgraphs lançados entre estes nodes. O Graph Node tem uma gama de ferramentas para ajudar a direção de subgraphs.

#### Logging

-Os logs do Graph Node podem fornecer informações úteis, para debug e otimização — do Graph Node e de subgraphs específicos. O Graph Node apoia níveis diferentes de logs através da variável de ambiente `GRAPH_LOG`, com os seguintes níveis: `error`, `warn`, `info`, `debug` ou `trace`.
+Os registos do Graph Node podem fornecer informações úteis, para debug e otimização — do Graph Node e de subgraphs específicos. O Graph Node apoia níveis diferentes de logs através da variável de ambiente `GRAPH_LOG`, com os seguintes níveis: `error`, `warn`, `info`, `debug` ou `trace`.

Além disto, configurar o `GRAPH_LOG_QUERY_TIMING` para `gql` fornece mais detalhes sobre o processo de queries no GraphQL (porém, isto criará um grande volume de logs).

@@ -263,7 +263,7 @@ Há três partes separadas no processo de indexação:

- Processar eventos conforme os handlers apropriados (isto pode envolver chamar a chain para o estado, e retirar dados do armazenamento)
- Escrever os dados resultantes ao armazenamento

-Estes estágios são segmentados (por ex., podem ser executados em paralelo), mas são dependentes um no outro. Quando há demora em indexar, a causa depende do subgraph específico.
+Estes estágios são segmentados (por ex., podem ser executados em paralelo), porém dependentes um no outro. Quando há demora em indexar, a causa depende do subgraph específico.

Causas comuns de lentidão na indexação:

@@ -276,18 +276,18 @@ Causas comuns de lentidão na indexação:

- Atraso do próprio provedor em relação ao topo da chain
- Atraso em retirar novos recibos do topo da chain do provedor

-As métricas de indexação de subgraph podem ajudar a diagnosticar a causa raiz do atraso na indexação. Em alguns casos, o problema está no próprio subgraph, mas em outros, melhorar provedores de rede, reduzir a contenção no banco de dados, e outras melhorias na configuração podem aprimorar muito o desempenho da indexação.
+As métricas de indexação de subgraph podem ajudar a diagnosticar a causa raiz do atraso na indexação. Em alguns casos, o problema está no próprio subgraph; mas em outros, melhorar provedores de rede, reduzir a contenção no banco de dados, e outras melhorias na configuração podem aprimorar muito o desempenho da indexação.

#### Subgraphs falhos

-É possível que subgraphs falhem durante a indexação, caso encontrem dados inesperados; algum componente não funcione como o esperado; ou se houver algum bug nos handlers de eventos ou na configuração. Geralmente, há dois tipos de falha:
+É possível que subgraphs falhem durante a indexação, caso encontrem dados inesperados; algum componente não funcione como o esperado; ou se houver algum bug nos handlers de eventos ou na configuração. Geralmente, há dois tipos gerais de falha:

- Falhas determinísticas: Falhas que não podem ser resolvidas com outras tentativas
- Falhas não determinísticas: podem ser resumidas em problemas com o provedor ou algum erro inesperado no Graph Node. Quando ocorrer uma falha não determinística, o Graph Node reiniciará os handlers falhos e recuará gradualmente.

Em alguns casos, uma falha pode ser resolvida pelo indexador (por ex. a indexação falhou por ter o tipo errado de provedor, e necessita do correto para continuar). Porém, em outros, é necessária uma alteração no código do subgraph.
-> Falhas determinísticas são consideradas "finais", com uma Prova de Indexação (POI) gerada para o bloco falho; falhas não determinísticas não são finais, como há chances do subgraph superar a falha e continuar a indexar. Às vezes, o rótulo de "não determinístico" é incorreto e o subgraph não tem como melhorar do erro; estas falhas devem ser relatadas como problemas no repositório do Graph Node.
+> Falhas determinísticas são consideradas "finais", com uma Prova de Indexação (POI) gerada para o bloco falho; falhas não determinísticas não são finais, já que há chances do subgraph superar a falha e continuar a indexar. Às vezes, o rótulo de "não determinístico" é incorreto e o subgraph não tem como melhorar do erro; estas falhas devem ser relatadas como problemas no repositório do Graph Node.

#### Cache de blocos e chamadas

@@ -304,7 +304,7 @@ Caso haja uma suspeita de inconsistência no cache de blocos, como a falta de um

#### Erros e problemas de query

-Quando um subgraph for indexado, os indexadores podem esperar servir consultas através do endpoint dedicado de consultas do subgraph. Se o indexador espera servir volumes significantes de consultas, é recomendado um node dedicado a queries; e para volumes muito altos, podem querer configurar réplicas de shard para que os queries não impactem o processo de indexação.
+Depois que um subgraph for indexado, os indexadores podem esperar servir queries através do endpoint dedicado de queries do subgraph. Se o indexador espera servir volumes significantes de query, é recomendado um node dedicado a queries; e para volumes muito altos de queries, vale a pena configurar réplicas de shard para que os queries não impactem o processo de indexação.

Porém, mesmo com um node dedicado a consultas e réplicas deste, certos queries podem demorar muito para executar; em alguns casos, aumentam o uso da memória e pioram o tempo de query para outros utilizadores.
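Reviewer note on the dedicated query nodes referenced in the hunks above: in `config.toml` they are selected by a regular expression over node IDs. A minimal sketch (the node-ID pattern is hypothetical; any node whose `--node-id` matches it will only serve queries and never index):

```toml
# Sketch: nodes named query_node_0, query_node_1, ... become query-only nodes
[general]
query = "query_node_.*"
```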
@@ -342,4 +342,4 @@ Para subgraphs parecidos com o Uniswap, as tábuas `pair` e `token` são ótimas

> Esta é uma funcionalidade nova, que estará disponível no Graph Node 0.29.x

-Em certo ponto, o indexador pode querer remover um subgraph. É só usar o `graphman drop`, que apaga um lançamento e todos os seus dados indexados. O lançamento pode ser especificado como o nome de um subgraph, um hash IPFS `Qm..`, ou o namespace de banco de dados `sgdNNN`. Mais documentos sobre o processo [aqui](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+Em certo ponto, o indexador pode querer remover um subgraph. É só usar o `graphman drop`, que apaga uma implantação e todos os seus dados indexados. A implantação pode ser especificada como o nome de um subgraph, um hash IPFS `Qm..`, ou o namespace de banco de dados `sgdNNN`. Mais documentos sobre o processo [aqui](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).

diff --git a/website/src/pages/pt/indexing/tooling/graphcast.mdx b/website/src/pages/pt/indexing/tooling/graphcast.mdx
index e57b6b206900..84aa40b24cd5 100644
--- a/website/src/pages/pt/indexing/tooling/graphcast.mdx
+++ b/website/src/pages/pt/indexing/tooling/graphcast.mdx
@@ -11,7 +11,7 @@ Atualmente, o custo de transmitir informações para outros participantes de red

O SDK (Kit de Programação de Software) do Graphcast permite aos programadores construir Rádios, que são aplicativos movidos a mexericos, que os Indexers podem executar por um certo propósito. Nós também pretendemos criar alguns Rádios (ou oferecer apoio para outros programadores/outras equipas que desejam construir Rádios) para os seguintes casos de uso:

- Verificação em tempo real de integridade dos dados de um subgraph ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
-- Condução de leilões e coordenação para a sincronização de subgraphs, substreams e dados do Firehose de outros Indexers.
+- Condução de leilões e coordenação para a sincronização de subgraphs, substreams, e dados do Firehose de outros Indexadores.
- Autorrelatos em analíticas ativas de queries, inclusive volumes de pedidos de subgraphs, volumes de taxas, etc.
- Autorrelatos em analíticas de indexação, como tempo de indexação de subgraphs, custos de gas de handlers, erros encontrados, etc.
- Autorrelatos em informações de stack, incluindo versão do graph-node, versão do Postgres, versão do cliente Ethereum, etc.

diff --git a/website/src/pages/pt/resources/_meta-titles.json b/website/src/pages/pt/resources/_meta-titles.json
index f5971e95a8f6..f6b3ef905da1 100644
--- a/website/src/pages/pt/resources/_meta-titles.json
+++ b/website/src/pages/pt/resources/_meta-titles.json
@@ -1,4 +1,4 @@
{
-  "roles": "Additional Roles",
-  "migration-guides": "Migration Guides"
+  "roles": "Funções Adicionais",
+  "migration-guides": "Guias de Migração"
}

diff --git a/website/src/pages/pt/resources/benefits.mdx b/website/src/pages/pt/resources/benefits.mdx
index 536f02bd4a05..a534aa140070 100644
--- a/website/src/pages/pt/resources/benefits.mdx
+++ b/website/src/pages/pt/resources/benefits.mdx
@@ -34,7 +34,7 @@ Os custos de query podem variar; o custo citado é o normal até o fechamento da

| Tempo de engenharia | $400 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente |
| Queries por mês | Limitadas pelas capabilidades da infra | 100 mil (Plano Grátis) |
| Custo por query | $0 | $0 |
-| Infrastructure | Centralizada | Descentralizada |
+| Infraestrutura | Centralizada | Descentralizada |
| Redundância geográfica | $750+ por node adicional | Incluída |
| Uptime (disponibilidade) | Varia | 99.9%+ |
| Custos mensais totais | $750+ | $0 |

@@ -48,7 +48,7 @@ Os custos de query podem variar; o custo citado é o normal até o fechamento da

| Tempo de engenharia | $800 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente |
| Queries por mês | Limitadas pelas capabilidades da infra | ~3 milhões |
| Custo por query | $0 | $0.00004 |
-| Infrastructure | Centralizada | Descentralizada |
+| Infraestrutura | Centralizada | Descentralizada |
| Custo de engenharia | $200 por hora | Incluída |
| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída |
| Uptime (disponibilidade) | Varia | 99.9%+ |

@@ -64,7 +64,7 @@ Os custos de query podem variar; o custo citado é o normal até o fechamento da

| Tempo de engenharia | $6.000 ou mais por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente |
| Queries por mês | Limitadas pelas capabilidades da infra | Cerca de 30 milhões |
| Custo por query | $0 | $0.00004 |
-| Infrastructure | Centralizada | Descentralizada |
+| Infraestrutura | Centralizada | Descentralizada |
| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída |
| Uptime (disponibilidade) | Varia | 99.9%+ |
| Custos mensais totais | $11.000+ | $1.200 |

@@ -76,9 +76,9 @@ Os custos de query podem variar; o custo citado é o normal até o fechamento da

Reflete o custo ao consumidor de dados. Taxas de query ainda são pagas a Indexadores por queries do Plano Grátis.

-Os custos estimados são apenas para subgraphs na Mainnet do Ethereum — os custos são maiores ao auto-hospedar um graph-node em outras redes. Alguns utilizadores devem atualizar o seu subgraph a uma versão mais recente. Até o fechamento deste texto, devido às taxas de gas do Ethereum, uma atualização custa cerca de 50 dólares. Note que as taxas de gás no [Arbitrum](/archived/arbitrum/arbitrum-faq/) são muito menores que as da mainnet do Ethereum.
+Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet.

-Curar um sinal em um subgraph é um custo opcional, único, e zero-líquido (por ex., $1 mil em um subgraph pode ser curado em um subgraph, e depois retirado — com potencial para ganhar retornos no processo).
+Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process).

## Zero Custos de Preparação e Mais Eficiência Operacional

@@ -90,4 +90,4 @@ A rede descentralizada do The Graph permite que os utilizadores acessem redundâ

Enfim: A Graph Network é mais barata e fácil de usar, e produz resultados melhores comparados à execução local de um graph-node.

-Comece a usar a Graph Network hoje, e aprenda como [editar o seu subgraph na rede descentralizada do The Graph](/subgraphs/quick-start/).
+Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/).

diff --git a/website/src/pages/pt/resources/glossary.mdx b/website/src/pages/pt/resources/glossary.mdx
index 4660c4d00ecf..d075e63e2c25 100644
--- a/website/src/pages/pt/resources/glossary.mdx
+++ b/website/src/pages/pt/resources/glossary.mdx
@@ -4,51 +4,51 @@ title: Glossário

- **The Graph:** Um protocolo descentralizado para indexação e query de dados.

-- **Query:** Uma solicitação de dados. No The Graph, um query é uma solicitação por dados de um subgraph que será respondida por um Indexador.
+- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer.

-- **GraphQL:** Uma linguagem de queries para APIs e um runtime (programa de execução) para realizar esses queries com os dados existentes. O The Graph usa a GraphQL para fazer queries de subgraphs.
+- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs.

-- **Endpoint**: Um URL que pode ser usado para fazer queries. O ponto final de execução para o Subgraph Studio é `https://api.studio.thegraph.com/query///`, e o do Graph Explorer é `https://gateway.thegraph.com/api//subgraphs/id/`. O ponto final do Graph Explorer é usado para fazer queries de subgraphs na rede descentralizada do The Graph.
+- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network.

-- **Subgraph:** Uma API aberta que extrai, processa, e guarda dados de uma blockchain para facilitar queries via a GraphQL. Os programadores podem construir, lançar, e editar subgraphs na The Graph Network. Indexado, o subgraph está sujeito a queries por quem quiser solicitar.
+- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone.

- **Indexador**: Um participante da rede que executa nodes de indexação para indexar dados de blockchains e servir queries da GraphQL.

- **Fluxos de Receita de Indexadores:** Os Indexadores são recompensados em GRT com dois componentes: Rebates de taxa de query e recompensas de indexação.

-  1. **Rebates de Taxa de Query**: Pagamentos de consumidores de subgraphs por servir queries na rede.
+  1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network.

-  2. **Recompensas de Indexação**: São recebidas por Indexadores por indexar subgraphs, e geradas via a emissão anual de 3% de GRT.
+  2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.

- \*\*Auto-Stake (Stake Próprio) do Indexador: A quantia de GRT que os Indexadores usam para participar na rede descentralizada. A quantia mínima é 100.000 GRT, e não há limite máximo.

- **Capacidade de Delegação**: A quantia máxima de GRT que um Indexador pode aceitar dos Delegantes. Os Indexadores só podem aceitar até 16 vezes o seu Auto-Stake, e mais delegações resultam em recompensas diluídas. Por exemplo: se um Indexador tem um Auto-Stake de 1 milhão de GRT, a capacidade de delegação é 16 milhões. Porém, os Indexadores só podem aumentar a sua Capacidade de Delegação se aumentarem também o seu Auto-Stake.

-- **Indexador de Atualizações**: Um Indexador de reserva para queries não servidos por outros Indexadores na rede. Este Indexador não compete com outros Indexadores.
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.

-- **Delegante:** Um participante da rede que possui GRT e delega uma quantia para Indexadores, permitindo que esses aumentem o seu stake em subgraphs. Em retorno, os Delegantes recebem uma porção das Recompensas de Indexação recebidas pelos Indexadores por processar subgraphs.
+- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs.

- **Taxa de Delegação**: Uma taxa de 0,5% paga por Delegantes quando delegam GRT a Indexadores. O GRT usado para pagar a taxa é queimado.

-- **Curador:** Um participante da rede que identifica subgraphs de qualidade e sinaliza GRT para eles em troca de ações de curadoria. Quando os Indexadores resgatam as taxas de query em um subgraph, 10% é distribuído para os Curadores desse subgraph. Há uma correlação positiva entre a quantia de GRT sinalizada e o número de Indexadores a indexar um subgraph.
+- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph.

-- \*\*Taxa de Curadoria: Uma taxa de 1% paga pelos Curadores quando sinalizam GRT em subgraphs. O GRT usado para pagar a taxa é queimado.
+- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned.

-- Consumidor de Dados: Qualquer aplicativo ou utilizador que faz queries para um subgraph.
+- **Data Consumer**: Any application or user that queries a Subgraph.

-- \*\*Programador de Subgraph: Um programador que constrói e lança um subgraph à rede descentralizada do The Graph.
+- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network.

-- **Manifest de Subgraph:** Um arquivo YAML que descreve o schema, fontes de dados, e outros metadados de um subgraph. [Veja um exemplo](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml).
+- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.

- **Epoch:** Uma unidade de tempo na rede. Um epoch atualmente dura 6.646 blocos, ou cerca de um dia.

-- \*\*Alocação: Um Indexador pode alocar o seu stake total em GRT (incluindo o stake dos Delegantes) em subgraphs editados na rede descentralizada do The Graph. As alocações podem ter estados diferentes:
+- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:

-  1. **Ativa:** Uma alocação é considerada ativa quando é criada on-chain. Isto se chama abrir uma alocação, e indica à rede que o Indexador está a indexar e servir consultas ativamente para um subgraph particular. Alocações ativas acumulam recompensas de indexação proporcionais ao sinal no subgraph, e à quantidade de GRT alocada.
+  1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated.

-  2. **Fechada**: Um Indexador pode resgatar as recompensas acumuladas em um subgraph selecionado ao enviar uma Prova de Indexação (POI) recente e válida. Isto se chama "fechar uma alocação". Uma alocação deve ter ficado aberta por, no mínimo, um epoch antes que possa ser fechada. O período máximo de alocação é de 28 epochs; se um indexador deixar uma alocação aberta por mais que isso, ela se torna uma alocação obsoleta. Quando uma alocação está **Fechada**, um Pescador ainda pode abrir uma disputa contra um Indexador por servir dados falsos.
+  2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.

-- **Subgraph Studio**: um dApp (aplicativo descentralizado) poderoso para a construção, lançamento e edição de subgraphs.
+- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs.

- **Pescadores**: Um papel na Graph Network cumprido por participantes que monitoram a precisão e integridade dos dados servidos pelos Indexadores. Quando um Pescador identifica uma resposta de query ou uma POI que acreditam ser incorreta, ele pode iniciar uma disputa contra o Indexador. Se a disputa der um veredito a favor do Pescador, o Indexador é cortado, ou seja, perderá 2.5% do seu auto-stake de GRT. Desta quantidade, 50% é dado ao Pescador como recompensa pela sua vigilância, e os 50% restantes são retirados da circulação (queimados). Este mecanismo é desenhado para encorajar Pescadores a ajudar a manter a confiança na rede ao garantir que Indexadores sejam responsabilizados pelos dados que providenciam.

@@ -56,28 +56,28 @@ title: Glossário

- Corte: Os Indexadores podem tomar cortes no seu self-stake de GRT por fornecer uma prova de indexação (POI) incorreta ou servir dados imprecisos. A percentagem de corte é um parâmetro do protocolo, atualmente configurado em 2,5% do auto-stake de um Indexador. 50% do GRT cortado vai ao Pescador que disputou os dados ou POI incorretos. Os outros 50% são queimados.

-- **Recompensas de Indexação**: As recompensas que os Indexadores recebem por indexar subgraphs, distribuídas em GRT.
+- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT.

- **Recompensas de Delegação**: As recompensas que os Delegantes recebem por delegar GRT a Indexadores, distribuídas em GRT.

- **GRT**: O token de utilidade do The Graph, que oferece incentivos económicos a participantes da rede por contribuir.
-- **POI (Prova de Indexação)**: Quando um Indexador fecha a sua alocação e quer resgatar as suas recompensas de indexação acumuladas em um certo subgraph, ele deve apresentar uma Prova de Indexação (POI) válida e recente. Os Pescadores podem disputar a POI providenciada por um Indexador; disputas resolvidas a favor do Pescador causam um corte para o Indexador.
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.

-- **Graph Node**: O componente que indexa subgraphs e disponibiliza os dados resultantes abertos a queries através de uma API GraphQL. Assim, ele é essencial ao stack de indexadores, e operações corretas de um Graph Node são cruciais para executar um indexador com êxito.
+- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.

-- **Agente de Indexador**: Parte do stack do indexador. Ele facilita as interações do Indexer on-chain, inclusive registos na rede, gestão de lançamentos de Subgraph ao(s) seu(s) Graph Node(s), e gestão de alocações.
+- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.

- **The Graph Client**: Uma biblioteca para construir dApps baseados em GraphQL de maneira descentralizada.

-- **Graph Explorer**: Um dApp desenhado para que participantes da rede explorem subgraphs e interajam com o protocolo.
+- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol.

- **Graph CLI**: Uma ferramenta de interface de comando de linha para construções e lançamentos no The Graph.

- **Período de Recarga**: O tempo restante até que um Indexador que mudou os seus parâmetros de delegação possa fazê-lo novamente.

-- Ferramentas de Transferência para L2: Contratos inteligentes e interfaces que permitem que os participantes na rede transfiram ativos relacionados à rede da mainnet da Ethereum ao Arbitrum One. Os participantes podem transferir GRT delegado, subgraphs, ações de curadoria, e o Auto-Stake do Indexador.
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake.

-- **Atualização de um subgraph**: O processo de lançar uma nova versão de subgraph com atualizações ao manifest, schema e mapeamentos do subgraph.
+- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings.

-- **Migração**: O processo de movimentar ações de curadoria da versão antiga de um subgraph à versão nova do mesmo (por ex., quando a v.0.0.1 é atualizada à v.0.0.2).
+- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2).
diff --git a/website/src/pages/pt/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/pt/resources/migration-guides/assemblyscript-migration-guide.mdx index 165055c46822..436e74de6f60 100644 --- a/website/src/pages/pt/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/pt/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: Guia de Migração do AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Isto permitirá que os programadores de subgraph usem recursos mais novos da linguagem AS e da sua biblioteca normal. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Recursos @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## Como atualizar? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. 
Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -Se não tiver certeza de qual escolher, é sempre bom usar a versão segura. Se o valor não existir, pode fazer uma declaração if precoce com um retorno no seu handler de subgraph. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to do an early if statement with a return in your Subgraph handler. ### Sombreamento Variável @@ -132,7 +132,7 @@ Renomeie as suas variáveis duplicadas, se tinha o sombreamento variável. ### Comparações de Nulos -Ao fazer a atualização no seu subgraph, às vezes aparecem erros como este: +When doing the upgrade on your Subgraph, you might sometimes get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // não dá erros de tempo de compilação como deveria ``` -Nós abrimos um problema no compilador AssemblyScript para isto, mas por enquanto, se fizer estes tipos de operações nos seus mapeamentos de subgraph, vale mudá-las para fazer uma checagem de anulação antes delas. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ```
Tenha certeza de que o seu subgraph inicializou os seus valores, como assim: +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/pt/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/pt/resources/migration-guides/graphql-validations-migration-guide.mdx index 7b94db58a11d..bc4ee8d90619 100644 --- a/website/src/pages/pt/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/pt/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Guia de migração de Validações GraphQL +title: GraphQL Validations Migration Guide --- Em breve, o `graph-node` apoiará a cobertura total da [especificação de Validações GraphQL](https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ Para cumprir tais validações, por favor siga o guia de migração. Pode usar a ferramenta de migração em CLI para encontrar e consertar quaisquer problemas nas suas operações no GraphQL. De outra forma, pode atualizar o endpoint do seu cliente GraphQL para usar o endpoint `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Testar os seus queries perante este endpoint ajudará-lhe a encontrar os problemas neles presentes. -> Nem todos os Subgraphs precisam ser migrados; se usar o [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) ou o [Gerador de Código GraphQL](https://the-guild.dev/graphql/codegen), eles já garantirão que os seus queries sejam válidos. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## Ferramenta CLI de migração diff --git a/website/src/pages/pt/resources/roles/curating.mdx b/website/src/pages/pt/resources/roles/curating.mdx index 582a7926b9ee..0bdc3248b7be 100644 --- a/website/src/pages/pt/resources/roles/curating.mdx +++ b/website/src/pages/pt/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curadorias --- -Curadores são importantes para a economia descentralizada do The Graph. Eles utilizam o seu conhecimento do ecossistema web3 para avaliar e sinalizar nos subgraphs que devem ser indexados pela Graph Network. Através do Graph Explorer, Curadores visualizam dados de rede para tomar decisões sobre sinalizações. Em troca, a Graph Network recompensa Curadores que sinalizam em subgraphs de alta qualidade com uma parte das taxas de query geradas por estes subgraphs. A quantidade de GRT sinalizada é uma das considerações mais importantes para Indexadores ao determinar quais subgraphs indexar. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## O que a Sinalização Significa para a Graph Network? -Antes que consumidores possam indexar um subgraph, ele deve ser indexado. É aqui que entra a curadoria. Para que Indexadores ganhem taxas de query substanciais em subgraphs de qualidade, eles devem saber quais subgraphs indexar. Quando Curadores sinalizam um subgraph, isto diz aos Indexadores que um subgraph está em demanda e tem qualidade suficiente para ser indexado. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. 
In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Os Curadores trazem eficiência à Graph Network, e a [sinalização](#how-to-signal) é o processo que curadores usam para avisar aos Indexadores que um subgraph é bom para indexar. Os Indexadores podem confiar no sinal de um Curador, porque ao sinalizar, os Curadores mintam uma ação de curadoria para o subgraph, o que concede aos Curadores uma porção das futuras taxas de query movidas pelo subgraph. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Sinais de curador são representados como tokens ERC20 chamados de Ações de Curadoria do Graph (GCS). Quem quiser ganhar mais taxas de query devem sinalizar o seu GRT a subgraphs que apostam que gerará um fluxo forte de taxas á rede. Curadores não podem ser cortados por mau comportamento, mas há uma taxa de depósito em Curadores para desincentivar más decisões que possam ferir a integridade da rede. Curadores também ganharão menos taxas de query se curarem um subgraph de baixa qualidade, já que haverão menos queries a processar ou menos Indexadores para processá-las. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -O [Indexador de Atualização do Nascer do Sol](/sunrise/#what-is-the-upgrade-indexer) garante a indexação de todos os subgraphs; sinalizar GRT em um subgraph específico atrairá mais Indexadores a ele. Este incentivo para Indexadores através da curadoria visa melhorar a qualidade do serviço de queries através da redução de latência e do aprimoramento da disponibilidade de rede. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -Ao sinalizar, Curadores podem decidir entre sinalizar numa versão específica do subgraph ou sinalizar com a automigração. Caso sinalizem com a automigração, as ações de um curador sempre serão atualizadas à versão mais recente publicada pelo programador. Se decidirem sinalizar numa versão específica, as ações sempre permanecerão nesta versão específica. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Se precisar de ajuda com a curadoria para melhorar a qualidade do serviço, peça ajuda à equipa da Edge Node em support@thegraph.zendesk.com e especifique os subgraphs com que precisa de assistência.
+If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Os indexadores podem achar subgraphs para indexar com base em sinais de curadoria que veem no Graph Explorer (imagem abaixo). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Subgraphs do Explorer](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Como Sinalizar -Na aba "Curator" (Curador) do Graph Explorer, os curadores podem sinalizar e tirar sinal de certos subgraphs baseados nas estatísticas de rede. [Clique aqui](/subgraphs/explorer/) para um passo-a-passo deste processo no Graph Explorer. +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Um curador pode escolher sinalizar uma versão específica de subgraph, ou pode automaticamente migrar o seu sinal à versão mais recente desse subgraph. Ambas estratégias são válidas, e vêm com as suas próprias vantagens e desvantagens. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Sinalizar numa versão específica serve muito mais quando um subgraph é usado por vários dApps. Um dApp pode precisar atualizar o subgraph regularmente com novos recursos; outro dApp pode preferir usar uma versão mais antiga, porém melhor testada. Na curadoria inicial, é incorrida uma taxa de 1%. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. 
Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Ter um sinal que migra automaticamente à build mais recente de um subgraph pode ser bom para garantir o acúmulo de taxas de consulta. Toda vez que cura, é incorrida uma taxa de 1% de curadoria. Também pagará uma taxa de 0.5% em toda migração. É recomendado que programadores de subgraphs evitem editar novas versões com frequência - eles devem pagar uma taxa de curadoria de 0.5% em todas as ações de curadoria auto-migradas. -> \*\*Nota: O primeiro endereço a sinalizar um subgraph particular é considerado o primeiro curador e deverá realizar tarefas muito mais intensivas em gas do que o resto dos curadores seguintes — porque o primeiro curador inicializa os tokens de ação de curadoria, inicializa o bonding curve, e também transfere tokens no proxy do Graph. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy.
+However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Riscos 1. O mercado de consulta é jovem por natureza no The Graph, e há sempre o risco do seu rendimento anual ser menor que o esperado devido às dinâmicas nascentes do mercado. -2. Taxa de Curadoria - Quando um curador sinaliza GRT em um subgraph, ele incorre uma taxa de curadoria de 1%, que é queimada. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Um subgraph pode falhar devido a um erro de código. Um subgraph falho não acumula taxas de consulta. Portanto, espere até o programador consertar o erro e lançar uma nova versão. - - Caso se inscreva à versão mais recente de um subgraph, suas ações migrarão automaticamente a esta versão nova. Isto incorrerá uma taxa de curadoria de 0.5%. - - Se sinalizou em um subgraph específico e ele falhou, deverá queimar as suas ações de curadoria manualmente. Será então possível sinalizar na nova versão do subgraph, o que incorre uma taxa de curadoria de 1%. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. 
This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Perguntas Frequentes sobre Curadoria ### 1. Qual a % das taxas de query que os Curadores ganham? -Ao sinalizar em um subgraph, ganhará parte de todas as taxas de query geradas pelo subgraph. 10% de todas as taxas de curadoria vão aos Curadores, pro-rata às suas ações de curadoria. Estes 10% são sujeitos à governança. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Como decidir quais subgraphs são de qualidade alta para sinalizar? +### 2. How do I decide which Subgraphs are high quality to signal on? -Achar subgraphs de alta qualidade é uma tarefa complexa, mas o processo pode ser abordado de várias formas diferentes. Como Curador, procure subgraphs confiáveis que movem volumes de query. Um subgraph confiável pode ser valioso se for completo, preciso, e apoiar as necessidades de dados de um dApp. Um subgraph mal arquitetado pode precisar de revisões ou reedições, além de correr risco de falhar. 
É importante que os Curadores verifiquem a arquitetura ou código de um subgraph, para averiguar se ele é valioso. Portanto: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Os curadores podem usar o seu conhecimento de uma rede para tentar adivinhar como um subgraph individual pode gerar um volume maior ou menor de queries no futuro -- Os curadores também devem entender as métricas disponíveis através do Graph Explorer. Métricas como o volume de queries passados e a identidade do programador do subgraph podem ajudar a determinar se um subgraph vale ou não o sinal. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Qual o custo de atualizar um subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrar as suas ações de curadoria a uma nova versão de subgraph incorre uma taxa de curadoria de 1%. Os curadores podem escolher se inscrever na versão mais nova de um subgraph. Quando ações de curadores são automigradas a uma nova versão, os Curadores também pagarão metade da taxa de curadoria, por ex., 0.5%, porque a atualização de subgraphs é uma ação on-chain que custa gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. 
Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. Com que frequência posso atualizar o meu subgraph? +### 4. How often can I update my Subgraph? -Não atualize os seus subgraphs com frequência excessiva. Veja a questão acima para mais detalhes. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Posso vender as minhas ações de curadoria? diff --git a/website/src/pages/pt/resources/roles/delegating/undelegating.mdx b/website/src/pages/pt/resources/roles/delegating/undelegating.mdx index 1c335992bbc7..b2e3239a5ae3 100644 --- a/website/src/pages/pt/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/pt/resources/roles/delegating/undelegating.mdx @@ -1,73 +1,73 @@ --- -title: Undelegating +title: Como Retirar uma Delegação --- -Learn how to withdraw your delegated tokens through [Graph Explorer](https://thegraph.com/explorer) or [Arbiscan](https://arbiscan.io/). +Aprenda como retirar os seus tokens delegados através do [Graph Explorer](https://thegraph.com/explorer) ou [Arbiscan](https://arbiscan.io/). -> To avoid this in the future, it's recommended that you select an Indexer wisely. To learn how to select and Indexer, check out the Delegate section in Graph Explorer. +> Para evitar isso no futuro, recomendamos que tenha cuidado ao escolher um Indexador. Para aprender como selecionar um indexador, confira a seção Delegar no Graph Explorer. -## How to Withdraw Using Graph Explorer +## Como Retirar uma Delegação com o Graph Explorer ### Passo a Passo -1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. +1. Visite o [Graph Explorer](https://thegraph.com/explorer).
Certifique-se de que está no Explorer, e **não** no Subgraph Studio. -2. Click on your profile. You can find it on the top right corner of the page. +2. Clique no seu perfil, no canto superior direito da página. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. + - Verifique se a sua carteira está conectada. Se não estiver, o botão "connect" (conectar) aparecerá no lugar. -3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. +3. Já no seu perfil, clique na aba "Delegating" (Delegação). Nessa aba, é possível visualizar a lista de Indexadores para os quais já delegou. -4. Click on the Indexer from which you wish to withdraw your tokens. +4. Clique no indexador do qual deseja retirar os seus tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. + - Anote o Indexador específico, pois precisará encontrá-lo novamente para fazer a retirada. -5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: +5. Selecione a opção "Undelegate" (Retirar Delegação) nos três pontos ao lado do Indexador, no lado direito. Conforme a imagem abaixo: - ![Undelegate button](/img/undelegate-button.png) + ![Botão de Retirar Delegação](/img/undelegate-button.png) -6. After approximately [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 days), return to the Delegate section and locate the specific Indexer you undelegated from. +6. Após cerca de [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 dias), volte à seção "Delegate" (delegar) e localize o indexador específico do qual retirou a sua delegação. -7.
Após encontrar o Indexador, clique nos três pontos ao lado dele e retire todos os seus tokens. -## How to Withdraw Using Arbiscan +## Como Retirar uma Delegação com o Arbiscan -> This process is primarily useful if the UI in Graph Explorer experiences issues. +> Esse processo é primariamente útil se estiver com problemas na interface do Graph Explorer. ### Passo a Passo -1. Find your delegation transaction on Arbiscan. +1. Encontre a sua transação de delegação no Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) + - Aqui está um [exemplo de transação pelo Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) -2. Navigate to "Transaction Action" where you can find the staking extension contract: +2. Navegue até "Transaction Action" (Ação de Transação), onde poderá encontrar o contrato da extensão de staking: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) + - [Este é o contrato de extensão de staking do exemplo listado acima](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) -3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) +3. Em seguida, clique em "Contract" (Contrato). ![Aba de contrato no Arbiscan, entre NFT Transfers (Transferências de NFT) e Events (Eventos)](/img/arbiscan-contract.png) -4. Scroll to the bottom and copy the Contract ABI. There should be a small button next to it that allows you to copy everything. +4. Role até o final e copie a ABI do Contrato. Deve haver um pequeno botão próximo a ela que permite copiar tudo. -5. Click on your profile button in the top right corner of the page. If you haven't created an account yet, please do so. +5. Clique no seu botão de perfil, no canto superior direito da página. 
Se ainda não criou uma conta, faça isso logo. -6. Once you're in your profile, click on "Custom ABI”. +6. Já no seu perfil, clique em "Custom ABI" (Personalizar ABI). -7. Paste the custom ABI you copied from the staking extension contract, and add the custom ABI for the address: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**sample address**) +7. Cole a ABI personalizada que copiou do contrato da extensão de staking e adicione a ABI personalizada para o endereço: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**endereço de amostra**) -8. Go back to the [staking extension contract](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Now, call the `unstake` function in the [Write as Proxy tab](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), which has been added thanks to the custom ABI, with the number of tokens that you delegated. +8. Volte para o [contrato de extensão de staking](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Agora, chame a função `unstake` na [aba "Write as Proxy" (Escrever como Proxy)](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), que foi adicionada graças à ABI personalizada, com o número de tokens que você delegou. -9. If you don't know how many tokens you delegated, you can call `getDelegation` on the Read Custom tab. You will need to paste your address (delegator address) and the address of the Indexer that you delegated to, as shown in the following screenshot: +9. Se não souber quantos tokens delegou, chame `getDelegation` na aba "Read Custom" (Ler Personalização). Será necessário colar tanto o seu endereço (endereço de delegante) quanto o do indexador para o qual você delegou, conforme na seguinte imagem: - ![Both of the addresses needed](/img/get-delegate.png) + ![Ambos os endereços necessários](/img/get-delegate.png) - - This will return three numbers. 
The first number is the amount you can unstake. + - Isto retornará três números. O primeiro número é a quantidade de staking que você pode retirar. -10. After you have called `unstake`, you can withdraw after approximately 28 epochs (28 days) by calling the `withdraw` function. +10. Após chamar `unstake`, você pode retirar o stake após aproximadamente 28 epochs (28 dias) com a função `withdraw`. -11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: +11. É possível ver o quanto terá disponível para retirar, ao chamar `getWithdrawableDelegatedTokens` no "Read Custom" e repassar a sua tupla de delegação. Veja a imagem abaixo: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Chame `getWithdrawableDelegatedTokens` para ver a quantia de tokens que pode ser retirada](/img/withdraw-available.png) ## Outros Recursos -To delegate successfully, review the [delegating documentation](/resources/roles/delegating/delegating/) and check out the delegate section in Graph Explorer. +Para delegar com êxito, consulte a [documentação de delegação](/resources/roles/delegating/delegating/) e confira a seção de delegação no Graph Explorer. diff --git a/website/src/pages/pt/resources/subgraph-studio-faq.mdx b/website/src/pages/pt/resources/subgraph-studio-faq.mdx index 57c66e49c2e0..161340865f69 100644 --- a/website/src/pages/pt/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/pt/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Perguntas Frequentes do Subgraph Studio ## 1. O que é o Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Como criar uma Chave de API?
@@ -12,20 +12,20 @@ Para criar uma API, navegue até o Subgraph Studio e conecte a sua carteira. Log ## 3. Posso criar várias Chaves de API? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +Sim! Pode criar mais de uma Chave de API para usar em projetos diferentes. Confira [aqui](https://thegraph.com/studio/apikeys/). ## 4. Como restringir um domínio para uma Chave de API? Após criar uma Chave de API, na seção de Segurança (Security), pode definir os domínios que podem consultar uma Chave de API específica. -## 5. Posso transferir meu subgraph para outro dono? +## 5. Can I transfer my Subgraph to another owner? -Sim. Subgraphs editados no Arbitrum One podem ser transferidos para uma nova carteira ou uma Multisig. Para isto, clique nos três pontos próximos ao botão 'Publish' (Publicar) na página de detalhes do subgraph e selecione 'Transfer ownership' (Transferir titularidade). +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note que após a transferência, não poderá mais ver ou alterar o subgraph no Studio. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Se eu não for o programador do subgraph que quero usar, como encontro URLs de query para subgraphs? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. 
+You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Lembre-se que, mesmo se construir um subgraph por conta própria, ainda poderá criar uma chave de API e consultar qualquer subgraph publicado na rede. Estes queries através da nova chave API são pagos, como quaisquer outros na rede. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, just like any other on the network. diff --git a/website/src/pages/pt/resources/tokenomics.mdx b/website/src/pages/pt/resources/tokenomics.mdx index f5994ac88795..5126fa077fec 100644 --- a/website/src/pages/pt/resources/tokenomics.mdx +++ b/website/src/pages/pt/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: A Graph Network é incentivada por uma tokenomia (economia de token ## Visão geral -O The Graph é um protocolo descentralizado que permite acesso fácil a dados de blockchain. Ele indexa dados de blockchain da mesma forma que o Google indexa a web; se já usou um dApp (aplicativo descentralizado) que resgata dados de um subgraph, você provavelmente já interagiu com o The Graph. Hoje, milhares de [dApps populares](https://thegraph.com/explorer) no ecossistema da Web3 usam o The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Especificações @@ -24,9 +24,9 @@ Há quatro participantes primários na rede: 1.
Delegantes — Delegam GRT aos Indexadores e protegem a rede -2. Curadores — Encontram os melhores subgraphs para Indexadores +2. Curators - Find the best Subgraphs for Indexers -3. Programadores — Constroem e consultam subgraphs em queries +3. Developers - Build & query Subgraphs 4. Indexadores — Rede de transporte de dados em blockchain @@ -36,7 +36,7 @@ Pescadores e Árbitros também são integrais ao êxito da rede através de outr ## Delegantes (Ganham GRT passivamente) -Os Delegantes delegam GRT a Indexadores, aumentando o stake do Indexador em subgraphs na rede. Em troca, os Delegantes ganham uma porcentagem de todas as taxas de query e recompensas de indexação do Indexador. Cada Indexador determina a porção que será recompensada aos Delegantes de forma independente, criando competição entre Indexadores para atrair Delegantes. Muitos Indexadores oferecem entre 9 e 12% ao ano. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9% and 12% annually. Por exemplo, se um Delegante delegasse 15.000 GRT a um Indexador que oferecesse 10%, o Delegante receberia cerca de 1.500 GRT em recompensas por ano. @@ -46,25 +46,25 @@ Quem ler isto pode tornar-se um Delegante agora mesmo na [página de participant ## Curadores (Ganham GRT) -Os Curadores identificam subgraphs de alta qualidade e os "curam" (por ex., sinalizam GRT neles) para ganhar ações de curadoria, que garantem uma porção de todas as taxas de query futuras geradas pelo subgraph. Enquanto qualquer participante independente da rede pode ser um Curador, os programadores de subgraphs tendem a ser os primeiros Curadores dos seus próprios subgraphs, pois querem garantir que o seu subgraph seja indexado. 
+Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Desde 11 de abril de 2024, os programadores de subgraphs podem curar o seu subgraph com, no mínimo, 3.000 GRT. Porém, este número pode ser impactado pela atividade na rede e participação na comunidade. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Os Curadores pagam uma taxa de curadoria de 1% ao curar um subgraph novo. Esta taxa de curadoria é queimada, de modo a reduzir a reserva de GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Programadores -Os programadores constroem e fazem queries em subgraphs para retirar dados da blockchain. Como os subgraphs têm o código aberto, os programadores podem carregar dados da blockchain em seus dApps com queries nos subgraphs existentes. Os programadores pagam por queries feitos em GRT, que é distribuído aos participantes da rede. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Como criar um Subgraph +### Creating a Subgraph -Para indexar dados na blockchain, os programadores podem [criar um subgraph](]/developing/creating-a-subgraph/) — um conjunto de instruções para Indexadores sobre quais dados devem ser servidos aos consumidores. 
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Depois que os programadores tiverem criado e testado o seu subgraph, eles poderão [editá-lo](/subgraphs/developing/publishing/publishing-a-subgraph/) na rede descentralizada do The Graph. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Como fazer queries um Subgraph existente +### Querying an existing Subgraph -Depois que um subgraph for [editado](/subgraphs/developing/publishing/publishing-a-subgraph/) na rede descentralizada do The Graph, qualquer um poderá criar uma chave API, depositar GRT no seu saldo de cobrança, e consultar o subgraph em um query. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Os Subgraphs [recebem queries pelo GraphQL](/subgraphs/querying/introduction/), e as taxas de query são pagas em GRT no [Subgraph Studio](https://thegraph.com/studio/). As taxas de query são distribuídas a participantes da rede com base nas suas contribuições ao protocolo. @@ -72,27 +72,27 @@ Os Subgraphs [recebem queries pelo GraphQL](/subgraphs/querying/introduction/), ## Indexadores (Ganham GRT) -Os Indexadores são o núcleo do The Graph: operam o equipamento e o software independentes que movem a rede descentralizada do The Graph. Eles servem dados a consumidores baseado em instruções de subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Os Indexadores podem ganhar recompensas em GRT de duas maneiras: -1. 
**Taxas de query**: GRT pago, por programadores ou utilizadores, para queries de dados de subgraph. Taxas de query são distribuídas diretamente a Indexadores conforme a função de rebate exponencial (veja o GIP [aqui](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Recompensas de indexação**: a emissão anual de 3% é distribuída aos Indexadores com base no número de subgraphs que indexam. Estas recompensas os incentivam a indexar subgraphs, às vezes antes das taxas de query começarem, de modo a acumular e enviar Provas de Indexação (POIs) que verificam que indexaram dados corretamente. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Cada subgraph recebe uma porção da emissão total do token na rede, com base na quantia do sinal de curadoria do subgraph. Essa quantia é então recompensada aos Indexadores com base no seu stake alocado no subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. Para executar um node de indexação, os Indexadores devem fazer um stake de 100.000 GRT ou mais com a rede. Os mesmos são incentivados a fazer um stake de GRT, proporcional à quantidade de queries que servem. 
-Os Indexadores podem aumentar suas alocações de GRT nos subgraphs ao aceitar delegações de GRT de Delegantes; também podem aceitar até 16 vezes a quantia do seu stake inicial. Se um Indexador se tornar "excessivamente delegado" (por ex., com seu stake inicial multiplicado mais de 16 vezes), ele não poderá usar o GRT adicional dos Delegantes até aumentar o seu próprio stake na rede. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. A quantidade de recompensas recebidas por um Indexador pode variar com base no seu auto-stake, delegação aceita, qualidade de serviço, e muito mais fatores. ## Reserva de Tokens: Queima e Emissão -A reserva inicial de tokens é de 10 bilhões de GRT, com um alvo de emissão de 3% novos ao ano para recompensar os Indexadores por alocar stake em subgraphs. Portanto, a reserva total de tokens GRT aumentará por 3% a cada ano à medida que tokens novos são emitidos para Indexadores, pela sua contribuição à rede. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -O The Graph é projetado com vários mecanismos de queima para compensar pela emissão de novos tokens. Aproximadamente 1% da reserva de GRT é queimado todo ano, através de várias atividades na rede, e este número só aumenta conforme a atividade na rede cresce. 
Estas atividades de queima incluem: uma taxa de delegação de 0,5% sempre que um Delegante delega GRT a um Indexador; uma taxa de curadoria de 1% quando Curadores sinalizam em um subgraph; e 1% de taxas de query por dados de blockchain. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. [Total de GRT Queimado](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/pt/sps/introduction.mdx b/website/src/pages/pt/sps/introduction.mdx index 88ae1cd29f54..c355e80d015a 100644 --- a/website/src/pages/pt/sps/introduction.mdx +++ b/website/src/pages/pt/sps/introduction.mdx @@ -3,28 +3,29 @@ title: Introudução a Subgraphs Movidos pelo Substreams sidebarTitle: Introdução --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Melhore a eficiência e a escalabilidade do seu subgraph com o [Substreams](/substreams/introduction/) para transmitir dados pré-indexados de blockchain. ## Visão geral -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use um pacote Substreams (`.spkg`) como fonte de dados para que o seu subgraph ganhe acesso a um fluxo de dados de blockchain pré-indexados. Isto resulta num tratamento de dados mais eficiente e escalável, especialmente com redes de blockchain grandes ou complexas. ### Especificações Há dois metodos de ativar esta tecnologia: -1. 
**Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Usar [gatilhos](/sps/triggers/)**: isto importa o modelo do Protobuf via um handler de subgraph, permitindo que o utilizador consuma de qualquer módulo do Substreams e mude toda a sua lógica para um subgraph. Este método cria as entidades diretamente no subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **[Mudanças de Entidade](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: Ao inserir mais da lógica no Substreams, pode-se alimentar o rendimento do módulo diretamente no [graph-node](/indexing/tooling/graph-node/). No graph-node, os dados do Substreams podem ser usados para criar as entidades do seu subgraph. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +É possível escolher onde colocar a sua lógica, seja no subgraph ou no Substreams. Porém, considere o que supre as suas necessidades de dados; o Substreams tem um modelo paralelizado, e os gatilhos são consumidos de forma linear no graph-node. 
### Outros Recursos -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +Visite os seguintes links para ver guias passo-a-passo sobre ferramentas de geração de código, para construir o seu primeiro projeto de ponta a ponta rapidamente: - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/pt/sps/sps-faq.mdx b/website/src/pages/pt/sps/sps-faq.mdx index 2991b30adbe3..936b03bc0757 100644 --- a/website/src/pages/pt/sps/sps-faq.mdx +++ b/website/src/pages/pt/sps/sps-faq.mdx @@ -1,31 +1,31 @@ --- title: 'Perguntas Frequentes: Subgraphs Movidos pelo Substreams' -sidebarTitle: FAQ +sidebarTitle: Perguntas Frequentes --- ## O que são Substreams? -Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications. +O Substreams é um mecanismo de processamento excecionalmente poderoso, capaz de consumir ricos fluxos de dados de blockchain. Ele permite refinar e moldar dados de blockchain, para serem digeridos rápida e continuamente por aplicativos de utilizador final. Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. 
It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere. -Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. +O Substreams é programado pela [StreamingFast](https://www.streamingfast.io/). Para mais informações, visite a [Documentação do Substreams](/substreams/introduction/). -## O que são subgraphs movidos por substreams? +## O que são subgraphs movidos por Substreams? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Subgraphs movidos pelo Substreams](/sps/introduction/) combinam o poder do Substreams com as queries de subgraphs. Ao editar um subgraph movido pelo Substreams, os dados produzidos pelas transformações do Substreams podem [produzir mudanças de entidade](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) compatíveis com entidades de subgraph. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. 
+Se já entende da programação de subgraphs, observe que subgraphs movidos a Substreams podem ser consultados do mesmo jeito que se tivessem sido produzidos pela camada de transformação em AssemblyScript; isso com todos os benefícios do Subgraph, o que inclui uma API GraphQL dinâmica e flexível. -## Como subgraphs movidos a Substreams são diferentes de subgraphs? +## Como subgraphs movidos a Substreams diferem de subgraphs? Os subgraphs são compostos de fontes de dados que especificam eventos on-chain, e como transformar estes eventos através de handlers escritos em AssemblyScript. Estes eventos são processados em sequência, com base na ordem em que acontecem na chain. -Por outro lado, subgraphs movidos a substreams têm uma única fonte de dados que referencia um pacote de substreams, processado pelo Graph Node. Substreams têm acesso a mais dados granulares on-chain em comparação a subgraphs convencionais, e também podem se beneficiar de um processamento paralelizado em massa, o que pode diminuir a espera do processamento. +Por outro lado, subgraphs movidos pelo Substreams têm uma única fonte de dados que referencia um pacote de substreams, processado pelo Graph Node. Substreams têm acesso a mais dados granulares on-chain em comparação a subgraphs convencionais, e também podem se beneficiar de um processamento paralelizado em massa, o que pode diminuir muito a espera do processamento. ## Quais os benefícios do uso de subgraphs movidos a Substreams? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. 
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Subgraphs movidos a Substreams combinam todos os benefícios do Substreams com o potencial de query de subgraphs. Eles trazem mais composabilidade e indexações de alto desempenho ao The Graph. Também resultam em novos casos de uso de dados; por exemplo, após construir o seu Subgraph movido a Substreams, é possível reutilizar os seus [módulos de Substreams](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) para usar [coletores de dados](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) diferentes, como PostgreSQL, MongoDB e Kafka. ## Quais os benefícios do Substreams? @@ -35,7 +35,7 @@ Usar o Substreams incorre muitos benefícios, que incluem: - Indexação de alto desempenho: Indexação muito mais rápida através de clusters de larga escala de operações paralelas (como o BigQuery). -- Mergulho em qualquer lugar: Mergulhe seus dados onde quiser: PostgreSQL, MongoDB, Kafka, subgraphs, arquivos planos, Google Sheets. +- Colete dados em qualquer lugar: Mergulhe os seus dados onde quiser: PostgreSQL, MongoDB, Kafka, subgraphs, arquivos planos, Google Sheets. - Programável: Use códigos para personalizar a extração, realizar agregações de tempo de transformação, e modelar o seu resultado para vários sinks. @@ -67,7 +67,7 @@ Há muitos benefícios do uso do Firehose, que incluem: Para aprender como construir módulos do Substreams, leia a [documentação do Substreams](/substreams/introduction/). -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. 
+Para aprender como empacotar subgraphs e implantá-los no The Graph, veja a [documentação sobre subgraphs movidos pelo Substreams](/sps/introduction/). A [ferramenta de Codegen no Substreams mais recente](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) permitirá ao programador inicializar um projeto no Substreams sem a necessidade de código. @@ -75,7 +75,7 @@ A [ferramenta de Codegen no Substreams mais recente](https://streamingfastio.med Módulos de Rust são o equivalente aos mapeadores em AssemblyScript em subgraphs. Eles são compilados em WASM de forma parecida, mas o modelo de programação permite execuções paralelas. Eles definem a categoria de transformações e agregações que você quer aplicar aos dados de blockchain crus. -See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. +Veja a [documentação dos módulos](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) para mais detalhes. ## O que faz o Substreams compostável? @@ -85,11 +85,11 @@ Como exemplo, Fulana pode construir um módulo de preço de DEX, Sicrano pode us ## Como construir e publicar um Subgraph movido a Substreams? -After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +Após [definir](/sps/introduction/) um subgraph movido pelo Substreams, é possível usar a Graph CLI para implantá-lo no [Subgraph Studio](https://thegraph.com/studio/). ## Onde posso encontrar exemplos de Substreams e subgraphs movidos a Substreams? -Você pode visitar [este repo do Github](https://github.com/pinax-network/awesome-substreams) para encontrar exemplos de Substreams e subgraphs movidos a Substreams. 
+Você pode visitar [este repositório do Github](https://github.com/pinax-network/awesome-substreams) para encontrar exemplos de Substreams e subgraphs movidos a Substreams. ## O que Substreams e subgraphs movidos a Substreams significam para a Graph Network? diff --git a/website/src/pages/pt/sps/triggers.mdx b/website/src/pages/pt/sps/triggers.mdx index 548bde4ca531..eafeca1e373f 100644 --- a/website/src/pages/pt/sps/triggers.mdx +++ b/website/src/pages/pt/sps/triggers.mdx @@ -2,17 +2,17 @@ title: Gatilhos do Substreams --- -Use Custom Triggers and enable the full use GraphQL. +Use Gatilhos Personalizados e ative o uso completo da GraphQL. ## Visão geral -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Com Gatilhos Personalizados, é possível enviar dados diretamente ao arquivo de mapeamento do seu subgraph e às suas entidades; sendo esses aspetos parecidos com tabelas e campos. Assim, é possível usar a camada da GraphQL livremente. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +Estes dados podem ser recebidos e processados no handler do seu subgraph ao importar as definições do Protobuf emitidas pelo seu módulo do Substreams. Assim, o tratamento de dados na estrutura do subgraph fica mais simples e eficiente. -### Defining `handleTransactions` +### Como definir `handleTransactions` -O código a seguir demonstra como definir uma função `handleTransactions` num handler de subgraph. Esta função recebe bytes brutos do Substreams como um parâmetro e os decodifica num objeto `Transactions`. Uma nova entidade de subgraph é criada para cada transação. +O código a seguir demonstra como definir uma função `handleTransactions` num handler de subgraph. 
Esta função recebe bytes brutos do Substreams como um parâmetro e os descodifica num objeto `Transactions`. Uma nova entidade de subgraph é criada para cada transação. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -34,14 +34,14 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +Você verá isto no arquivo `mappings.ts`: 1. Os bytes contendo dados do Substreams são descodificados no objeto `Transactions` gerado; este é usado como qualquer outro objeto AssemblyScript 2. Um loop sobre as transações 3. Uma nova entidade de subgraph é criada para cada transação -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +Para ver um exemplo detalhado de um subgraph baseado em gatilhos, [clique aqui](/sps/tutorial/). ### Outros Recursos -To scaffold your first project in the Development Container, check out one of the [How-To Guide](/substreams/developing/dev-container/). +Para estruturar o seu primeiro projeto no Recipiente de Programação, confira [este guia](/substreams/developing/dev-container/). diff --git a/website/src/pages/pt/sps/tutorial.mdx b/website/src/pages/pt/sps/tutorial.mdx index deb7589c4cdd..9c0719e36008 100644 --- a/website/src/pages/pt/sps/tutorial.mdx +++ b/website/src/pages/pt/sps/tutorial.mdx @@ -3,13 +3,13 @@ title: 'Tutorial: Como Montar um Subgraph Movido a Substreams na Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Configure um subgraph, movido pelo Substreams e baseado em gatilhos, para um token da SPL (Biblioteca de Programas da Solana). 
## Como Começar -For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) +Para ver um tutorial em vídeo sobre o assunto, [clique aqui](/sps/tutorial/#video-tutorial) -### Prerequisites +### Pré-requisitos Antes de começar: @@ -52,10 +52,10 @@ dataSources: network: solana-mainnet-beta source: package: - moduleName: map_spl_transfers # Módulo definido em substreams.yaml + moduleName: map_spl_transfers # Módulo definido no substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -63,9 +63,9 @@ dataSources: ### Passo 3: Defina as Entidades em `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Para definir os campos a guardar nas suas entidades de subgraph, atualize o arquivo `schema.graphql`. -Here is an example: +Por exemplo: ```graphql type MyTransfer @entity { @@ -81,9 +81,9 @@ Este schema define uma entidade `MyTransfer` com campos como `id`, `amount`, `so ### Passo 4: Controle Dados do Substreams no `mappings.ts` -With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. +Com os objetos do Protobuf criados, agora você pode tratar os dados descodificados do Substreams no seu arquivo `mappings.ts` no diretório `./src`. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +O exemplo abaixo demonstra como extrair as transferências não derivadas associadas à id de conta do Orca para entidades de subgraph: ```ts import { Protobuf } from 'as-proto/assembly' @@ -122,15 +122,15 @@ Para gerar objetos do Protobuf no AssemblyScript, execute: npm run protogen ``` -Este comando converte as definições do Protobuf em AssemblyScript, permitindo o uso destas no handler do subgraph. +Este comando converte as definições do Protobuf em AssemblyScript, permitindo o seu uso no handler do subgraph. ### Conclusão -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Parabéns! Está montado um subgraph movido a Substreams, baseado em gatilhos, para um token da SPL da Solana. Agora dá para personalizar mais o seu schema, os seus mapeamentos, e os seus módulos de modo que combinem com o seu caso de uso específico. 
-### Video Tutorial +### Tutorial em vídeo - + ### Outros Recursos diff --git a/website/src/pages/pt/subgraphs/_meta-titles.json b/website/src/pages/pt/subgraphs/_meta-titles.json index 3fd405eed29a..a72543795a1d 100644 --- a/website/src/pages/pt/subgraphs/_meta-titles.json +++ b/website/src/pages/pt/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", + "querying": "Queries", + "developing": "Programação", "guides": "How-to Guides", - "best-practices": "Best Practices" + "best-practices": "Boas práticas" } diff --git a/website/src/pages/pt/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/pt/subgraphs/best-practices/avoid-eth-calls.mdx index f8f0fc8dedab..4217065c4fe7 100644 --- a/website/src/pages/pt/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Melhores Práticas de Subgraph Parte 4 - Como Melhorar a Velocidade da Indexação ao Evitar eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` são chamadas feitas de um subgraph a um node no Ethereum. Estas chamadas levam um bom tempo para retornar dados, o que retarda a indexação. Se possível, construa contratos inteligentes para emitir todos os dados necessários, para que não seja necessário usar `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Por que Evitar `eth_calls` É uma Boa Prática -Subgraphs são otimizados para indexar dados de eventos emitidos de contratos inteligentes. 
Um subgraph também pode indexar os dados que vêm de uma `eth_call`, mas isto pode atrasar muito a indexação de um subgraph, já que `eth_calls` exigem a realização de chamadas externas para contratos inteligentes. A capacidade de respostas destas chamadas depende não apenas do subgraph, mas também da conectividade e das respostas do node do Ethereum a ser consultado. Ao minimizar ou eliminar `eth_calls` nos nossos subgraphs, podemos melhorar muito a nossa velocidade de indexação. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### Como É Um `eth_call`? -`eth_calls` tendem a ser necessárias quando os dados requeridos por um subgraph não estão disponíveis via eventos emitidos. Por exemplo, vamos supor que um subgraph precisa identificar se tokens ERC20 são parte de um pool específico, mas o contrato só emite um evento `Transfer` básico e não emite um evento que contém os dados que precisamos: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -Isto é funcional, mas não ideal, já que ele atrasa a indexação do nosso subgraph. 
+This is functional, however it is not ideal as it slows down our Subgraph’s indexing. ## Como Eliminar `eth_calls` @@ -54,7 +54,7 @@ Idealmente, o contrato inteligente deve ser atualizado para emitir todos os dado event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -Com esta atualização, o subgraph pode indexar directamente os dados exigidos sem chamadas externas: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,22 +96,22 @@ A porção destacada em amarelo é a declaração de chamada. A parte antes dos O próprio handler acessa o resultado desta `eth_call` exatamente como na secção anterior ao atrelar ao contrato e fazer a chamada. o graph-node coloca em cache os resultados de `eth_calls` na memória e a chamada do handler terirará o resultado disto no cache de memória em vez de fazer uma chamada de RPC real. -Nota: `eth_calls` declaradas só podem ser feitas em subgraphs com specVersion maior que 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusão -O desempenho da indexação pode melhorar muito ao minimizar ou eliminar `eth_calls` nos nossos subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3.
[Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/pt/subgraphs/best-practices/derivedfrom.mdx index dedf0bf2ffe2..6640242a3ddd 100644 --- a/website/src/pages/pt/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Boas Práticas de Subgraph 2 - Melhorar a Indexação e a Capacidade de Resposta de Queries com @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -O desempenho de um subgraph pode ser muito atrasado por arranjos no seu schema, já que esses podem crescer além dos milhares de entradas. Se possível, a diretiva `@derivedFrom` deve ser usada ao usar arranjos, já que ela impede a formação de grandes arranjos, simplifica handlers e reduz o tamanho de entidades individuais, o que melhora muito a velocidade da indexação e o desempenho dos queries. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. 
If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## Como Usar a Diretiva `@derivedFrom` @@ -15,7 +15,7 @@ Você só precisa adicionar uma diretiva `@derivedFrom` após o seu arranjo no s comments: [Comment!]! @derivedFrom(field: "post") ``` -o `@derivedFrom` cria relações eficientes de um-para-muitos, o que permite que uma entidade se associe dinamicamente com muitas entidades relacionadas com base em um campo na entidade relacionada. Esta abordagem faz com que ambos os lados do relacionamento não precisem armazenar dados duplicados e aumenta a eficácia do subgraph. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Exemplo de Caso de Uso para `@derivedFrom` @@ -60,30 +60,30 @@ type Comment @entity { Ao adicionar a diretiva `@derivedFrom`, este schema só armazenará os "Comentários" no lado "Comments" do relacionamento, e não no lado "Post". Os arranjos são armazenados em fileiras individuais, o que os faz crescer significativamente. Se o seu crescimento não for contido, isto pode permitir que o tamanho fique excessivamente grande. -Isto não só aumenta a eficiência do nosso subgraph, mas também desbloqueia três características: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. Podemos fazer um query sobre o `Post` e ver todos os seus comentários. 2. Podemos fazer uma pesquisa reversa e um query sobre qualquer `Comment`, para ver de qual post ele vem. -3. 
Podemos usar [Carregadores de Campos Derivados](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) para ativar o acesso e manipulação de dados diretamente de relacionamentos virtuais nos nossos mapeamentos de subgraph. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusão -Usar a diretiva `@derivedFrom` nos subgraphs lida eficientemente com arranjos que crescem dinamicamente, o que melhora o desempenho da indexação e o retiro de dados. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. Para aprender mais estratégias detalhadas sobre evitar arranjos grandes, leia este blog por Kevin Jones: [Melhores Práticas no Desenvolvimento de Subgraphs: Como Evitar Grandes Arranjos](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/pt/subgraphs/best-practices/grafting-hotfix.mdx index d9f463501e94..1bb1297526e7 100644 --- a/website/src/pages/pt/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: 'Melhores Práticas de Subgraph #6 - Use Enxertos para Implantar Hotfixes Mais Rápido' -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -O enxerto é uma função poderosa na programação de subgraphs, que permite a construção e implantação de novos subgraphs enquanto recicla os dados indexados dos já existentes. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Visão geral -Esta função permite a implantação rápida de hotfixes para problemas críticos, eliminando a necessidade de indexar o subgraph inteiro do zero novamente. Ao preservar dados históricos, enxertar diminui o tempo de espera e garante a continuidade em serviços de dados. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefícios de Enxertos para Hotfixes 1. 
**Lançamento Rápido** - - **Espera Minimizada**: Quando um subgraph encontra um erro crítico e para de indexar, um enxerto permite que seja lançada uma solução imediata, sem esperar uma nova indexação. - - **Recuperação Imediata**: O novo subgraph continua do último bloco indexado, garantindo o funcionamento ininterrupto dos serviços de dados. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Preservação de Dados** - - **Reaproveitamento de Dados Históricos**: O enxerto copia os dados existentes do subgraph de origem; assim, não há como perder dados históricos valiosos. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistência**: Mantém a continuidade de dados, que é crucial para aplicativos que dependem de dados históricos consistentes. 3. **Eficiência** @@ -31,38 +31,38 @@ Esta função permite a implantação rápida de hotfixes para problemas crític 1. \*Implantação Inicial sem Enxerto\*\* - - **Começar do Zero**: Sempre lance o seu subgraph inicial sem enxertos para que fique estável e funcione como esperado. - - **Fazer Testes Minuciosos:** Valide o desempenho do subgraph para minimizar a necessidade de hotfixes futuros. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementação do Hotfix com Enxerto** - **Identificar o Problema**: Quando ocorrer um erro crítico, determine o número de bloco do último evento indexado com êxito. - - **Criar um Novo Subgraph**: Programe um novo subgraph que inclui o hotfix. 
- - **Configure o Enxerto**: Use o enxerto para copiar dados até o número de bloco identificado do subgraph defeituoso. - - **Lance Rápido**: Edite o subgraph enxertado para reabrir o serviço o mais rápido possível. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Depois do Hotfix** - - **Monitore o Desempenho**: Tenha certeza que o subgraph enxertado está a indexar corretamente, e que o hotfix pode resolver o problema. - - **Reedite Sem Enxertos**: Agora que está estável, lance uma nova versão do subgraph sem enxertos para fins de manutenção a longo prazo. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Nota: Não é recomendado depender de enxertos indefinidamente, pois isto pode complicar a manutenção e implantação de futuras atualizações. - - **Atualize as Referências**: Redirecione quaisquer serviços ou aplicativos para que usem o novo subgraph, sem enxertos. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Considerações Importantes** - **Selecione Blocos Corretamente**: Escolha o número de bloco do enxerto com cuidado, para evitar perdas de dados. - **Dica**: Use o número de bloco do último evento corretamente processado. - - **Use a ID de Implantação**: Referencie a ID de Implantação do subgraph de origem, não a ID do Subgraph. - - **Nota**: A ID de Implantação é a identificadora única para uma implantação específica de subgraph. - - **Declaração de Funções**: Não se esqueça de declarar enxertos na lista de funções, no manifest do seu subgraph. 
+ - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Exemplo: Como Implantar um Subgraph com Enxertos -Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou de indexar devido a um erro crítico. Veja como usar um enxerto para implementar um hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Manifest Falho de Subgraph (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d 2. 
**Novo Manifest Enxertado de Subgraph (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explicação:** -- **Atualização de Fonte de Dados**: O novo subgraph aponta para 0xNewContractAddress, que pode ser uma versão consertada do contrato inteligente. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Bloco Inicial**: Configure para um bloco após o último indexado com êxito, para evitar processar o erro novamente. - **Configuração de Enxerto**: - - **base**: ID de Implantação do subgraph falho. + - **base**: Deployment ID of the failed Subgraph. - **block**: Número de blocos onde o enxerto deve começar. 3. **Etapas de Implantação** @@ -135,10 +135,10 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d - **Ajuste o Manifest**: Conforme detalhado acima, atualize o `subgraph.yaml` com configurações de enxerto. - **Lance o Subgraph**: - Autentique com a Graph CLI. - - Lance o novo subgraph com `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Após a Implantação** - - **Verifique a Indexação**: Verifique se o subgraph está a indexar corretamente a partir do ponto de enxerto. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. 
- **Monitore os Dados**: Verifique se há novos dados sendo capturados, e se o hotfix funciona. - **Planeie Para uma Reedição**: Prepare a implantação de uma versão não enxertada, para mais estabilidade a longo prazo. @@ -146,9 +146,9 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d O enxerto é uma ferramenta poderosa para implantar hotfixes rapidamente, mas deve ser evitado em algumas situações específicas — para manter a integridade dos dados e garantir o melhor desempenho. -- **Mudanças Incompatíveis de Schema**: Se o seu hotfix exigir a alteração do tipo de campos existentes ou a remoção de campos do seu esquema, não é adequado fazer um enxerto. O enxerto espera que o esquema do novo subgraph seja compatível com o schema do subgráfico base. Alterações incompatíveis podem levar a inconsistências e erros de dados, porque os dados existentes não se alinham com o novo schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. - **Mudanças Significantes na Lógica de Mapeamento**: Quando o hotfix envolve modificações substanciais na sua lógica de mapeamento — como alterar o processamento de eventos ​de funções do handler — o enxerto pode não funcionar corretamente. A nova lógica pode não ser compatível com os dados processados ​​sob a lógica antiga, levando a dados incorretos ou indexação com falha. -- **Implantações na The Graph Network:** Enxertos não são recomendados para subgraphs destinados à rede descentralizada (mainnet) do The Graph. Um enxerto pode complicar a indexação e pode não ser totalmente apoiado por todos os Indexers, o que pode causar comportamento inesperado ou aumento de custos. 
Para implantações de mainnet, é mais seguro recomeçar a indexação do subgraph do zero, para garantir total compatibilidade e confiabilidade. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### **Controle de Riscos** @@ -157,31 +157,31 @@ O enxerto é uma ferramenta poderosa para implantar hotfixes rapidamente, mas de ## Conclusão -O enxerto é uma estratégia eficaz para implantar hotfixes no desenvolvimento de subgraphs, e ainda permite: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Se recuperar rapidamente** de erros críticos sem recomeçar a indexação. - **Preservar dados históricos**, mantendo a continuidade tanto para aplicativos quanto para utilizadores. - **Garantir a disponibilidade do serviço** ao minimizar o tempo de espera em períodos importantes de manutenção. -No entanto, é importante usar enxertos com cuidado e seguir as melhores práticas para controlar riscos. Após estabilizar o seu subgraph com o hotfix, planeie a implantação de uma versão não enxertada para garantir a estabilidade a longo prazo. +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Outros Recursos - **[Documentação de Enxertos](/subgraphs/cookbook/grafting/)**: Substitua um Contrato e Mantenha o Seu Histórico com Enxertos - **[Como Entender IDs de Implantação](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Aprenda a diferença entre ID de Implantação e ID de Subgraph. 
-Ao incorporar enxertos ao seu fluxo de programação de subgraphs, é possível melhorar a sua capacidade de responder a problemas, garantindo que os seus serviços de dados permaneçam robustos e confiáveis. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/pt/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 4124d0504cde..93d54d6a07e9 100644 --- a/website/src/pages/pt/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Boas Práticas de Subgraph 3 - Como Melhorar o Desempenho da Indexação e de Queries com Entidades Imutáveis e Bytes como IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ Enquanto outros tipos de IDs são possíveis, como String e Int8, recomendamos u ### Razões para Não Usar Bytes como IDs 1. Se IDs de entidade devem ser legíveis para humanos, como IDs numéricas automaticamente incrementadas ou strings legíveis, então Bytes como IDs não devem ser usados. -2. Em caso de integração dos dados de um subgraph com outro modelo de dados que não usa Bytes como IDs, então Bytes como IDs não devem ser usados. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Melhorias no desempenho de indexação e queries não são desejáveis. ### Concatenação com Bytes como IDs -É comum em vários subgraphs usar a concatenação de strings para combinar duas propriedades de um evento em uma ID única, como o uso de `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Mas como isto retorna um string, isto impede muito o desempenho da indexação e queries de subgraphs. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. 
However, as this returns a string, it significantly impedes Subgraph indexing and querying performance. Em vez disto, devemos usar o método `concatI32()` para concatenar propriedades de evento. Esta estratégia resulta numa ID `Bytes` que tem um desempenho muito melhor. @@ -172,20 +172,20 @@ Resposta de query: ## Conclusão -É comprovado que usar Entidades Imutáveis e Bytes como IDs aumenta muito a eficiência de subgraphs. Especificamente, segundo testes, houve um aumento de até 28% no desempenho de queries e uma aceleração de até 48% em velocidades de indexação. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Leia mais sobre o uso de Entidades Imutáveis e Bytes como IDs nesta publicação por David Lutterkort, Engenheiro de Software na Edge & Node: [Duas Melhorias Simples no Desempenho de Subgraphs](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4.
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/pruning.mdx b/website/src/pages/pt/subgraphs/best-practices/pruning.mdx index eb6afc85791f..4fb9bc557b22 100644 --- a/website/src/pages/pt/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Boas Práticas de Subgraph 1 - Acelerar Queries com Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -O [pruning](/developing/creating-a-subgraph/#prune) retira entidades de arquivo do banco de dados de um subgraph até um bloco especificado; e retirar entidades não usadas do banco de dados de um subgraph tende a melhorar muito o desempenho de queries de um subgraph. Usar o `indexerHints` é uma maneira fácil de fazer o pruning de um subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## Como Fazer Pruning de um Subgraph com `indexerHints` @@ -13,14 +13,14 @@ Adicione uma secção chamada `indexerHints` ao manifest. O `indexerHints` tem três opções de `prune`: -- `prune: auto`: Guarda o histórico mínimo necessário, conforme configurado pelo Indexador, para otimizar o desempenho dos queries. 
Esta é a configuração geralmente recomendada e é padrão para todos os subgraphs criados pela `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Determina um limite personalizado no número de blocos históricos a serem retidos. - `prune: never`: Não será feito pruning de dados históricos; guarda o histórico completo, e é o padrão caso não haja uma secção `indexerHints`. `prune: never` deve ser selecionado caso queira [Queries de Viagem no Tempo](/subgraphs/querying/graphql-api/#time-travel-queries). -Podemos adicionar `indexerHints` aos nossos subgraphs ao atualizar o nosso `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,18 +39,18 @@ dataSources: ## Conclusão -O pruning com `indexerHints` é uma boa prática para o desenvolvimento de subgraphs que oferece melhorias significativas no desempenho de queries. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/timeseries.mdx b/website/src/pages/pt/subgraphs/best-practices/timeseries.mdx index b0228580d20f..b0a9925207eb 100644 --- a/website/src/pages/pt/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: 'Melhores Práticas para um Subgraph #5 — Simplifique e Otimize com Séries Temporais e Agregações' -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Séries de Tempo e Agregações --- ## TLDR -Tirar vantagem de séries temporais e agregações em subgraphs pode melhorar bastante a velocidade da indexação e o desempenho dos queries. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Visão geral @@ -36,6 +36,10 @@ Séries temporais e agregações reduzem a sobrecarga do processamento de dados ## Como Implementar Séries Temporais e Agregações +### Pré-requisitos + +You need `spec version 1.1.0` for this feature. + ### Como Definir Entidades de Séries Temporais Uma entidade de série temporal representa pontos de dados brutos coletados gradativamente. Ela é definida com a anotação `@entity(timeseries: true)`. 
Requisitos principais: @@ -51,7 +55,7 @@ Exemplo: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Exemplo: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -Neste exemplo, o campo `Stats` ("Estatísticas") agrega o campo de preços de Data de hora em hora, diariamente, e computa a soma. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Queries de Dados Agregados @@ -172,24 +176,24 @@ Os operadores e funções suportados incluem aritmética básica (+, -, \_, /), ### Conclusão -Implementar séries temporais e agregações em subgraphs é recomendado para projetos que lidam com dados baseados em tempo. Esta abordagem: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Melhora o Desempenho: Acelera a indexação e os queries ao reduzir a carga de processamento de dados. - Simplifica a Produção: Elimina a necessidade de lógica de agregação manual em mapeamentos. - Escala Eficientemente: Manuseia grandes quantias de dados sem comprometer a velocidade ou a capacidade de resposta. -Ao adotar esse padrão, os programadores podem criar subgraphs mais eficientes e escaláveis, fornecendo acesso mais rápido e confiável de dados aos utilizadores finais. Para saber mais sobre como implementar séries temporais e agregações, consulte o [Leia-me sobre Séries Temporais e Agregações](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) e experimente esse recurso nos seus subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/billing.mdx b/website/src/pages/pt/subgraphs/billing.mdx index f73ae48ff725..a028354d7e66 100644 --- a/website/src/pages/pt/subgraphs/billing.mdx +++ b/website/src/pages/pt/subgraphs/billing.mdx @@ -10,7 +10,9 @@ Há dois planos disponíveis para queries de subgraphs na Graph Network. 
- **Plano de Crescimento**: Inclui tudo no Plano Grátis, com todos os queries após a cota de 100.000 mensais exigindo pagamentos com cartão de crédito ou GRT. Este plano é flexível o suficiente para cobrir equipes que estabeleceram dapps numa variedade de casos de uso. - +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + + ## Pagamentos de Queries com cartão de crédito diff --git a/website/src/pages/pt/subgraphs/developing/_meta-titles.json b/website/src/pages/pt/subgraphs/developing/_meta-titles.json index 01a91b09ed77..48b57c9aae14 100644 --- a/website/src/pages/pt/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/pt/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { - "creating": "Creating", - "deploying": "Deploying", - "publishing": "Publishing", - "managing": "Managing" + "creating": "Criação", + "deploying": "Implante", + "publishing": "Edição", + "managing": "Gestão" } diff --git a/website/src/pages/pt/subgraphs/developing/creating/advanced.mdx b/website/src/pages/pt/subgraphs/developing/creating/advanced.mdx index 5dfeb1034a5f..51adc5cea9a6 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/advanced.mdx @@ -1,23 +1,23 @@ --- -title: Advanced Subgraph Features +title: Funções Avançadas de Subgraph --- ## Visão geral -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: -| Feature | Name | -| ---------------------------------------------------- | ---------------- | -| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | -| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | -| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | +| Função | Nome | +| ------------------------------------------------------ | ---------------- | +| [Erros não fatais](#non-fatal-errors) | `nonFatalErrors` | +| [Busca em full-text](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Enxertos](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,17 +25,17 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Séries de Tempo e Agregações -Prerequisites: +Pré-requisitos: -- Subgraph specVersion must be ≥1.1.0. +- O specVersion do subgraph deve ser 1.1.0 ou superior.
-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Exemplo de Schema @@ -53,33 +53,33 @@ type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { } ``` -### How to Define Timeseries and Aggregations +### Como Definir Séries Temporais e Agregações -Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must: +Entidades de séries temporais são definidas com `@entity(timeseries: true)` no schema da GraphQL. Toda entidade deste tipo deve: -- have a unique ID of the int8 type -- have a timestamp of the Timestamp type -- include data that will be used for calculation by aggregation entities. +- ter uma ID exclusiva do tipo int8 +- ter um registro de data e hora do tipo Timestamp +- incluir dados a serem usados para cálculo pelas entidades de agregação. -These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities. +Estas entidades de Série Temporal podem ser guardadas em handlers regulares de gatilho, e atuam como “dados brutos” para as entidades de agregação. -Aggregation entities are defined with `@aggregation` in the GraphQL schema. 
Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). +As entidades de agregação são definidas com `@aggregation` no schema da GraphQL. Toda entidade deste tipo define a fonte de onde retirará dados (que deve ser uma entidade de Série Temporal), determina os intervalos (por ex., hora, dia) e especifica a função de agregação que usará (por ex., soma, contagem, min, max, primeiro, último). -Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. +As entidades de agregação são calculadas automaticamente com base na fonte especificada no final do intervalo necessário. #### Intervalos de Agregação Disponíveis -- `hour`: sets the timeseries period every hour, on the hour. -- `day`: sets the timeseries period every day, starting and ending at 00:00. +- `hour`: configura o período de série de tempo para cada hora, em ponto. +- `day`: configura o período de série de tempo para cada dia, a começar e terminar à meia-noite. #### Funções de Agregação Disponíveis -- `sum`: Total of all values. -- `count`: Number of values. -- `min`: Minimum value. -- `max`: Maximum value. -- `first`: First value in the period. -- `last`: Last value in the period. +- `sum`: Total de todos os valores. +- `count`: Número de valores. +- `min`: Valor mínimo. +- `max`: Valor máximo. +- `first`: Primeiro valor no período. +- `last`: Último valor no período. #### Exemplo de Query de Agregações @@ -93,25 +93,25 @@ Aggregation entities are automatically calculated on the basis of the specified } ``` -[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. +[Leia mais](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) sobre Séries Temporais e Agregações. 
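To make the semantics of the aggregation functions above concrete, here is a plain-TypeScript sketch of what graph-node effectively computes for the `Data`/`Stats` example at the close of each hourly interval. This is an illustration only, not graph-ts mapping code: the `DataPoint` shape and plain-number timestamps are assumptions for the sketch.

```typescript
// Illustrative only: graph-node performs this bucketing and folding
// internally; subgraph authors never write this code themselves.
interface DataPoint {
  timestamp: number // graph-node uses microsecond Timestamps; plain seconds here
  amount: number
}

type HourStats = {
  sum: number
  count: number
  min: number
  max: number
  first: number
  last: number
}

// Bucket raw timeseries points into hourly intervals (on the hour), then
// fold each bucket with the declared aggregation functions.
function aggregateHour(points: DataPoint[]): Map<number, HourStats> {
  const buckets = new Map<number, DataPoint[]>()
  for (const p of points) {
    const hour = Math.floor(p.timestamp / 3600)
    const bucket = buckets.get(hour) ?? []
    bucket.push(p)
    buckets.set(hour, bucket)
  }
  const out = new Map<number, HourStats>()
  for (const [hour, pts] of buckets) {
    pts.sort((a, b) => a.timestamp - b.timestamp) // first/last are time-ordered
    const amounts = pts.map((p) => p.amount)
    out.set(hour, {
      sum: amounts.reduce((a, b) => a + b, 0),
      count: amounts.length,
      min: Math.min(...amounts),
      max: Math.max(...amounts),
      first: amounts[0],
      last: amounts[amounts.length - 1],
    })
  }
  return out
}
```

Because the fold runs once per interval at its close, querying `Stats` never touches the raw `Data` rows, which is where the indexing and query speedups come from.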
## Erros não-fatais -Erros de indexação em subgraphs já sincronizados, por si próprios, farão que o subgraph falhe e pare de sincronizar. Os subgraphs podem, de outra forma, ser configurados a continuar a sincronizar na presença de erros, ao ignorar as mudanças feitas pelo handler que provocaram o erro. Isto dá tempo aos autores de subgraphs para corrigir seus subgraphs enquanto queries continuam a ser servidos perante o bloco mais recente, porém os resultados podem ser inconsistentes devido ao bug que causou o erro. Note que alguns erros ainda são sempre fatais. Para ser não-fatais, os erros devem ser confirmados como determinísticos. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Permitir erros não fatais exige a configuração da seguinte feature flag no manifest do subgraph: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. 
It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## Fontes de Dados de Arquivos em IPFS/Arweave -Fontes de dados de arquivos são uma nova funcionalidade de subgraph para acessar dados off-chain de forma robusta e extensível. As fontes de dados de arquivos apoiam o retiro de arquivos do IPFS e do Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > Isto também abre as portas para indexar dados off-chain de forma determinística, além de potencialmente introduzir dados arbitrários com fonte em HTTP. @@ -153,15 +153,15 @@ Fontes de dados de arquivos são uma nova funcionalidade de subgraph para acessa Em vez de buscar arquivos "em fila" durante a execução do handler, isto introduz modelos que podem ser colocados como novas fontes de dados para um identificador de arquivos. Estas novas fontes de dados pegam os arquivos e tentam novamente caso não obtenham êxito; quando o arquivo é encontrado, executam um handler dedicado. 
-This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources. +Isso é semelhante aos [modelos existentes de fonte de dados](/developing/creating-a-subgraph/#data-source-templates), usados para criar dinamicamente novas fontes de dados baseados em chain. -> This replaces the existing `ipfs.cat` API +> Isto substitui a API `ipfs.cat` existente ### Guia de atualização -#### Update `graph-ts` and `graph-cli` +#### Atualização de `graph-ts` e `graph-cli` -File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1 +O recurso de fontes de dados de arquivos exige o graph-ts na versão acima de 0.29.0 e o graph-cli acima de 0.33.1 #### Adicionar um novo tipo de entidade que será atualizado quando os arquivos forem encontrados @@ -210,9 +210,9 @@ type TokenMetadata @entity { Se o relacionamento for perfeitamente proporcional entre a entidade parente e a entidade de fontes de dados de arquivos resultante, é mais simples ligar a entidade parente a uma entidade de arquivos resultante, com a CID IPFS como o assunto de busca. Se tiver dificuldades em modelar suas novas entidades baseadas em arquivos, pergunte no Discord! -> You can use [nested filters](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities. +> É possível usar [filtros aninhados](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) para filtrar entidades parentes, com base nestas entidades aninhadas. -#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave` +#### Adicione um novo modelo de fonte de dados com `kind: file/ipfs` ou `kind: file/arweave` Esta é a fonte de dados que será gerada quando um arquivo de interesse for identificado. 
@@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -232,15 +232,15 @@ templates: file: ./abis/Token.json ``` -> Currently `abis` are required, though it is not possible to call contracts from within file data sources +> Atualmente é obrigatório usar `abis`, mas não é possível chamar contratos de dentro de fontes de dados de arquivos -The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details. +A fonte de dados de arquivos deve mencionar, especificamente, todos os tipos de entidades com os quais ela interagirá sob `entities`. Veja as [limitações](#limitations) para mais detalhes. #### Criar um novo handler para processar arquivos -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/subgraphs/developing/creating/graph-ts/api/#json-api)). +Este handler deve aceitar um parâmetro `Bytes`, que consistirá dos conteúdos do arquivo; quando encontrado, este poderá ser acessado. Isto costuma ser um arquivo JSON, que pode ser processado com helpers `graph-ts` ([documentação](/subgraphs/developing/creating/graph-ts/api/#json-api)). 
-The CID of the file as a readable string can be accessed via the `dataSource` as follows: +A CID do arquivo como um string legível pode ser acessada através do `dataSource` a seguir: ```typescript const cid = dataSource.stringParam() @@ -277,12 +277,12 @@ export function handleMetadata(content: Bytes): void { Agora pode criar fontes de dados de arquivos durante a execução de handlers baseados em chain: -- Import the template from the auto-generated `templates` -- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave +- Importe o modelo do `templates` gerado automaticamente +- chame o `TemplateName.create(cid: string)` de dentro de um mapeamento, onde o cid é um identificador de conteúdo válido para IPFS ou Arweave -For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifiers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). +Para o IPFS, o Graph Node apoia [identificadores de conteúdo v0 e v1](https://docs.ipfs.tech/concepts/content-addressing/) e identificadores com diretórios (por ex. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). -For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing). 
+Para o Arweave, a partir da versão 0.33.0, o Graph Node pode buscar arquivos armazenados no Arweave com base no seu [ID de transação](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) de um gateway Arweave ([exemplo de arquivo](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). O Arweave apoia transações enviadas via Irys (antigo Bundlr), e o Graph Node também pode solicitar arquivos com base em [manifests do Irys](https://docs.irys.xyz/overview/gateways#indexing). Exemplo: @@ -290,7 +290,7 @@ Exemplo: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//Este exemplo de código é para um subgraph do Crypto Coven. O hash ipfs acima é um diretório com metadados de tokens para todos os NFTs do Crypto Coven. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //Isto cria um caminho aos metadados para um único NFT do Crypto Coven. Ele concatena o diretório com "/" + nome do arquivo + ".json" + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" token.ipfsURI = tokenIpfsHash @@ -315,25 +315,25 @@ export function handleTransfer(event: TransferEvent): void { Isto criará uma fonte de dados de arquivos, que avaliará o endpoint de IPFS ou Arweave configurado do Graph Node, e tentará novamente caso não achá-lo. Com o arquivo localizado, o handler da fonte de dados de arquivos será executado. 
-This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. +Este exemplo usa a CID como a consulta entre a entidade parente `Token` e a entidade `TokenMetadata` resultante. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Parabéns, você está a usar fontes de dados de arquivos! -#### Como lançar os seus Subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitações -Handlers e entidades de fontes de dados de arquivos são isolados de outras entidades de subgraph, o que garante que sejam determinísticos quando executados e que não haja contaminação de fontes de dados baseadas em chain. Especificamente: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entidades criadas por Fontes de Dados de Arquivos são imutáveis, e não podem ser atualizadas - Handlers de Fontes de Dados de Arquivos não podem acessar entidades de outras fontes de dados de arquivos - Entidades associadas com Fontes de Dados de Arquivos não podem ser acessadas por handlers baseados em chain -> Enquanto esta limitação pode não ser problemática para a maioria dos casos de uso, ela pode deixar alguns mais complexos. Se houver qualquer problema neste processo, por favor dê um alô via Discord! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! 
Além disto, não é possível criar fontes de dados de uma fonte de dado de arquivos, seja uma on-chain ou outra fonte de dados de arquivos. Esta restrição poderá ser retirada no futuro. @@ -341,41 +341,41 @@ Além disto, não é possível criar fontes de dados de uma fonte de dado de arq Caso ligue metadados de NFTs a tokens correspondentes, use o hash IPFS destes para referenciar uma entidade de Metadados da entidade do Token. Salve a entidade de Metadados a usar o hash IPFS como ID. -You can use [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. +Você pode usar o [contexto de DataSource](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) ao criar Fontes de Dados de Arquivo (FDS), para passar informações extras que estarão disponíveis para o handler de FDS. -If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity. +Caso tenha entidades a ser atualizadas várias vezes, crie entidades únicas baseadas em arquivos utilizando o hash IPFS e o ID da entidade, e as referencie com um campo derivado na entidade baseada na chain. > Estamos a melhorar a recomendação acima, para que os queries retornem apenas a versão "mais recente" #### Problemas conhecidos -File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI. +Fontes de dados de arquivo atualmente requerem ABIs, mesmo que estas não sejam usadas ([problema](https://github.com/graphprotocol/graph-cli/issues/961)). Por enquanto, vale a pena adicionar qualquer ABI como alternativa. 
-Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file. +Handlers para Fontes de Dados de Arquivos não podem estar em arquivos que importam ligações de contrato `eth_call`, o que causa falhas com "unknown import: `ethereum::ethereum.call` has not been defined" ([problema no GitHub](https://github.com/graphprotocol/graph-node/issues/4309)). A solução atual é criar handlers de fontes de dados de arquivos num arquivo dedicado. #### Exemplos -[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) +[Migração de Subgraph do Crypto Coven](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) #### Referências -[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) +[Fontes de Dados de Arquivos GIP](https://forum.thegraph.com/t/gip-file-data-sources/2721) ## Filtros de Argumentos Indexados / Filtros de Tópicos -> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` +> **Obrigatório**: [SpecVersion](#specversion-releases) >= `1.2.0` -Filtros de tópico, também conhecidos como filtros de argumentos indexados, permitem que os utilizadores filtrem eventos de blockchain com alta precisão, em base nos valores dos seus argumentos indexados. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- Estes filtros ajudam a isolar eventos específicos de interesse do fluxo vasto de eventos na blockchain, o que permite que subgraphs operem com mais eficácia ao focarem apenas em dados relevantes. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- Isto serve para criar subgraphs pessoais que rastreiam endereços específicos e as suas interações com vários contratos inteligentes na blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### Como Filtros de Tópicos Funcionam -Quando um contrato inteligente emite um evento, quaisquer argumentos que forem marcados como indexados podem ser usados como filtros no manifest de um subgraph. Isto permite que o subgraph preste atenção seletiva para eventos que correspondam a estes argumentos indexados. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. -- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. +- O primeiro argumento indexado do evento corresponde ao `topic1`, o segundo ao `topic2`, e por aí vai até o `topic3`, já que a Máquina Virtual de Ethereum (EVM) só permite até três argumentos indexados por evento. ```solidity // SPDX-License-Identifier: MIT @@ -395,13 +395,13 @@ contract Token { Neste exemplo: -- The `Transfer` event is used to log transactions of tokens between addresses. -- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. -- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. +- O evento `Transfer` é usado para gravar transações de tokens entre endereços. 
+- Os parâmetros `from` e `to` são indexados, o que permite que ouvidores de eventos filtrem e monitorizem transferências que envolvem endereços específicos. +- A função `transfer` é uma representação simples de uma ação de transferência de token, e emite o evento Transfer sempre que é chamada. #### Configuração em Subgraphs -Filtros de tópicos são definidos diretamente na configuração de handlers de eventos no manifest do subgraph. Veja como eles são configurados: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -414,7 +414,7 @@ eventHandlers: Neste cenário: -- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. +- `topic1` corresponde ao primeiro argumento indexado do evento, `topic2` ao segundo, e `topic3` ao terceiro. - Cada tópico pode ter um ou mais valores, e um evento só é processado se corresponder a um dos valores em cada tópico especificado. #### Lógica de Filtro @@ -434,9 +434,9 @@ eventHandlers: Nesta configuração: -- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. -- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- `topic1` é configurado para filtrar eventos `Transfer` onde `0xAddressA` é o remetente. +- `topic2` é configurado para filtrar eventos `Transfer` onde `0xAddressB` é o destinatário. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Exemplo 2: Como Rastrear Transações em Qualquer Direção Entre Dois ou Mais Endereços @@ -450,31 +450,31 @@ eventHandlers: Nesta configuração: -- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. 
-- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- O subgraph indexará transações que ocorrerem em qualquer direção entre vários endereços, o que permite a monitoria compreensiva de interações que envolverem todos os endereços. +- O `topic1` é configurado para filtrar eventos `Transfer` onde `0xAddressA`, `0xAddressB`, `0xAddressC` é o remetente. +- `topic2` é configurado para filtrar eventos `Transfer` onde `0xAddressB` e `0xAddressC` é o destinatário. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. -## Declared eth_call +## eth_call declarada -> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. +> Nota: Esta é uma função experimental que atualmente não está disponível numa versão estável do Graph Node, e só pode ser usada no Subgraph Studio ou no seu node auto-hospedado. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. Esta ferramenta faz o seguinte: -- Aumenta muito o desempenho do retiro de dados da blockchain Ethereum ao reduzir o tempo total para múltiplas chamadas e otimizar a eficácia geral do subgraph. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Permite retiros de dados mais rápidos, o que resulta em respostas de query aceleradas e uma experiência de utilizador melhorada. 
- Reduz tempos de espera para aplicativos que precisam agregar dados de várias chamadas no Ethereum, o que aumenta a eficácia do processo de retiro de dados. -### Key Concepts +### Conceitos Importantes -- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. +- `eth_calls` declarativas: Chamadas no Ethereum definidas para serem executadas em paralelo, e não em sequência. - Execução Paralela: Ao invés de esperar o término de uma chamada para começar a próxima, várias chamadas podem ser iniciadas simultaneamente. - Eficácia de Tempo: O total de tempo levado para todas as chamadas muda da soma dos tempos de chamadas individuais (sequencial) para o tempo levado para a chamada mais longa (paralelo). -#### Scenario without Declarative `eth_calls` +#### Cenário sem `eth_calls` Declarativas -Imagina que tens um subgraph que precisa fazer três chamadas no Ethereum para retirar dados sobre as transações, o saldo e as posses de token de um utilizador. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Tradicionalmente, estas chamadas podem ser realizadas em sequência: @@ -484,7 +484,7 @@ Tradicionalmente, estas chamadas podem ser realizadas em sequência: Total de tempo: 3 + 2 + 4 = 9 segundos -#### Scenario with Declarative `eth_calls` +#### Cenário com `eth_calls` Declarativas Com esta ferramenta, é possível declarar que estas chamadas sejam executadas em paralelo: @@ -496,17 +496,17 @@ Como estas chamadas são executadas em paralelo, o total de tempo é igual ao te Total de tempo = max (3, 2, 4) = 4 segundos -#### How it Works +#### Como Funciona -1. Definição Declarativa: No manifest do subgraph, as chamadas no Ethereum são declaradas de maneira que indique que elas possam ser executadas em paralelo. +1. 
Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Motor de Execução Paralela: O motor de execução do Graph Node reconhece estas declarações e executa as chamadas simultaneamente. -3. Agregação de Resultado: Quando todas as chamadas forem completadas, os resultados são agregados e usados pelo subgraph para mais processos. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. -#### Example Configuration in Subgraph Manifest +#### Exemplo de Configuração no Manifest do Subgraph -Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. +`eth_calls` declaradas podem acessar o `event.address` do evento subjacente, assim como todos os `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -519,12 +519,12 @@ calls: Detalhes para o exemplo acima: -- `global0X128` is the declared `eth_call`. -- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. -- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` -- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. +- `global0X128` é a `eth_call` declarada. +- O texto antes dos dois pontos (`global0X128`) é o rótulo para esta `eth_call` que é usado ao registar erros. +- O texto (`Pool[event.address].feeGrowthGlobal0X128()`) é a `eth_call` a ser executada, que está na forma do `Contract[address].function(arguments)` +- O `address` e o `arguments` podem ser substituídos por variáveis a serem disponibilizadas quando o handler for executado. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -533,24 +533,24 @@ calls: ### Como Enxertar em Subgraphs Existentes -> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). +> **Observação:** não é recomendado usar enxertos quando começar a atualização para a The Graph Network. Aprenda mais [aqui](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # ID do subgraph base - block: 7345624 # Número do bloco + base: Qm... 
# Subgraph ID of base Subgraph + block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Como o enxerto copia em vez de indexar dados base, dirigir o subgraph para o bloco desejado desta maneira é mais rápido que indexar do começo, mesmo que a cópia inicial dos dados ainda possa levar várias horas para subgraphs muito grandes. Enquanto o subgraph enxertado é inicializado, o Graph Node gravará informações sobre os tipos de entidade que já foram copiados. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -O subgraph enxertado pode usar um schema GraphQL que não é idêntico ao schema do subgraph base, mas é apenas compatível com ele. 
Ele deve ser um schema válido no seu próprio mérito, mas pode desviar do schema do subgraph base nas seguintes maneiras: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Ele adiciona ou remove tipos de entidade - Ele retira atributos de tipos de entidade @@ -560,4 +560,4 @@ O subgraph enxertado pode usar um schema GraphQL que não é idêntico ao schema - Ele adiciona ou remove interfaces - Ele muda os tipos de entidades para qual implementar uma interface -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/pt/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/pt/subgraphs/developing/creating/assemblyscript-mappings.mdx index e7d972a9d0bf..f6be7a46ee9c 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -1,16 +1,16 @@ --- -title: Writing AssemblyScript Mappings +title: Escrita de Mapeamentos de AssemblyScript --- ## Visão geral -The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. 
+Os mapeamentos tomam dados de uma fonte particular e os transformam em entidades que são definidas dentro do seu schema. São escritos em um subconjunto do [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) chamado [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki), que pode ser compilado para WASM ([WebAssembly](https://webassembly.org/)). O AssemblyScript é mais rígido que o TypeScript normal, mas rende uma sintaxe familiar. ## Como Escrever Mapeamentos -For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. +Para cada handler de evento definido no `subgraph.yaml` sob o `mapping.eventHandlers`, crie uma função exportada de mesmo nome. Cada handler deve aceitar um único parâmetro chamado `event` com um tipo a corresponder ao nome do evento a ser lidado. -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -37,30 +37,30 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. +O primeiro handler toma um evento `NewGravatar` e cria uma nova entidade `Gravatar` com o `new Gravatar(event.params.id.toHex())`, e assim popula os campos da entidade com os parâmetros de evento correspondentes.
Esta instância da entidade é representada pela variável `gravatar`, com um valor de id de `event.params.id.toHex()`. -The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`. +O segundo handler tenta carregar o `Gravatar` existente do armazenamento do Graph Node. Se ele ainda não existe, ele é criado por demanda. A entidade é então atualizada para corresponder aos novos parâmetros de evento, antes de ser devolvida ao armazenamento com `gravatar.save()`. ### IDs Recomendadas para Criar Novas Entidades -It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities. +Recomendamos muito utilizar `Bytes` como o tipo para campos `id`, e só usar o `String` para atributos que realmente contenham texto legível para humanos, como o nome de um token. Abaixo estão alguns valores recomendados de `id` para considerar ao criar novas entidades. - `transfer.id = event.transaction.hash` - `let id = event.transaction.hash.concatI32(event.logIndex.toI32())` -- For entities that store aggregated data, for e.g, daily trade volumes, the `id` usually contains the day number. Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like +- Para entidades que armazenam dados agregados como, por exemplo, volumes diários de trading, a `id` costuma conter o número do dia. Aqui, usar `Bytes` como a `id` é benéfico. Determinar a `id` pareceria com: ```typescript let dayID = event.block.timestamp.toI32() / 86400 let id = Bytes.fromI32(dayID) ``` -- Convert constant addresses to `Bytes`. +- Converta endereços constantes em `Bytes`.
`const id = Bytes.fromHexString('0xdead...beef')` -There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`. +Há uma [Biblioteca do Graph Typescript](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts), com utilidades para interagir com o armazenamento do Graph Node e conveniências para lidar com entidades e dados de contratos inteligentes. Ela pode ser importada ao `mapping.ts` do `@graphprotocol/graph-ts`. ### Gestão de entidades com IDs idênticas @@ -72,7 +72,7 @@ Se nenhum valor for inserido para um campo na nova entidade com a mesma ID, o ca ## Geração de Código -Para tornar mais fácil e seguro a tipos o trabalho com contratos inteligentes, eventos e entidades, o Graph CLI pode gerar tipos de AssemblyScript a partir do schema GraphQL do subgraph e das ABIs de contratos incluídas nas fontes de dados. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. 
Isto é feito com @@ -80,7 +80,7 @@ Isto é feito com graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. 
All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. 
diff --git a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..fe878f01f295 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/README.md @@ -6,7 +6,7 @@ TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to [The Graph](https://github.com/graphprotocol/graph-node). -## Usage +## Uso For a detailed guide on how to create a subgraph, please see the [Graph CLI docs](https://github.com/graphprotocol/graph-cli). diff --git a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/api.mdx index c9069e51a627..ee20c583603e 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,16 +2,16 @@ title: API AssemblyScript --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: -- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- A [biblioteca do Graph TypeScript](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from Subgraph files by `graph codegen` -You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). +Você também pode adicionar outras bibliotecas como dependências, contanto que sejam compatíveis com [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). -Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). +Já que os mapeamentos de linguagem são escritos em AssemblyScript, vale a pena consultar os recursos padrão de linguagem e biblioteca da [wiki do AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki). ## Referência da API @@ -27,7 +27,7 @@ A biblioteca `@graphprotocol/graph-ts` fornece as seguintes APIs: ### Versões -No manifest do subgraph, `apiVersion` especifica a versão da API de mapeamento, executada pelo Graph Node para um subgraph.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Versão | Notas de atualização | | :-: | --- | @@ -37,7 +37,7 @@ No manifest do subgraph, `apiVersion` especifica a versão da API de mapeamento, | 0.0.6 | Campo `nonce` adicionado ao objeto Ethereum Transaction
Campo `baseFeePerGas` adicionado ao objeto Ethereum Block | | 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | | 0.0.4 | Campo `functionSignature` adicionado ao objeto Ethereum SmartContractCall | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.3 | Campo `from` adicionado ao objeto de chamada no Ethereum
`ethereum.call.address` renomeado para `ethereum.call.to` | | 0.0.2 | Campo `input` adicionado ao objeto Ethereum Transaction | ### Tipos Embutidos @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' A API `store` permite carregar, salvar e remover entidades do/para o armazenamento do Graph Node. -As entidades escritas no armazenamento mapeam um-por-um com os tipos de `@entity` definidos no schema GraphQL do subgraph. Para trabalhar com estas entidades de forma conveniente, o comando `graph codegen` fornecido pelo [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) gera classes de entidades, que são subclasses do tipo embutido `Entity`, com getters e setters de propriedade para os campos no schema e métodos para carregar e salvar estas entidades. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Como criar entidades @@ -254,9 +254,9 @@ export function handleTransfer(event: TransferEvent): void { Quando um evento `Transfer` é encontrado durante o processamento da chain, ele é passado para o handler de evento `handleTransfer` com o tipo `Transfer` gerado (apelidado de `TransferEvent` aqui, para evitar confusões com o tipo de entidade). Este tipo permite o acesso a dados como a transação parente do evento e seus parâmetros. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. +Cada entidade deve ter um identificador exclusivo para evitar colisões com outras entidades.
É bastante comum que parâmetros de evento incluam um identificador exclusivo que pode ser usado. -> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +> Nota: Usar o hash de transação como ID supõe que nenhum outro evento na mesma transação cria entidades com este hash como o ID. #### Como carregar entidades a partir do armazenamento @@ -272,18 +272,18 @@ if (transfer == null) { // Use a entidade Transfer como antes ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. +Como a entidade pode ainda não existir no armazenamento, o método `load` retorna um valor de tipo `Transfer | null`. Portanto, é bom prestar atenção ao caso `null` antes de usar o valor. -> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Nota: Só é necessário carregar entidades se as mudanças feitas no mapeamento dependem dos dados anteriores de uma entidade. Veja a próxima seção para ver as duas maneiras de atualizar entidades existentes. #### Como consultar entidades criadas dentro de um bloco Desde o `graph-node` v0.31.0, o `@graphprotocol/graph-ts` v0.30.0 e o `@graphprotocol/graph-cli v0.49.0`, o método `loadInBlock` está disponível em todos os tipos de entidade. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. +A API do armazenamento facilita a recuperação de entidades que já foram criadas ou atualizadas no bloco atual. 
Uma situação típica para isso é que um manipulador cria uma transação a partir de algum evento em cadeia, e um handler posterior quer acessar esta transação — se ela existir. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // ou como a ID for construída @@ -380,11 +380,11 @@ A API do Ethereum fornece acesso a contratos inteligentes, variáveis de estado #### Apoio para Tipos no Ethereum -Assim como em entidades, o `graph codegen` gera classes para todos os contratos inteligentes e eventos usados em um subgraph. Para isto, as ABIs dos contratos devem ser parte da fonte de dados no manifest do subgraph. Tipicamente, os arquivos da ABI são armazenados em uma pasta `abis/`. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -Com as classes geradas, conversões entre tipos no Ethereum e os [tipos embutidos](#built-in-types) acontecem em segundo plano para que os autores de subgraphs não precisem se preocupar com elas. 
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -Veja um exemplo a seguir. Considerando um schema de subgraph como +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Acesso ao Estado do Contrato Inteligente -O código gerado pelo `graph codegen` também inclui classes para os contratos inteligentes usados no subgraph. Estes servem para acessar variáveis de estado público e funções de chamada do contrato no bloco atual. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. É comum acessar o contrato de qual origina um evento. Isto é feito com o seguinte código: @@ -506,13 +506,13 @@ O `Transfer` é apelidado de `TransferEvent` aqui para evitar confusões de nome Enquanto o `ERC20Contract` no Ethereum tiver uma função pública de apenas-leitura chamada `symbol`, ele pode ser chamado com o `.symbol()`. Para variáveis de estado público, um método com o mesmo nome é criado automaticamente. -Qualquer outro contrato que seja parte do subgraph pode ser importado do código gerado e ligado a um endereço válido. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Como Lidar com Chamadas Revertidas -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. +Se houver reversão dos métodos somente-leitura do seu contrato, cuide disso chamando o método do contrato gerado prefixado com `try_`. -- For example, the Gravity contract exposes the `gravatarToOwner` method. 
This code would be able to handle a revert in that method: +- Por exemplo, o contrato da Gravity expõe o método `gravatarToOwner`. Este código poderia manusear uma reversão nesse método: ```typescript let gravity = Gravity.bind(event.address) @@ -524,7 +524,7 @@ if (callResult.reverted) { } ``` -> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. +> Observe que um Graph Node conectado a um cliente Geth ou Infura pode não detetar todas as reversões; se depender disto, recomendamos usar um Graph Node conectado a um cliente Parity. #### ABI de Codificação/Decodificação @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // retorna false import { log } from '@graphprotocol/graph-ts' ``` -A API `log` permite que os subgraphs gravem informações à saída padrão do Graph Node, assim como ao Graph Explorer. Mensagens podem ser gravadas com níveis diferentes de log. É fornecida uma sintaxe básica de formatação de strings para compor mensagens de log do argumento. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. A API `log` inclui as seguintes funções: @@ -590,7 +590,7 @@ A API `log` inclui as seguintes funções: - `log.info(fmt: string, args: Array): void` - loga uma mensagem de debug. - `log.warning(fmt: string, args: Array): void` - loga um aviso. - `log.error(fmt: string, args: Array): void` - loga uma mensagem de erro. -- `log.critical(fmt: string, args: Array): void` – loga uma mensagem crítica _e_ encerra o subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. A API `log` toma um string de formato e um arranjo de valores de string. 
Ele então substitui os temporários com os valores de strings do arranjo. O primeiro `{}` temporário é substituído pelo primeiro valor no arranjo, o segundo `{}` temporário é substituído pelo segundo valor, e assim por diante. @@ -672,7 +672,7 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files onchain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +Contratos inteligentes, ocasionalmente, ancoram arquivos IPFS on-chain. Assim, os mapeamentos obtém os hashes IPFS do contrato e lêem os arquivos correspondentes do IPFS. Os dados dos arquivos serão retornados como `Bytes`, o que costuma exigir mais processamento; por ex., com a API `json` documentada mais abaixo nesta página. Considerando um hash ou local IPFS, um arquivo do IPFS é lido da seguinte maneira: @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) O único flag atualmente apoiado é o `json`, que deve ser passado ao `ipfs.map`. Com o flag `json`, o arquivo IPFS deve consistir de uma série de valores JSON, com um valor por linha. Chamar `ipfs.map`, irá ler cada linha no arquivo, desserializá-lo em um `JSONValue`, e chamar o callback para cada linha. O callback pode então armazenar dados do `JSONValue` com operações de entidade. As mudanças na entidade só serão armazenadas quando o handler que chamou o `ipfs.map` concluir com sucesso; enquanto isso, elas ficam na memória, e o tamanho do arquivo que o `ipfs.map` pode processar é então limitado. -Em caso de sucesso, o `ipfs.map` retorna `void`. Se qualquer invocação do callback causar um erro, o handler que invocou o `ipfs.map` é abortado, e o subgraph é marcado como falho. +On success, `ipfs.map` returns `void`. 
If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### API de Criptografia @@ -836,7 +836,7 @@ A classe base `Entity` e a subclasse `DataSourceContext` têm helpers para deter ### DataSourceContext no Manifest -A seção `context` dentro do `dataSources` lhe permite definir pares key-value acessíveis dentro dos seus mapeamentos de subgraph. Os tipos disponíveis são `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, e `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Aqui está um exemplo de YAML que ilustra o uso de vários tipos na seção `context`: @@ -887,4 +887,4 @@ dataSources: - `List`: Especifica uma lista de itens. Cada item deve especificar o seu tipo e dados. - `BigInt`: Especifica um valor inteiro grande. É necessário colocá-lo entre aspas devido ao seu grande tamanho. -Este contexto, então, pode ser acessado nos seus arquivos de mapeamento de subgraph, o que resulta em subgraphs mais dinâmicos e configuráveis. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/common-issues.mdx index 2f5f5b63c40a..32ea7ff586f9 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Problemas Comuns no AssemblyScript --- -É comum encontrar certos problemas no [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) durante o desenvolvimento do subgraph. Eles variam em dificuldade de debug, mas vale ter consciência deles.
A seguir, uma lista não exaustiva destes problemas: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: -- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. +- Variáveis de classe `Private` não são aplicadas no [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). Não há como evitar que estas variáveis sejam alteradas diretamente a partir do objeto de classe. - O escopo não é herdado em [funções de closure](https://www.assemblyscript.org/status.html#on-closures), por ex., não é possível usar variáveis declaradas fora de funções de closure. Há uma explicação [neste vídeo](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/pt/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/pt/subgraphs/developing/creating/install-the-cli.mdx index ca436b6eef1b..ee2f14a8e76f 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/install-the-cli.mdx @@ -2,39 +2,39 @@ title: Como instalar o Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). 
+> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Visão geral -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Como Começar ### Como instalar o Graph CLI -The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +A CLI do The Graph é escrita em TypeScript, e é necessário ter o `node`, e `npm` ou `yarn`, instalados para usá-la. Verifique a versão [mais recente](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) da CLI. 
Execute um dos seguintes comandos na sua máquina local: -#### Using [npm](https://www.npmjs.com/) +#### Uso do [npm](https://www.npmjs.com/) ```bash npm install -g @graphprotocol/graph-cli@latest ``` -#### Using [yarn](https://yarnpkg.com/) +#### Uso do [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Crie um Subgraph ### De um Contrato Existente -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -45,75 +45,61 @@ graph init \ [] ``` -- The command tries to retrieve the contract ABI from Etherscan. +- O comando tenta resgatar a ABI do contrato do Etherscan. - - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + - A CLI do The Graph depende de um endpoint público de RPC. Enquanto falhas ocasionais são de se esperar, basta tentar de novo para resolver. Se as falhas persistirem, considere usar uma ABI local. -- If any of the optional arguments are missing, it guides you through an interactive form. +- Se faltar algum dos argumentos opcionais, você será guiado por um formulário interativo. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/).
It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. ### De um Exemplo de Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -### Add New `dataSources` to an Existing Subgraph +### Como Adicionar Novos `dataSources` para um Subgraph Existente -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. 
-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] -Options: +Opções: - --abi Path to the contract ABI (default: download from Etherscan) - --contract-name Name of the contract (default: Contract) - --merge-entities Whether to merge entities with the same name (default: false) - --network-file Networks config file path (default: "./networks.json") + --abi Caminho à ABI do contrato (padrão: baixar do Etherscan) + --contract-name Nome do contrato (padrão: Contract) + --merge-entities Se fundir ou não entidades com o mesmo nome (padrão: false) + --network-file Caminho ao arquivo de configuração das redes (padrão: "./networks.json") ``` #### Especificações -The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. +O comando `graph add` pegará a ABI do Etherscan (a não ser que um local de ABI seja especificado com a opção `--abi`), e criará um novo `dataSource` da mesma maneira que o comando `graph init` cria um `dataSource` `--from-contract`, assim atualizando o schema e os mapeamentos de acordo. Assim, é possível indexar contratos de implementação a partir dos seus contratos proxy. -- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +- A opção `--merge-entities` identifica como o programador gostaria de lidar com conflitos de nome em `entity` e `event`: - - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + - Se for `true`: o novo `dataSource` deve usar `eventHandlers` e `entities` existentes. - - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + - Se for `false`: um novo handler de `entity` e `event` deve ser criado com `${dataSourceName}{EventName}`. -- The contract `address` will be written to the `networks.json` for the relevant network.
+- O `address` (endereço de contrato) será escrito no `networks.json` para a rede relevante. -> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +> Observação: Quando usar a CLI interativa, após executar o `graph init` com êxito, você receberá uma solicitação para adicionar um novo `dataSource`. -### Getting The ABIs +### Como Obter as ABIs Os arquivos da ABI devem combinar com o(s) seu(s) contrato(s). Há algumas maneiras de obter estes arquivos: - Caso construa o seu próprio projeto, provavelmente terá acesso às suas ABIs mais recentes. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Versão | Notas de atualização | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). 
| -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Adicionado apoio a handlers de eventos com acesso a recibos de transação. | -| 0.0.4 | Adicionado apoio à gestão de recursos de subgraph. | +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/pt/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/pt/subgraphs/developing/creating/ql-schema.mdx index db1f1f513082..9fd41c7e9594 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/ql-schema.mdx @@ -1,28 +1,28 @@ --- -title: The Graph QL Schema +title: O Schema GraphQL --- ## Visão geral -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. -> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. +> Nota: Se você nunca escreveu um schema em GraphQL, recomendamos que confira este manual sobre o sistema de tipos da GraphQL. Consulte a documentação sobre schemas GraphQL na seção sobre a [API da GraphQL](/subgraphs/querying/graphql-api/). 
-### Defining Entities -Before defining entities, it is important to take a step back and think about how your data is structured and linked. +### Como Definir Entidades +Antes de definir as entidades, é importante dar um passo atrás e pensar em como os seus dados são estruturados e ligados. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. -- It may be useful to imagine entities as "objects containing data", rather than as events or functions. -- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. -- Each type that should be an entity is required to be annotated with an `@entity` directive. -- By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity. - - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. - - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. Immutable entities are much faster to write and to query so they should be used whenever possible. +- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. +- Pode ser bem útil imaginar entidades como "objetos que contêm dados", e não como eventos ou funções. +- Você define os tipos de entidade em `schema.graphql`, e o Graph Node irá gerar campos de nível superior para queries de instâncias únicas e coleções desse tipo de entidade.
+- Cada tipo feito para ser uma entidade precisa ser anotado com uma diretiva `@entity`. +- Por padrão, as entidades são mutáveis, ou seja: os mapeamentos podem carregar as entidades existentes, modificá-las, e armazenar uma nova versão dessa entidade. + - A mutabilidade tem um preço, então, para tipos de entidade que nunca serão modificados, como as que contêm dados extraídos da chain sem alterações, recomendamos marcá-los como imutáveis com `@entity(immutable: true)`. + - Se as alterações acontecerem no mesmo bloco em que a entidade foi criada, então os mapeamentos podem fazer alterações em entidades imutáveis. Entidades imutáveis são muito mais rápidas de escrever e consultar em query, então elas devem ser usadas sempre que possível. #### Bom Exemplo -The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. +A entidade `Gravatar` abaixo é estruturada em torno de um objeto Gravatar, e é um bom exemplo de como pode ser definida uma entidade. ```graphql type Gravatar @entity(immutable: true) { @@ -36,7 +36,7 @@ type Gravatar @entity(immutable: true) { #### Mau Exemplo -The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. +As entidades `GravatarAccepted` e `GravatarDeclined` abaixo baseiam-se em eventos. Não é recomendado mapear eventos ou chamadas de função a entidades numa relação de 1:1. ```graphql type GravatarAccepted @entity { @@ -56,32 +56,32 @@ type GravatarDeclined @entity { #### Campos Opcionais e Obrigatórios -Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error: +Os campos da entidade podem ser definidos como obrigatórios ou opcionais.
Os campos obrigatórios são indicados pelo `!` no schema. Se o campo for escalar, tentar armazenar a entidade causará um erro. Se o campo fizer referência a outra entidade, você receberá esse erro: ``` Null value resolved for non-null field 'name' ``` -Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` id's will be faster to write and query as those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`. +Cada entidade deve ter um campo `id`, que deve ser do tipo `Bytes!` ou `String!`. Geralmente é melhor usar `Bytes!`, a não ser que o `id` tenha texto legível para humanos, já que entidades com as ids `Bytes!` são mais rápidas de escrever e consultar do que aquelas com um `id` `String!`. O campo `id` serve como a chave primária, e deve ser único entre todas as entidades do mesmo tipo. Por razões históricas, o tipo `ID!` também é aceite, como um sinónimo de `String!`. -For some entity types the `id` for `Bytes!` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id) ` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`. +Para alguns tipos de entidade, o `id` é construído das id's de duas outras entidades; isto é possível com o `concat`, por ex., `let id = left.id.concat(right.id)` para formar a id a partir das id's de `left` e `right`.
Da mesma forma, para construir uma id a partir da id de uma entidade existente e um contador `count`, pode ser usado o `let id = left.id.concatI32(count)`. Garante-se que a concatenação produza id's únicas enquanto o comprimento do `left` for o mesmo para todas essas entidades; por exemplo, porque o `left.id` é um `Address` (endereço). ### Tipos Embutidos de Escalar #### Escalares Apoiados pelo GraphQL -The following scalars are supported in the GraphQL API: +Os seguintes escalares são apoiados na API da GraphQL: | Tipo | Descrição | | --- | --- | | `Bytes` | Arranjo de bytes, representado como string hexadecimal. Usado frequentemente por hashes e endereços no Ethereum. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| `String` | Escalar para valores `string`. Caracteres nulos não são apoiados e serão removidos automaticamente. | +| `Boolean` | Escalar para valores `boolean`. | +| `Int` | A especificação da GraphQL define `Int` como um inteiro assinado de 32 bits.
| +| `Int8` | Um número inteiro assinado de 8 bytes, também conhecido como um número inteiro assinado de 64 bits, pode armazenar valores de -9,223,372,036,854,775,808 a 9,223,372,036,854,775,807. É melhor usar isto para representar o `i64` do Ethereum. | +| `BigInt` | Números inteiros grandes. Usados para os tipos `uint32`, `int64`, `uint64`, ..., `uint256` do Ethereum. Nota: Tudo abaixo de `uint32`, como `int32`, `uint24` ou `int8` é representado como `i32`. | +| `BigDecimal` | Decimais de alta precisão `BigDecimal` representados como um significando e um exponente. O alcance de exponentes é de -6143 até +6144. Arredondado para 34 dígitos significantes. | +| `Timestamp` | É um valor `i64` em microssegundos. Usado frequentemente para campos `timestamp` para séries temporais e agregações. | ### Enums @@ -95,9 +95,9 @@ enum TokenStatus { } ``` -Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field: +Quando o enum for definido no schema, pode usar a representação em string do valor enum para definir um campo enum numa entidade. Por exemplo, pode definir o `tokenStatus` como `SecondOwner` ao criar primeiro a sua entidade e depois atribuir o campo com `entity.tokenStatus = "SecondOwner"`. O exemplo abaixo demonstra como ficaria a entidade do Token com um campo enum: -More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). +Para saber mais sobre como escrever enums, veja a [documentação do GraphQL](https://graphql.org/learn/schema/).
### Relacionamentos de Entidades @@ -107,7 +107,7 @@ Relacionamentos são definidos em entidades como qualquer outro campo, sendo que #### Relacionamentos Um-com-Um -Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: +Defina um tipo de entidade `Transaction` com um relacionamento um-com-um opcional, com um tipo de entidade `TransactionReceipt`: ```graphql type Transaction @entity(immutable: true) { @@ -123,7 +123,7 @@ type TransactionReceipt @entity(immutable: true) { #### Relacionamentos Um-com-Vários -Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: +Defina um tipo de entidade `TokenBalance` com um relacionamento um-com-vários obrigatório com um tipo de entidade `Token`: ```graphql type Token @entity(immutable: true) { @@ -139,13 +139,13 @@ type TokenBalance @entity { ### Buscas Reversas -Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. +Buscas reversas podem ser definidas numa entidade pelo campo `@derivedFrom`. Isto cria um campo virtual na entidade, que pode ser consultado, mas não pode ser configurado manualmente pela API de mapeamentos. Em vez disto, ele é derivado do relacionamento definido na outra entidade. Para tais relacionamentos, raramente faz sentido armazenar ambos os lados do relacionamento, e o desempenho tanto do indexing quanto dos queries será melhor quando apenas um lado for armazenado, e o outro derivado.
-Para relacionamentos um-com-vários, o relacionamento sempre deve ser armazenado no lado 'um', e o lado 'vários' deve sempre ser derivado. Armazenar o relacionamento desta maneira, em vez de armazenar um arranjo de entidades no lado 'vários', melhorará dramaticamente o desempenho para o indexing e os queries no subgraph. Em geral, evite armazenar arranjos de entidades enquanto for prático. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Exemplo -We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: +Podemos tornar os saldos de um token acessíveis a partir do próprio token ao derivar um campo `tokenBalances`: ```graphql type Token @entity(immutable: true) { @@ -160,15 +160,15 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript -let token = new Token(event.address) // Create Token +token.save() // tokenBalances is derived automatically +let token = new Token(event.address) // Crie o Token +token.save() // tokenBalances é derivado automaticamente let tokenBalance = new TokenBalance(event.address) tokenBalance.amount = BigInt.fromI32(0) -tokenBalance.token = token.id // Reference stored here +tokenBalance.token = token.id // Referência armazenada aqui tokenBalance.save() ``` @@ -178,7 +178,7 @@ Para relacionamentos vários-com-vários, como um conjunto de utilizadores em qu #### Exemplo -Define a reverse lookup from a `User` entity type to an `Organization` entity type.
In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +Defina uma busca reversa a partir de um tipo de entidade `User` para um tipo de entidade `Organization`. No exemplo abaixo, isto é feito ao buscar pelo atributo `members` a partir de dentro da entidade `Organization`. Em queries, o campo `organizations` no `User` será resolvido ao encontrar todas as entidades `Organization` que incluem a ID do utilizador. ```graphql type Organization @entity { @@ -194,7 +194,7 @@ type User @entity { } ``` -A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like +Uma maneira mais eficiente para armazenar este relacionamento é com uma tabela de mapeamento que tem uma entrada para cada par de `User` / `Organization`, com um schema como: ```graphql type Organization @entity { @@ -231,11 +231,11 @@ query usersWithOrganizations { } ``` -Esta maneira mais elaborada de armazenar relacionamentos vários-com-vários armazenará menos dados para o subgraph, portanto, o subgraph ficará muito mais rápido de indexar e consultar. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Como adicionar comentários ao schema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: +Pela especificação do GraphQL, é possível adicionar comentários acima de atributos de entidade do schema com o símbolo de hash `#`. Isto é ilustrado no exemplo abaixo: ```graphql type MyFirstEntity @entity { @@ -251,7 +251,7 @@ Buscas fulltext filtram e ordenam entidades baseadas num texto inserido.
Queries Uma definição de query fulltext inclui: o nome do query, o dicionário do idioma usado para processar os campos de texto, o algoritmo de ordem usado para ordenar os resultados, e os campos incluídos na busca. Todo query fulltext pode ter vários campos, mas todos os campos incluídos devem ser de um único tipo de entidade. -To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. +Para adicionar um query fulltext, inclua um tipo `_Schema_` com uma diretiva fulltext no schema em GraphQL. ```graphql type _Schema_ @@ -274,7 +274,7 @@ type Band @entity { } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/subgraphs/querying/graphql-api/#queries) for a description of the fulltext search API and more example usage. +O exemplo `bandSearch` serve, em queries, para filtrar entidades `Band` baseadas nos documentos de texto nos campos `name`, `description` e `bio`. Confira a página [API GraphQL - Consultas](/subgraphs/querying/graphql-api/#queries) para uma descrição da API de busca fulltext e mais exemplos de uso. ```graphql query { @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. 
## Idiomas apoiados @@ -295,30 +295,30 @@ Escolher um idioma diferente terá um efeito definitivo, porém às vezes sutil, Dicionários apoiados: -| Code | Dicionário | -| ------ | ---------- | -| simple | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | Português | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| Código | Dicionário | +| ------ | ----------- | +| simple | Geral | +| da | Dinamarquês | +| nl | Neerlandês | +| en | Inglês | +| fi | Finlandês | +| fr | Francês | +| de | Alemão | +| hu | Húngaro | +| it | Italiano | +| no | Norueguês | +| pt | Português | +| ro | Romeno | +| ru | Russo | +| es | Espanhol | +| sv | Sueco | +| tr | Turco | ### Algoritmos de Ordem Algoritmos apoiados para a organização de resultados: -| Algorithm | Description | +| Algoritmo | Descrição | | ------------- | --------------------------------------------------------------------------------- | | rank | Organiza os resultados pela qualidade da correspondência (0-1) da busca fulltext. | -| proximityRank | Similar to rank but also includes the proximity of the matches. | +| proximityRank | Similar ao rank, mas também inclui a proximidade das combinações. | diff --git a/website/src/pages/pt/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/pt/subgraphs/developing/creating/starting-your-subgraph.mdx index 1b70a2ec98ad..e80ca1803b20 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -1,23 +1,35 @@ --- -title: Starting Your Subgraph +title: Como Iniciar o Seu Subgraph --- ## Visão geral -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. -### Start Building +### Comece a Construir -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: -1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component -3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema -4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +1. [Como Instalar a CLI](/subgraphs/developing/creating/install-the-cli/) — Configure a sua infraestrutura +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component +3. [Schema da GraphQL](/subgraphs/developing/creating/ql-schema/) — Escreva o seu schema +4. [Como Escrever Mapeamentos em AssemblyScript](/subgraphs/developing/creating/assemblyscript-mappings/) — Escreva os seus mapeamentos +5. 
[Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features -Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). +Explore mais [recursos para APIs](/subgraphs/developing/creating/graph-ts/README/) e realize testes locais com [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Versão | Notas de atualização | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/pt/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/pt/subgraphs/developing/creating/subgraph-manifest.mdx index 2a4c3af44fe4..92002efba848 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/subgraph-manifest.mdx @@ -1,35 +1,35 @@ --- -title: Subgraph Manifest +title: Manifest do Subgraph --- ## Visão geral -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL -- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) +- `mapping.ts`: Código de [Mapeamentos do AssemblyScript](https://github.com/AssemblyScript/assemblyscript) que traduz dados de eventos para entidades definidas no seu schema (por exemplo, `mapping.ts` neste guia) -### Subgraph Capabilities +### Capacidades do Subgraph -A single subgraph can: +A single Subgraph can: -- Index data from multiple smart contracts (but not multiple networks). 
+- Indexar dados de vários contratos inteligentes (mas não de múltiplas redes). -- Index data from IPFS files using File Data Sources. +- Indexar dados de arquivos IPFS usando Fontes de Dados de Arquivo. -- Add an entry for each contract that requires indexing to the `dataSources` array. +- Adicionar uma entrada para cada contrato que precisa ser indexado para o arranjo `dataSources`. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -77,49 +77,49 @@ dataSources: file: ./src/mapping.ts ``` -## Subgraph Entries +## Entradas do Subgraph -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). As entradas importantes para atualizar para o manifest são: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. 
See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. -- `features`: a list of all used [feature](#experimental-features) names. +- `features`: é uma lista de todos os [nomes de função](#experimental-features) usados. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. -- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. +- `dataSources.source.startBlock`: o número opcional do bloco de onde a fonte de dados começa a indexar. Em muitos casos, sugerimos usar o bloco em que o contrato foi criado. 
-- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. +- `dataSources.source.endBlock`: O número opcional do bloco onde a fonte de dados pára de indexar, inclusive aquele bloco. Versão de spec mínima exigida: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. -- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. +- `dataSources.mapping.entities`: as entidades que a fonte de dados escreve ao armazenamento. O schema para cada entidade é definido no arquivo schema.graphql. -- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. +- `dataSources.mapping.abis`: um ou mais arquivos de ABI nomeados para o contrato de origem, além de quaisquer outros contratos inteligentes com os quais interage de dentro dos mapeamentos. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Handlers de Eventos -Handlers de eventos em um subgraph reagem a eventos específicos emitidos por contratos inteligentes na blockchain e acionam handlers definidos no manifest do subgraph. 
Isto permite que subgraphs processem e armazenem dados conforme a lógica definida. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Como Definir um Handler de Evento -Um handler de evento é declarado dentro de uma fonte de dados na configuração YAML do subgraph. Ele especifica quais eventos devem ser escutados e a função correspondente a ser executada quando estes eventos forem detetados. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -144,20 +144,20 @@ dataSources: handler: handleApproval - event: Transfer(address,address,uint256) handler: handleTransfer - topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Filtro de tópico opcional que só filtra eventos com o tópico especificado. + topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic. ``` ## Handlers de chamada -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. 
To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Handlers de chamadas só serão ativados em um de dois casos: quando a função especificada é chamada por uma conta que não for do próprio contrato, ou quando ela é marcada como externa no Solidity e chamada como parte de outra função no mesmo contrato. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. 
These are far more performant than call handlers, and are supported on every EVM network. ### Como Definir um Handler de Chamada -To define a call handler in your manifest, simply add a `callHandlers` array under the data source you would like to subscribe to. +Para definir um handler de chamada no seu manifest, apenas adicione um arranjo `callHandlers` sob a fonte de dados para a qual quer se inscrever. ```yaml dataSources: @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -182,11 +182,11 @@ dataSources: handler: handleCreateGravatar ``` -The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. +O `function` é a assinatura de função normalizada para filtrar chamadas. A propriedade `handler` é o nome da função no mapeamento que quer executar quando a função-alvo é chamada no contrato da fonte de dados. ### Função de Mapeamento -Each call handler takes a single parameter that has a type corresponding to the name of the called function. 
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -201,11 +201,11 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. +A função `handleCreateGravatar` toma um novo `CreateGravatarCall`, que é uma subclasse de `ethereum.Call`, fornecida pelo `@graphprotocol/graph-ts`, que inclui as entradas e saídas tipadas da chamada. O tipo `CreateGravatarCall` é gerado ao executar o `graph codegen`. ## Handlers de Blocos -Além de se inscrever a eventos de contratos ou chamadas para funções, um subgraph também pode querer atualizar os seus dados enquanto novos blocos são afixados à chain. Para isto, um subgraph pode executar uma função após cada bloco, ou após blocos que correspondem a um filtro predefinido. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Filtros Apoiados @@ -216,9 +216,9 @@ filter: kind: call ``` -_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ +_O handler definido será chamado uma vez para cada bloco que contém uma chamada ao contrato (fonte de dados) sob o qual o handler está definido._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. 
If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. A ausência de um filtro para um handler de blocos garantirá que o handler seja chamado a todos os blocos. Uma fonte de dados só pode conter um handler de bloco para cada tipo de filtro. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -249,9 +249,9 @@ dataSources: #### Filtro Polling -> **Requires `specVersion` >= 0.0.8** +> **Requer `specVersion` >= 0.0.8** > -> **Note:** Polling filters are only available on dataSources of `kind: ethereum`. +> **Nota:** Filtros de polling só estão disponíveis nas dataSources de `kind: ethereum`. ```yaml blockHandlers: @@ -261,13 +261,13 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Filtro Once -> **Requires `specVersion` >= 0.0.8** +> **Requer `specVersion` >= 0.0.8** > -> **Note:** Once filters are only available on dataSources of `kind: ethereum`. +> **Observação:** Filtros de once só estão disponíveis nas dataSources de `kind: ethereum`. 
```yaml blockHandlers: @@ -276,7 +276,7 @@ kind: once ``` -O handler definido com o filtro once só será chamado uma única vez antes da execução de todos os outros handlers (por isto, o nome "once" / "uma vez"). Esta configuração permite que o subgraph use o handler como um handler de inicialização, para realizar tarefas específicas no começo da indexação. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Função de Mapeamento -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -311,13 +311,13 @@ eventHandlers: handler: handleGive ``` -An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. +Um evento só será ativado quando a assinatura e o topic 0 corresponderem. Por padrão, o `topic0` é igual ao hash da assinatura do evento. ## Recibos de Transação em Handlers de Eventos -Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. +A partir do `specVersion` `0.0.5` e `apiVersion` `0.0.7`, os handlers de eventos podem acessar o recibo para a transação que os emitiu. 
-To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -326,7 +326,7 @@ eventHandlers: receipt: true ``` -Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead. +Dentro da função do handler, o recibo pode ser acessado no campo `Event.receipt`. Quando a chave `receipt` é configurada em `false`, ou omitida no manifest, um valor `null` será retornado em vez disto. ## Ordem de Handlers de Gatilhos @@ -338,17 +338,17 @@ Os gatilhos para uma fonte de dados dentro de um bloco são ordenados com o segu Estas regras de organização estão sujeitas à mudança. -> **Note:** When new [dynamic data source](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +> **Observe:** Quando novas [fontes de dados dinâmicas](#data-source-templates-for-dynamically-created-contracts) forem criadas, os handlers definidos para fontes de dados dinâmicas só começarão o processamento após todos os handlers existentes forem processados, e repetirão a mesma sequência quando ativados. ## Modelos de Fontes de Dados Um padrão comum em contratos inteligentes compatíveis com EVMs é o uso de contratos de registro ou fábrica. Nisto, um contrato cria, gesta ou refere a um número arbitrário de outros contratos, cada um com o seu próprio estado e eventos. -The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. 
This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. +Os endereços destes subcontratos podem ou não ser conhecidos imediatamente, e muitos destes contratos podem ser criados e/ou adicionados ao longo do tempo. É por isto que, em muitos casos, é impossível definir uma única fonte de dados ou um número fixo de fontes de dados, e é necessária uma abordagem mais dinâmica: _modelos de fontes de dados_. ### Fonte de Dados para o Contrato Principal -First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.org) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created onchain by the factory contract. +Primeiro, defina uma fonte de dados regular para o contrato principal. Abaixo está um exemplo simplificado de fonte de dados para o contrato de fábrica de trocas do [Uniswap](https://uniswap.org). Preste atenção ao handler de evento `NewExchange(address,address)`: é emitido quando um novo contrato de troca é criado on-chain pelo contrato de fábrica. ```yaml dataSources: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -375,13 +375,13 @@ dataSources: ### Modelos de Fontes de Dados para Contratos Criados Dinamicamente -Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a pre-defined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. +Depois, adicione _modelos de fontes de dados_ ao manifest. 
Estes são idênticos a fontes de dados regulares, mas não têm um endereço de contrato predefinido sob `source`. Tipicamente, é possível definir um modelo para cada tipo de subcontrato administrado ou referenciado pelo contrato parente. ```yaml dataSources: - kind: ethereum/contract name: Factory - # ... outros campos de fonte para o contrato principal ... + # ... other source fields for the main contract ... templates: - name: Exchange kind: ethereum/contract @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -411,7 +411,7 @@ templates: ### Como Instanciar um Modelo de Fontes de Dados -In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. +Na etapa final, atualize o mapeamento do seu contrato principal para criar uma instância dinâmica de fonte de dados a partir de um dos modelos. Neste exemplo, você mudaria o mapeamento do contrato principal para importar o modelo `Exchange` e chamar o método `Exchange.create(address)` nele, para começar a indexar o novo contrato de troca. ```typescript import { Exchange } from '../generated/templates' @@ -423,13 +423,13 @@ export function handleNewExchange(event: NewExchange): void { } ``` -> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> **Observação:** Uma nova fonte de dados só processará as chamadas e eventos para o bloco onde ela foi criada e todos os blocos a seguir. Porém, não serão processados dados históricos, por ex., dados contidos em blocos anteriores. 
> > Se blocos anteriores contiverem dados relevantes à nova fonte, é melhor indexá-los ao ler o estado atual do contrato e criar entidades que representem aquele estado na hora que a nova fonte de dados for criada. ### Contextos de Fontes de Dados -Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: +Contextos de fontes de dados permitem passar configurações extras ao instanciar um modelo. No nosso exemplo, vamos dizer que há trocas associadas com um par de trading particular, incluído no evento `NewExchange`. Essa informação pode ser passada na fonte de dados instanciada, como: ```typescript import { Exchange } from '../generated/templates' @@ -441,7 +441,7 @@ export function handleNewExchange(event: NewExchange): void { } ``` -Inside a mapping of the `Exchange` template, the context can then be accessed: +Dentro de um mapeamento do modelo `Exchange`, dá para acessar o contexto: ```typescript import { dataSource } from '@graphprotocol/graph-ts' @@ -450,11 +450,11 @@ let context = dataSource.context() let tradingPair = context.getString('tradingPair') ``` -There are setters and getters like `setString` and `getString` for all value types. +Há setters e getters como `setString` e `getString` para todos os tipos de valores. ## Blocos Iniciais -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing.
Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -480,24 +480,24 @@ dataSources: handler: handleNewEvent ``` -> **Note:** The contract creation block can be quickly looked up on Etherscan: +> **Observe:** O bloco de criação do contrato pode ser buscado rapidamente no Etherscan: > > 1. Procure pelo contrato ao inserir o seu endereço na barra de busca. -> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 2. Clique no hash da transação de criação na seção `Contract Creator`. > 3. Carregue a página dos detalhes da transação, onde encontrará o bloco inicial para aquele contrato. ## IndexerHints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. -> This feature is available from `specVersion: 1.0.0` +> Este recurso está disponível a partir da `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. 
Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +1. `"never"`: Nenhum pruning de dados históricos; retém o histórico completo. +2. `"auto"`: Retém o histórico mínimo necessário determinado pelo Indexador e otimiza o desempenho das queries. 3. Um número específico: Determina um limite personalizado no número de blocos históricos a guardar. ``` @@ -505,25 +505,25 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> O termo "histórico", neste contexto de subgraphs, refere-se ao armazenamento de dados que refletem os estados antigos de entidades mutáveis. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. O histórico, desde um bloco especificado, é necessário para: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rebobinar o subgraph de volta àquele bloco +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block Se os dados históricos desde aquele bloco tiverem passado por pruning, as capacidades acima não estarão disponíveis. -> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. 
+> Vale usar o `"auto"`, por maximizar o desempenho de queries e ser suficiente para a maioria dos utilizadores que não exigem acesso a dados extensos no histórico. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: Para reter uma quantidade específica de dados históricos: ``` indexerHints: - prune: 1000 # Replace 1000 with the desired number of blocks to retain + prune: 1000 # Substitua 1000 pelo número de blocos que deseja reter ``` Para preservar o histórico completo dos estados da entidade: @@ -532,3 +532,18 @@ Para preservar o histórico completo dos estados da entidade: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Versão | Notas de atualização | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. 
| +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/pt/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/pt/subgraphs/developing/creating/unit-testing-framework.mdx index 0b92f77c0f4f..c1676c2773d7 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,52 +2,52 @@ title: Estrutura de Testes de Unidades --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. -## Benefits of Using Matchstick +## Vantagens de Usar o Matchstick -- It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. 
+- É escrito em Rust e otimizado para o melhor desempenho possível. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Como Começar -### Install Dependencies +### Como Instalar Dependências -In order to use the test helper methods and run tests, you need to install the following dependencies: +Para usar os métodos de test helper e executar os testes, instale as seguintes dependências: ```sh yarn add --dev matchstick-as ``` -### Install PostgreSQL +### Como Instalar o PostgreSQL -`graph-node` depends on PostgreSQL, so if you don't already have it, then you will need to install it. +O `graph-node` depende do PostgreSQL, então se ainda não o tem, será necessário instalá-lo. -> Note: It's highly recommended to use the commands below to avoid unexpected errors. +> Observação: É altamente recomendável usar os comandos abaixo para evitar erros inesperados. -#### Using MacOS +#### Usando o MacOS -Installation command: +Comando de instalação: ```sh brew install postgresql ``` -Create a symlink to the latest libpq.5.lib _You may need to create this dir first_ `/usr/local/opt/postgresql/lib/` +Crie um symlink ao último libpq.5.lib. _Talvez precise criar este diretório primeiro:_ `/usr/local/opt/postgresql/lib/` ```sh ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib ``` -#### Using Linux +#### Usando o Linux -Installation command (depends on your distro): +Comando de instalação do Postgres (depende da sua distro): ```sh sudo apt install postgresql ``` -### Using WSL (Windows Subsystem for Linux) +### Usando o WSL (Subsistema do Windows para o Linux) Pode usar o Matchstick no WSL tanto com a abordagem do Docker quanto com a abordagem binária.
Como o WSL pode ser um pouco complicado, aqui estão algumas dicas caso encontre problemas @@ -61,13 +61,13 @@ ou /node_modules/gluegun/build/index.js:13 throw up; ``` -Please make sure you're on a newer version of Node.js graph-cli doesn't support **v10.19.0** anymore, and that is still the default version for new Ubuntu images on WSL. For instance Matchstick is confirmed to be working on WSL with **v18.1.0**, you can switch to it either via **nvm** or if you update your global Node.js. Don't forget to delete `node_modules` and to run `npm install` again after updating you nodejs! Then, make sure you have **libpq** installed, you can do that by running +Verifique se está em uma versão mais recente do Node.js. O graph-cli não apoia mais a **v10.19.0**, que ainda é a versão padrão para novas imagens de Ubuntu no WSL. Por exemplo, o Matchstick é confirmado como funcional no WSL com a **v18.1.0**; pode trocar para essa versão através do **nvm** ou ao atualizar o seu Node.js global. Não se esqueça de apagar o `node_modules` e executar o `npm install` novamente após atualizar o seu nodejs! Depois, garanta que tem o **libpq** instalado. Isto pode ser feito ao executar:
Para isto, obviamente você precisa de um script `"test"` no seu arquivo `package.json`, que pode ser algo simples como ```json { @@ -85,9 +85,9 @@ And finally, do not use `graph test` (which uses your global installation of gra } ``` -### Using Matchstick +### Usando o Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### Opções de CLI @@ -109,11 +109,11 @@ Isto só executará esse arquivo de teste específico: graph test path/to/file.test.ts ``` -**Options:** +**Opções:** ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -123,21 +123,21 @@ graph test path/to/file.test.ts ### Docker -From `graph-cli 0.25.2`, the `graph test` command supports running `matchstick` in a docker container with the `-d` flag. The docker implementation uses [bind mount](https://docs.docker.com/storage/bind-mounts/) so it does not have to rebuild the docker image every time the `graph test -d` command is executed. 
Alternatively you can follow the instructions from the [matchstick](https://github.com/LimeChain/matchstick#docker-) repository to run docker manually. +Desde o `graph-cli 0.25.2`, o comando `graph test` apoia a execução do `matchstick` em um container docker com a flag `-d`. A implementação do docker utiliza o [bind mount](https://docs.docker.com/storage/bind-mounts/) para que não precise reconstruir a imagem do docker toda vez que o comando `graph test -d` for executado. Alternativamente, siga as instruções do repositório do [matchstick](https://github.com/LimeChain/matchstick#docker-) para executar o docker manualmente. -❗ `graph test -d` forces `docker run` to run with flag `-t`. This must be removed to run inside non-interactive environments (like GitHub CI). +❗ `graph test -d` força o `docker run` a ser executado com o flag `-t`. Isto deve ser removido para rodar em ambientes não interativos (como o GitHub CI). -❗ If you have previously ran `graph test` you may encounter the following error during docker build: +❗ Caso já tenha executado o `graph test` anteriormente, o seguinte erro pode aparecer durante a compilação do docker: ```sh error from sender: failed to xattr node_modules/binary-install-raw/bin/binary-: permission denied ``` -In this case create a `.dockerignore` in the root folder and add `node_modules/binary-install-raw/bin` +Neste caso, crie um `.dockerignore` na pasta raiz e adicione `node_modules/binary-install-raw/bin` ### Configuração -Matchstick can be configured to use a custom tests, libs and manifest path via `matchstick.yaml` config file: +O Matchstick pode ser configurado para usar um caminho personalizado de tests, libs e manifest através do arquivo de configuração `matchstick.yaml`: ```yaml testsFolder: path/to/tests @@ -145,25 +145,25 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Subgraph de demonstração +### Demo Subgraph -You can try out and play around with the examples from this guide by cloning the 
[Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) +Você pode experimentar com os exemplos deste guia clonando o [repositório de Subgraph Demonstrativo](https://github.com/LimeChain/demo-subgraph) ### Tutoriais de vídeo -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Estrutura de testes -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() -`describe(name: String , () => {})` - Defines a test group. +`describe(name: String , () => {})` — Define um grupo de teste. -**_Notes:_** +**_Observações:_** -- _Describes are not mandatory. You can still use test() the old way, outside of the describe() blocks_ +- _Describes (descrições) não são obrigatórias. O test() ainda pode ser usado da maneira antiga, fora dos blocos describe()_ Exemplo: @@ -172,27 +172,27 @@ import { describe, test } from "matchstick-as/assembly/index" import { handleNewGravatar } from "../../src/gravity" describe("handleNewGravatar()", () => { - test("Should create a new Gravatar entity", () => { + test("Isto deve criar uma nova entidade Gravatar", () => { ... }) }) ``` -Nested `describe()` example: +Exemplo aninhado de `describe()`: ```typescript import { describe, test } from "matchstick-as/assembly/index" import { handleUpdatedGravatar } from "../../src/gravity" describe("handleUpdatedGravatar()", () => { - describe("When entity exists", () => { - test("updates the entity", () => { + describe("Quando houver uma entidade", () => { + test("entidade atualizada", () => { ...
}) }) - describe("When entity does not exists", () => { - test("it creates a new entity", () => { + describe("Quando não houver uma entidade", () => { + test("nova entidade criada", () => { ... }) }) @@ -203,7 +203,7 @@ describe("handleUpdatedGravatar()", () => { ### test() -`test(name: String, () =>, should_fail: bool)` - Defines a test case. You can use test() inside of describe() blocks or independently. +`test(name: String, () =>, should_fail: bool)` — Define um caso de teste. O test() pode ser usado em blocos describe() ou de maneira independente. Exemplo: @@ -212,7 +212,7 @@ import { describe, test } from "matchstick-as/assembly/index" import { handleNewGravatar } from "../../src/gravity" describe("handleNewGravatar()", () => { - test("Should create a new Entity", () => { + test("Isto deve criar uma nova Entidade", () => { ... }) }) @@ -221,7 +221,7 @@ describe("handleNewGravatar()", () => { ou ```typescript -test("handleNewGravatar() should create a new entity", () => { +test("handleNewGravatar() deve criar uma nova entidade", () => { ... }) @@ -232,11 +232,11 @@ test("handleNewGravatar() should create a new entity", () => { ### beforeAll() -Runs a code block before any of the tests in the file. If `beforeAll` is declared inside of a `describe` block, it runs at the beginning of that `describe` block. +Executa um bloco de código antes de quaisquer dos testes no arquivo. Se o `beforeAll` for declarado dentro de um bloco `describe`, ele é executado no começo daquele bloco `describe`. Exemplos: -Code inside `beforeAll` will execute once before _all_ tests in the file. +O código dentro do `beforeAll` será executado uma vez antes de _todos_ os testes no arquivo. ```typescript import { describe, test, beforeAll } from "matchstick-as/assembly/index" @@ -250,39 +250,39 @@ beforeAll(() => { ...
}) -describe("When the entity does not exist", () => { - test("it should create a new Gravatar with id 0x1", () => { +describe("Quando a entidade não existe", () => { + test("ela deve criar um novo Gravatar com a id 0x1", () => { ... }) }) -describe("When entity already exists", () => { - test("it should update the Gravatar with id 0x0", () => { +describe("Quando a entidade já existe", () => { + test("ela deve atualizar o Gravatar com a id 0x0", () => { ... }) }) ``` -Code inside `beforeAll` will execute once before all tests in the first describe block +O código dentro do `beforeAll` será executado uma vez antes de todos os testes no primeiro bloco describe ```typescript -import { describe, test, beforeAll } from "matchstick-as/assembly/index" +import { describe, test, beforeAll } from "matchstick-as/assembly/index" import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity" import { Gravatar } from "../../generated/schema" describe("handleUpdatedGravatar()", () => { beforeAll(() => { let gravatar = new Gravatar("0x0") - gravatar.displayName = “First Gravatar” + gravatar.displayName = “Primeiro Gravatar” gravatar.save() ... }) - test("updates Gravatar with id 0x0", () => { + test("atualiza Gravatar com id 0x0", () => { ... }) - test("creates new Gravatar with id 0x1", () => { + test("cria novo Gravatar com id 0x1", () => { ... }) }) @@ -292,11 +292,11 @@ describe("handleUpdatedGravatar()", () => { ### afterAll() -Runs a code block after all of the tests in the file. If `afterAll` is declared inside of a `describe` block, it runs at the end of that `describe` block. +Executa um bloco de código depois de todos os testes no arquivo. Se o `afterAll` for declarado dentro de um bloco `describe`, ele será executado no final desse bloco `describe`. Exemplo: -Code inside `afterAll` will execute once after _all_ tests in the file. +O código dentro do `afterAll` será executado uma vez depois de _todos_ os testes no arquivo.
```typescript import { describe, test, afterAll } from "matchstick-as/assembly/index" @@ -309,19 +309,19 @@ afterAll(() => { }) describe("handleNewGravatar, () => { - test("creates Gravatar with id 0x0", () => { + test("cria Gravatar com id 0x0", () => { ... }) }) describe("handleUpdatedGravatar", () => { - test("updates Gravatar with id 0x0", () => { + test("atualiza Gravatar com id 0x0", () => { ... }) }) ``` -Code inside `afterAll` will execute once after all tests in the first describe block +O código dentro do `afterAll` será executado uma vez depois de todos os testes no primeiro bloco describe ```typescript import { describe, test, afterAll, clearStore } from "matchstick-as/assembly/index" @@ -333,17 +333,17 @@ describe("handleNewGravatar", () => { ... }) - test("It creates a new entity with Id 0x0", () => { + test("Cria uma nova entidade com Id 0x0", () => { ... }) - test("It creates a new entity with Id 0x1", () => { + test("Cria uma nova entidade com Id 0x1", () => { ... }) }) describe("handleUpdatedGravatar", () => { - test("updates Gravatar with id 0x0", () => { + test("atualiza Gravatar com id 0x0", () => { ... }) }) @@ -353,24 +353,24 @@ describe("handleUpdatedGravatar", () => { ### beforeEach() -Runs a code block before every test. If `beforeEach` is declared inside of a `describe` block, it runs before each test in that `describe` block. +Executa um bloco de código antes de cada teste no arquivo. Se o `beforeEach` for declarado dentro de um bloco `describe`, ele será executado antes de cada teste nesse bloco `describe`. -Examples: Code inside `beforeEach` will execute before each tests. +Exemplos: O código dentro do `beforeEach` será executado antes de cada teste. 
```typescript import { describe, test, beforeEach, clearStore } from "matchstick-as/assembly/index" import { handleNewGravatars } from "./utils" beforeEach(() => { - clearStore() // <-- clear the store before each test in the file + clearStore() // <-- limpa o armazenamento antes de cada teste no arquivo }) describe("handleNewGravatars, () => { - test("A test that requires a clean store", () => { + test("Teste que exige armazenamento limpo", () => { ... }) - test("Second that requires a clean store", () => { + test("Segundo que exige armazenamento limpo", () => { ... }) }) @@ -378,7 +378,7 @@ describe("handleNewGravatars, () => { ... ``` -Code inside `beforeEach` will execute only before each test in the that describe +O código dentro do `beforeEach` será executado apenas antes de cada teste nesse describe ```typescript import { describe, test, beforeEach } from 'matchstick-as/assembly/index' @@ -387,24 +387,24 @@ import { handleUpdatedGravatar, handleNewGravatar } from '../../src/gravity' describe('handleUpdatedGravatars', () => { beforeEach(() => { let gravatar = new Gravatar('0x0') - gravatar.displayName = 'First Gravatar' + gravatar.displayName = 'Primeiro Gravatar' gravatar.imageUrl = '' gravatar.save() }) - test('Updates the displayName', () => { - assert.fieldEquals('Gravatar', '0x0', 'displayName', 'First Gravatar') + test('Atualiza o displayName', () => { + assert.fieldEquals('Gravatar', '0x0', 'displayName', 'Primeiro Gravatar') - // code that should update the displayName to 1st Gravatar + // código que deve atualizar o displayName para 1o. Gravatar - assert.fieldEquals('Gravatar', '0x0', 'displayName', '1st Gravatar') + assert.fieldEquals('Gravatar', '0x0', 'displayName', '1o.
Gravatar') store.remove('Gravatar', '0x0') }) - test('Updates the imageUrl', () => { + test('Atualiza o imageUrl', () => { assert.fieldEquals('Gravatar', '0x0', 'imageUrl', '') - // code that should changes the imageUrl to https://www.gravatar.com/avatar/0x0 + // código que deve mudar imageUrl para https://www.gravatar.com/avatar/0x0 assert.fieldEquals('Gravatar', '0x0', 'imageUrl', 'https://www.gravatar.com/avatar/0x0') store.remove('Gravatar', '0x0') @@ -416,11 +416,11 @@ describe('handleUpdatedGravatars', () => { ### afterEach() -Runs a code block after every test. If `afterEach` is declared inside of a `describe` block, it runs after each test in that `describe` block. +Executa um bloco de código depois de cada teste no arquivo. Se o `afterEach` for declarado dentro de um bloco `describe`, será executado após cada teste nesse `describe`. Exemplos: -Code inside `afterEach` will execute after every test. +O código dentro do `afterEach` será executado após cada teste. ```typescript import { describe, test, beforeEach, afterEach } from "matchstick-as/assembly/index" @@ -428,7 +428,7 @@ import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity" beforeEach(() => { let gravatar = new Gravatar("0x0") - gravatar.displayName = “First Gravatar” + gravatar.displayName = “Primeiro Gravatar” gravatar.save() }) @@ -441,25 +441,25 @@ describe("handleNewGravatar", () => { }) describe("handleUpdatedGravatar", () => { - test("Updates the displayName", () => { - assert.fieldEquals("Gravatar", "0x0", "displayName", "First Gravatar") + test("Atualiza o displayName", () => { + assert.fieldEquals("Gravatar", "0x0", "displayName", "Primeiro Gravatar") - // code that should update the displayName to 1st Gravatar + // código que deve mudar o displayName para 1o. Gravatar - assert.fieldEquals("Gravatar", "0x0", "displayName", "1st Gravatar") + assert.fieldEquals("Gravatar", "0x0", "displayName", "1o. 
Gravatar") }) - test("Updates the imageUrl", () => { + test("Atualiza o imageUrl", () => { assert.fieldEquals("Gravatar", "0x0", "imageUrl", "") - // code that should changes the imageUrl to https://www.gravatar.com/avatar/0x0 + // código que deve mudar o imageUrl para https://www.gravatar.com/avatar/0x0 assert.fieldEquals("Gravatar", "0x0", "imageUrl", "https://www.gravatar.com/avatar/0x0") }) }) ``` -Code inside `afterEach` will execute after each test in that describe +O código dentro do `afterEach` será executado após cada teste nesse describe ```typescript import { describe, test, beforeEach, afterEach } from "matchstick-as/assembly/index" @@ -472,7 +472,7 @@ describe("handleNewGravatar", () => { describe("handleUpdatedGravatar", () => { beforeEach(() => { let gravatar = new Gravatar("0x0") - gravatar.displayName = "First Gravatar" + gravatar.displayName = "Primeiro Gravatar" gravatar.imageUrl = "" gravatar.save() }) @@ -482,17 +482,17 @@ describe("handleUpdatedGravatar", () => { }) test("Updates the displayName", () => { - assert.fieldEquals("Gravatar", "0x0", "displayName", "First Gravatar") + assert.fieldEquals("Gravatar", "0x0", "displayName", "Primeiro Gravatar") - // code that should update the displayName to 1st Gravatar + // código que deve atualizar o displayName para 1o. Gravatar - assert.fieldEquals("Gravatar", "0x0", "displayName", "1st Gravatar") + assert.fieldEquals("Gravatar", "0x0", "displayName", "1o. 
Gravatar") }) test("Updates the imageUrl", () => { assert.fieldEquals("Gravatar", "0x0", "imageUrl", "") - // code that should changes the imageUrl to https://www.gravatar.com/avatar/0x0 + // código que deve mudar o imageUrl para https://www.gravatar.com/avatar/0x0 assert.fieldEquals("Gravatar", "0x0", "imageUrl", "https://www.gravatar.com/avatar/0x0") }) @@ -536,36 +536,36 @@ entityCount(entityType: string, expectedCount: i32) A partir da versão 0.6.0, asserts também apoiam mensagens de erro personalizadas ```typescript -assert.fieldEquals('Gravatar', '0x123', 'id', '0x123', 'Id should be 0x123') -assert.equals(ethereum.Value.fromI32(1), ethereum.Value.fromI32(1), 'Value should equal 1') -assert.notInStore('Gravatar', '0x124', 'Gravatar should not be in store') -assert.addressEquals(Address.zero(), Address.zero(), 'Address should be zero') -assert.bytesEquals(Bytes.fromUTF8('0x123'), Bytes.fromUTF8('0x123'), 'Bytes should be equal') -assert.i32Equals(2, 2, 'I32 should equal 2') -assert.bigIntEquals(BigInt.fromI32(1), BigInt.fromI32(1), 'BigInt should equal 1') -assert.booleanEquals(true, true, 'Boolean should be true') -assert.stringEquals('1', '1', 'String should equal 1') -assert.arrayEquals([ethereum.Value.fromI32(1)], [ethereum.Value.fromI32(1)], 'Arrays should be equal') +assert.fieldEquals('Gravatar', '0x123', 'id', '0x123', 'Id deve ser 0x123') +assert.equals(ethereum.Value.fromI32(1), ethereum.Value.fromI32(1), 'Valor deve ser igual a 1') +assert.notInStore('Gravatar', '0x124', 'Gravatar não deve estar armazenado') +assert.addressEquals(Address.zero(), Address.zero(), 'Address deve ser zero') +assert.bytesEquals(Bytes.fromUTF8('0x123'), Bytes.fromUTF8('0x123'), 'Bytes devem ser iguais') +assert.i32Equals(2, 2, 'I32 deve ser igual a 2') +assert.bigIntEquals(BigInt.fromI32(1), BigInt.fromI32(1), 'BigInt deve ser igual 1') +assert.booleanEquals(true, true, 'Boolean deve ser true') +assert.stringEquals('1', '1', 'String deve ser igual a 1') 
+assert.arrayEquals([ethereum.Value.fromI32(1)], [ethereum.Value.fromI32(1)], 'Arranjos devem ser iguais') assert.tupleEquals( changetype([ethereum.Value.fromI32(1)]), changetype([ethereum.Value.fromI32(1)]), - 'Tuples should be equal', + 'Tuplas devem ser iguais', ) -assert.assertTrue(true, 'Should be true') -assert.assertNull(null, 'Should be null') -assert.assertNotNull('not null', 'Should be not null') -assert.entityCount('Gravatar', 1, 'There should be 2 gravatars') -assert.dataSourceCount('GraphTokenLockWallet', 1, 'GraphTokenLockWallet template should have one data source') +assert.assertTrue(true, 'Deve ser true') +assert.assertNull(null, 'Deve ser null') +assert.assertNotNull('not null', 'Não deve ser null') +assert.entityCount('Gravatar', 1, 'Deve haver 2 Gravatars') +assert.dataSourceCount('GraphTokenLockWallet', 1, 'O template GraphTokenLockWallet deve ter uma fonte de dados') assert.dataSourceExists( 'GraphTokenLockWallet', Address.zero().toHexString(), - 'GraphTokenLockWallet should have a data source for zero address', + 'GraphTokenLockWallet deve ter uma fonte de dados para address zero', ) ``` ## Como Escrever um Teste de Unidade -Let's see how a simple unit test would look like using the Gravatar examples in the [Demo Subgraph](https://github.com/LimeChain/demo-subgraph/blob/main/src/gravity.ts). +Vamos ver como seria um simples teste unitário usando os exemplos de Gravatar no [Subgraph de Demonstração](https://github.com/LimeChain/demo-subgraph/blob/main/src/gravity.ts).
Suponhamos que temos a seguinte função de handler (com duas funções de helper para facilitar): @@ -627,23 +627,23 @@ import { NewGravatar } from '../../generated/Gravity/Gravity' import { createNewGravatarEvent, handleNewGravatars } from '../mappings/gravity' test('Can call mappings with custom events', () => { - // Create a test entity and save it in the store as initial state (optional) + // Criar uma entidade de teste e guardá-la no armazenamento como estado inicial (opcional) let gravatar = new Gravatar('gravatarId0') gravatar.save() - // Create mock events + // Criar eventos simulados let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac') let anotherGravatarEvent = createNewGravatarEvent(3546, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac') - // Call mapping functions passing the events we just created + // Chamar funções de mapeamento passando os eventos que acabamos de criar handleNewGravatars([newGravatarEvent, anotherGravatarEvent]) - // Assert the state of the store + // Verificar o estado do armazenamento assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0') assert.fieldEquals('Gravatar', '12345', 'owner', '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7') assert.fieldEquals('Gravatar', '3546', 'displayName', 'cap') - // Clear the store in order to start the next test off on a clean slate + // Limpar o armazenamento para começar o próximo teste do zero clearStore() })
The rest of it is pretty straightforward - here's what happens: +Quanta coisa! Primeiro, note que estamos a importar coisas do `matchstick-as`, a nossa biblioteca de helper do AssemblyScript (distribuída como um módulo npm). O repositório está [aqui](https://github.com/LimeChain/matchstick-as). O `matchstick-as` nos dá alguns métodos de teste úteis e define a função `test()`, que usaremos para construir os nossos blocos de teste. O resto é bem simples — veja o que acontece: - Configuramos nosso estado inicial e adicionamos uma entidade de Gravatar personalizada; -- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function; -- We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events; +- Definimos dois eventos `NewGravatar` com os seus dados, usando a função `createNewGravatarEvent()`; +- Chamamos métodos de handlers para estes eventos — `handleNewGravatars()` — e passamos a lista dos nossos eventos personalizados; - Garantimos o estado da loja. Como isto funciona? — Passamos uma combinação do tipo e da id da Entidade. Depois conferimos um campo específico naquela Entidade e garantimos que ela tem o valor que esperamos que tenha. Estamos a fazer isto tanto para a Entidade Gravatar inicial adicionada ao armazenamento, quanto para as duas entidades Gravatar adicionadas ao chamar a função de handler; -- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want. +- E por último — limpamos o armazenamento com `clearStore()`, para que o nosso próximo teste comece com um objeto de armazenamento novo em folha. Podemos definir quantos blocos de teste quisermos. Prontinho — criamos o nosso primeiro teste! 
👏 -Para executar os nossos testes, basta apenas executar o seguinte na pasta raiz do seu subgraph: +Para executar os nossos testes, basta apenas executar o seguinte na pasta raiz do seu Subgraph: `graph test Gravity` E se tudo der certo, deve receber a seguinte resposta: -![Matchstick saying “All tests passed!”](/img/matchstick-tests-passed.png) +![Matchstick diz “Todos os testes passaram!”](/img/matchstick-tests-passed.png) ## Cenários de teste comuns @@ -754,18 +754,18 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTA: Ao testar `ipfs.map/ipfs.mapJSON`, a função de callback deve ser exportada do arquivo de teste para que o matchstick a detete, como a função `processGravatar()` no exemplo de teste abaixo: -`.test.ts` file: +Arquivo `.test.ts`: ```typescript import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Exportar o callback de ipfs.map() para que o matchstick o detete export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -795,7 +795,7 @@ test('ipfs.map', () => { }) ``` -`utils.ts` file: +Arquivo `utils.ts`: ```typescript import { Address, ethereum, JSONValue, Value, ipfs, json, Bytes } from "@graphprotocol/graph-ts" @@ -857,11 +857,11 @@ gravatar.save() assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0') ``` -Running the assert.fieldEquals() function will check for equality of the given field against the given expected value. The test will fail and an error message will be outputted if the values are **NOT** equal. Otherwise the test will pass successfully. +A função assert.fieldEquals() conferirá a igualdade do campo dado contra o valor dado esperado. O teste acabará em erro, com mensagem correspondente, caso os valores **NÃO** sejam iguais. Caso contrário, o teste terá êxito. ### Como interagir com metadados de Eventos -Users can use default transaction metadata, which could be returned as an ethereum.Event by using the `newMockEvent()` function.
The following example shows how you can read/write to those fields on the Event object: +Os utilizadores podem usar metadados-padrão de transações, que podem ser retornados como um ethereum.Event com a função `newMockEvent()`. O seguinte exemplo mostra como ler e escrever estes campos no objeto de Evento: ```typescript // Leitura @@ -878,7 +878,7 @@ newGravatarEvent.address = Address.fromString(UPDATED_ADDRESS) assert.equals(ethereum.Value.fromString("hello"); ethereum.Value.fromString("hello")); ``` -### Asserting that an Entity is **not** in the store +### Como afirmar que uma Entidade **não** está no armazenamento Os utilizadores podem afirmar que uma entidade não existe no armazenamento. A função toma um tipo e uma id de entidade. Caso a entidade esteja, de facto, na loja, o teste acabará em erro, com uma mensagem de erro relevante. Veja um exemplo rápido de como usar esta funcionalidade: @@ -896,7 +896,7 @@ import { logStore } from 'matchstick-as/assembly/store' logStore() ``` -As of version 0.6.0, `logStore` no longer prints derived fields, instead users can use the new `logEntity` function. Of course `logEntity` can be used to print any entity, not just ones that have derived fields. `logEntity` takes the entity type, entity id and a `showRelated` flag to indicate if users want to print the related derived entities. +Desde a versão 0.6.0, o `logStore` não imprime mais campos derivados; em vez disto, os utilizadores podem usar a nova função `logEntity`. O `logEntity` pode ser usado para imprimir qualquer entidade, não só as que têm campos derivados. O `logEntity` pega o tipo e a ID da entidade, e um flag `showRelated` para indicar se os utilizadores querem imprimir as entidades derivadas relacionadas.
``` import { logEntity } from 'matchstick-as/assembly/store' @@ -911,7 +911,7 @@ Os utilizadores podem encontrar falhas esperadas, com o flag shouldFail nas fun ```typescript test( - 'Should throw an error', + 'Deve lançar um erro', () => { throw new Error() }, @@ -930,27 +930,27 @@ import { test } from "matchstick-as/assembly/index"; import { log } from "matchstick-as/assembly/log"; test("Success", () => { - log.success("Success!". []); + log.success("Sucesso!", []); }); test("Error", () => { - log.error("Error :( ", []); + log.error("Erro! :( ", []); }); test("Debug", () => { - log.debug("Debugging...", []); + log.debug("Debug em progresso...", []); }); test("Info", () => { - log.info("Info!", []); + log.info("Informação!", []); }); test("Warning", () => { - log.warning("Warning!", []); + log.warning("Cuidado!", []); }); ``` Os utilizadores também podem simular uma falha crítica, como no seguinte: ```typescript -test('Blow everything up', () => { - log.critical('Boom!') +test('Explodir tudo', () => { + log.critical('É boooomba!') }) ``` @@ -960,14 +960,14 @@ Logar erros críticos interromperá a execução dos testes e causará um desast Testar campos derivados permite aos utilizadores configurar um campo numa entidade e atualizar outra automaticamente, caso ela derive um dos seus campos da primeira entidade. -Before version `0.6.0` it was possible to get the derived entities by accessing them as entity fields/properties, like so: +Antes da versão `0.6.0`, era possível resgatar as entidades derivadas ao acessá-las como propriedades ou campos de entidade, como no seguinte exemplo: ```typescript let entity = ExampleEntity.load('id') let derivedEntity = entity.derived_entity ``` -As of version `0.6.0`, this is done by using the `loadRelated` function of graph-node, the derived entities can be accessed the same way as in the handlers. +Desde a versão `0.6.0`, isto é feito com a função `loadRelated` do graph-node.
As entidades derivadas podem ser acessadas da mesma forma que nos handlers. ```typescript test('Derived fields example test', () => { @@ -1009,9 +1009,9 @@ test('Derived fields example test', () => { }) ``` -### Testing `loadInBlock` +### Teste de `loadInBlock` -As of version `0.6.0`, users can test `loadInBlock` by using the `mockInBlockStore`, it allows mocking entities in the block cache. +Desde a versão `0.6.0`, é possível testar o `loadInBlock` com o `mockInBlockStore`, que permite a simulação de entidades no cache de blocos. ```typescript import { afterAll, beforeAll, describe, mockInBlockStore, test } from 'matchstick-as' @@ -1026,12 +1026,12 @@ describe('loadInBlock', () => { clearInBlockStore() }) - test('Can use entity.loadInBlock() to retrieve entity from cache store in the current block', () => { + test('Pode usar entity.loadInBlock() para retirar a entidade do armazenamento do cache no bloco atual', () => { let retrievedGravatar = Gravatar.loadInBlock('gravatarId0') assert.stringEquals('gravatarId0', retrievedGravatar!.get('id')!.toString()) }) - test("Returns null when calling entity.loadInBlock() if an entity doesn't exist in the current block", () => { + test('Retorna null ao chamar entity.loadInBlock() se uma entidade não existir no bloco atual', () => { let retrievedGravatar = Gravatar.loadInBlock('IDoNotExist') assert.assertNull(retrievedGravatar) }) @@ -1040,7 +1040,7 @@ describe('loadInBlock', () => { ### Como testar fontes de dados dinâmicas -Testing dynamic data sources can be be done by mocking the return value of the `context()`, `address()` and `network()` functions of the dataSource namespace. These functions currently return the following: `context()` - returns an empty entity (DataSourceContext), `address()` - returns `0x0000000000000000000000000000000000000000`, `network()` - returns `mainnet`. The `create(...)` and `createWithContext(...)` functions are mocked to do nothing so they don't need to be called in the tests at all.
Changes to the return values can be done through the functions of the `dataSourceMock` namespace in `matchstick-as` (version 0.3.0+). +É possível testar fontes de dados dinâmicas ao simular o valor de retorno das funções `context()`, `address()` e `network()` do namespace do dataSource. Estas funções atualmente retornam o seguinte: `context()` — retorna uma entidade vazia (DataSourceContext); `address()` — retorna `0x0000000000000000000000000000000000000000`; `network()` — retorna `mainnet`. As funções `create(...)` e `createWithContext(...)` são simuladas para não fazer nada, por isso não precisam de ser chamadas nos testes. Dá para mudar os valores de retorno através das funções do namespace `dataSourceMock` no `matchstick-as` (versão 0.3.0+). Exemplo abaixo: @@ -1070,7 +1070,7 @@ import { handleApproveTokenDestinations } from '../../src/token-lock-wallet' import { ApproveTokenDestinations } from '../../generated/templates/GraphTokenLockWallet/GraphTokenLockWallet' import { TokenLockWallet } from '../../generated/schema' -test('Data source simple mocking example', () => { +test('Exemplo simples de simulação de fonte de dados', () => { let addressString = '0xA16081F360e3847006dB660bae1c6d1b2e17eC2A' let address = Address.fromString(addressString) @@ -1097,44 +1097,44 @@ Note que o dataSourceMock.resetValues() é chamado no final. Isto ### Teste de criação de fontes de dados dinâmicas -As of version `0.6.0`, it is possible to test if a new data source has been created from a template. This feature supports both ethereum/contract and file/ipfs templates. There are four functions for this: +Desde a versão `0.6.0`, é possível testar se uma nova fonte de dados foi criada de um modelo. Este recurso apoia modelos ethereum/contract e file/ipfs.
Há quatro funções para isto: -- `assert.dataSourceCount(templateName, expectedCount)` can be used to assert the expected count of data sources from the specified template -- `assert.dataSourceExists(templateName, address/ipfsHash)` asserts that a data source with the specified identifier (could be a contract address or IPFS file hash) from a specified template was created -- `logDataSources(templateName)` prints all data sources from the specified template to the console for debugging purposes -- `readFile(path)` reads a JSON file that represents an IPFS file and returns the content as Bytes +- `assert.dataSourceCount(templateName, expectedCount)` pode ser usado para verificar a contagem esperada de fontes de dados do modelo especificado +- `assert.dataSourceExists(templateName, address/ipfsHash)` verifica que foi criada uma fonte de dados com o identificador especificado (seja um endereço de contrato ou um hash de arquivo IPFS) de um modelo especificado +- `logDataSources(templateName)` imprime todas as fontes de dados do modelo especificado ao console, para propósitos de debug +- `readFile(path)` lê um arquivo JSON que representa um arquivo IPFS e retorna o conteúdo como Bytes -#### Testing `ethereum/contract` templates +#### Teste de modelos `ethereum/contract` ```typescript test('ethereum/contract dataSource creation example', () => { - // Assert there are no dataSources created from GraphTokenLockWallet template + // Verificar que não há dataSources criadas do modelo GraphTokenLockWallet assert.dataSourceCount('GraphTokenLockWallet', 0) - // Create a new GraphTokenLockWallet datasource with address 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A + // Criar uma nova datasource GraphTokenLockWallet com o endereço 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A')) - // Assert the dataSource has been created + // Assegurar que foi criada a dataSource
assert.dataSourceCount('GraphTokenLockWallet', 1) - // Add a second dataSource with context + // Adicionar uma segunda dataSource com contexto let context = new DataSourceContext() context.set('contextVal', Value.fromI32(325)) GraphTokenLockWallet.createWithContext(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'), context) - // Assert there are now 2 dataSources + // Verificar que agora há 2 dataSources assert.dataSourceCount('GraphTokenLockWallet', 2) - // Assert that a dataSource with address "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" was created - // Keep in mind that `Address` type is transformed to lower case when decoded, so you have to pass the address as all lower case when asserting if it exists + // Verificar que foi criada uma dataSource com o endereço "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" + // Lembrar que o tipo `Address` é transformado em caixa baixa quando decodificado, então o endereço deve ser passado em caixa baixa ao verificar se existe assert.dataSourceExists('GraphTokenLockWallet', '0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'.toLowerCase()) logDataSources('GraphTokenLockWallet') }) ``` -##### Example `logDataSource` output +##### Exemplo de resultado de `logDataSource` ```bash 🛠 { @@ -1158,11 +1158,11 @@ test('ethereum/contract dataSource creation example', () => { } ``` -#### Testing `file/ipfs` templates +#### Teste de modelos `file/ipfs` -Similarly to contract dynamic data sources, users can test test file data sources and their handlers +Assim como as fontes dinâmicas de dados de contrato, os utilizadores podem testar fontes de dados de arquivos e os seus handlers -##### Example `subgraph.yaml` +##### Exemplo de `subgraph.yaml` ```yaml ...
@@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1183,7 +1183,7 @@ templates: file: ./abis/GraphTokenLockWallet.json ``` -##### Example `schema.graphql` +##### Exemplo de `schema.graphql` ```graphql """ @@ -1203,7 +1203,7 @@ type TokenLockMetadata @entity { } ``` -##### Example `metadata.json` +##### Exemplo de `metadata.json` ```json { @@ -1218,9 +1218,9 @@ type TokenLockMetadata @entity { ```typescript export function handleMetadata(content: Bytes): void { - // dataSource.stringParams() returns the File DataSource CID - // stringParam() will be mocked in the handler test - // for more info https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files + // dataSource.stringParams() retorna CID de Fonte de Dados de Arquivo + // stringParam() será simulado no teste de handler + // para saber mais https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files let tokenMetadata = new TokenLockMetadata(dataSource.stringParam()) const value = json.fromBytes(content).toObject() @@ -1253,31 +1253,32 @@ import { TokenLockMetadata } from '../../generated/schema' import { GraphTokenLockMetadata } from '../../generated/templates' test('file/ipfs dataSource creation example', () => { - // Generate the dataSource CID from the ipfsHash + ipfs path file - // For example QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json + // Gerar o CID da dataSource a partir do ipfsHash + o caminho do arquivo ipfs + // Por exemplo QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' const CID = `${ipfshash}/example.json` - // Create a new dataSource using the generated CID + // Criar uma nova dataSource com o CID gerado GraphTokenLockMetadata.create(CID) - // Assert the dataSource has been created
+ // Verificar se foi criada a dataSource assert.dataSourceCount('GraphTokenLockMetadata', 1) assert.dataSourceExists('GraphTokenLockMetadata', CID) logDataSources('GraphTokenLockMetadata') - // Now we have to mock the dataSource metadata and specifically dataSource.stringParam() - // dataSource.stringParams actually uses the value of dataSource.address(), so we will mock the address using dataSourceMock from matchstick-as - // First we will reset the values and then use dataSourceMock.setAddress() to set the CID + // Agora temos que simular os metadados da dataSource, e especificamente dataSource.stringParam() + // dataSource.stringParams usa o valor de dataSource.address(), então vamos simular o endereço com dataSourceMock de matchstick-as + // Primeiro, vamos reiniciar os valores e usar dataSourceMock.setAddress() para configurar o CID dataSourceMock.resetValues() dataSourceMock.setAddress(CID) - // Now we need to generate the Bytes to pass to the dataSource handler - // For this case we introduced a new function readFile, that reads a local json and returns the content as Bytes + // Agora precisamos gerar os Bytes para passar para o handler da dataSource + // Para este caso, apresentamos uma nova função readFile, que lê um json local e retorna o conteúdo como Bytes const content = readFile(`path/to/metadata.json`) handleMetadata(content) - // Now we will test if a TokenLockMetadata was created + // Agora vamos testar se foi criado um TokenLockMetadata const metadata = TokenLockMetadata.load(CID) assert.bigIntEquals(metadata!.endTime, BigInt.fromI32(1)) @@ -1289,29 +1290,29 @@ test('file/ipfs dataSource creation example', () => { ## Cobertura de Testes -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Com o **Matchstick**, os programadores de Subgraphs podem executar um script que calculará a cobertura de testes dos testes unitários escritos.
-The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. +A ferramenta de cobertura de testes pega os binários de teste `wasm` compilados e os converte a arquivos `wat`, que podem então ser facilmente vistoriados para ver se os handlers definidos em `subgraph.yaml` foram chamados ou não. Como a cobertura de código (e os testes em geral) está num estado primitivo no AssemblyScript e WebAssembly, o **Matchstick** não pode procurar por coberturas de branch. Em vez disto, supomos que, se um handler foi chamado, o evento/a função correspondente já foi simulado com êxito. -### Prerequisites +### Pré-requisitos -To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: +Para executar a funcionalidade da cobertura de teste fornecida no **Matchstick**, prepare algumas coisas com antecedência: #### Exportar seus handlers -In order for **Matchstick** to check which handlers are being run, those handlers need to be exported from the **test file**. So for instance in our example, in our gravity.test.ts file we have the following handler being imported: +Para que o **Matchstick** confira quais handlers serão executados, estes handlers devem ser exportados do **arquivo de teste** primeiro. 
No nosso exemplo, temos o seguinte handler a ser importado no nosso arquivo gravity.test.ts: ```typescript import { handleNewGravatar } from '../../src/gravity' ``` -In order for that function to be visible (for it to be included in the `wat` file **by name**) we need to also export it, like this: +Para que essa função seja visível (para ser incluída no arquivo `wat` **por nome**), também precisamos exportá-la assim: ```typescript export { handleNewGravatar } ``` -### Usage +### Uso Assim que tudo estiver pronto, para executar a ferramenta de cobertura de testes, basta: @@ -1319,7 +1320,7 @@ Assim que tudo estiver pronto, para executar a ferramenta de cobertura de testes graph test -- -c ``` -You could also add a custom `coverage` command to your `package.json` file, like so: +Um comando `coverage` personalizado também pode ser adicionado ao seu arquivo `package.json`, assim: ```typescript "scripts": { @@ -1371,7 +1372,7 @@ Global test coverage: 22.2% (2/9 handlers). A saída do log inclui a duração do teste. Veja um exemplo: -`[Thu, 31 Mar 2022 13:54:54 +0300] Program executed in: 42.270ms.` +`[Quinta, 31 Mar 2022 13:54:54 +0300] Programa executado em: 42.270ms.` ## Erros comuns do compilador @@ -1380,7 +1381,7 @@ A saída do log inclui a duração do teste. Veja um exemplo: > wasi_snapshot_preview1::fd_write has not been defined > -This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/subgraphs/developing/creating/graph-ts/api/#logging-api) +Isso significa que você usou `console.log` no seu código, que não é apoiado pelo AssemblyScript. Por favor, considere usar a [API de registo](/subgraphs/developing/creating/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. 
> @@ -1401,11 +1402,11 @@ This means you have used `console.log` in your code, which is not supported by A > > in ~lib/matchstick-as/assembly/defaults.ts(24,12) -The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. +A diferença nos argumentos é causada pela diferença no `graph-ts` e no `matchstick-as`. Problemas como este são mais bem resolvidos ao atualizar tudo para a versão mais recente. ## Outros Recursos -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +Para mais apoio, confira este [repositório de Subgraph de demonstração que usa o Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/pt/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/pt/subgraphs/developing/deploying/multiple-networks.mdx index 7164b6d5a83c..1a1aca2c7b9e 100644 --- a/website/src/pages/pt/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/pt/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,30 +1,31 @@ --- -title: Deploying a Subgraph to Multiple Networks +title: Como Implantar um Subgraph em Várias Redes +sidebarTitle: Como Implantar em Várias Redes --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +Esta página explica como implantar um Subgraph em várias redes. Para implantar um Subgraph, primeiro é necessário instalar a [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). Se ainda não criou um Subgraph, veja [Como criar um Subgraph](/developing/creating-a-subgraph/).
-## Como lançar o subgraph a várias redes +## Como implantar o Subgraph em várias redes -Em alguns casos, irá querer lançar o mesmo subgraph a várias redes sem duplicar o seu código completo. O grande desafio nisto é que os endereços de contrato nestas redes são diferentes. +Em alguns casos, irá querer implantar o mesmo Subgraph em várias redes sem duplicar todo o seu código. O principal desafio disto é que os endereços de contrato nestas redes são diferentes. -### Using `graph-cli` +### Como usar o `graph-cli` -Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: +Tanto o `graph build` (desde a `v0.29.0`) quanto o `graph deploy` (desde a `v0.32.0`) aceitam duas novas opções: ```sh Options: ... - --network Network configuration to use from the networks config file - --network-file Networks config file path (default: "./networks.json") + --network Configuração de rede para usar no arquivo de config de redes + --network-file Local do arquivo de config de redes (padrão: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +Pode usar a opção `--network` para especificar uma configuração de rede a partir de um arquivo `json` padrão (por predefinição, `networks.json`), para atualizar facilmente o seu Subgraph durante o desenvolvimento. -> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. +> Nota: O comando `init` agora irá gerar um `networks.json` automaticamente, com base na informação fornecida. Daí, será possível atualizar redes existentes ou adicionar redes novas.
-If you don't have a `networks.json` file, you'll need to manually create one with the following structure: +Caso não tenha um arquivo `networks.json`, você deve criar o mesmo manualmente, com a seguinte estrutura: ```json { @@ -52,9 +53,9 @@ If you don't have a `networks.json` file, you'll need to manually create one wit } ``` -> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. +> Nota: Não é necessário especificar quaisquer dos `templates` (se tiver) no arquivo de configuração, apenas as `dataSources`. Se houver `templates` declarados no arquivo `subgraph.yaml`, sua rede será automaticamente atualizada à especificada na opção `--network`. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Agora, vamos supor que queira implantar o seu Subgraph nas redes `mainnet` e `sepolia`, e que este é o seu `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file local/do/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +O comando `build` atualizará o seu `subgraph.yaml` com a configuração da `sepolia` e depois recompilará o Subgraph. O seu arquivo `subgraph.yaml` agora deve ficar assim: ```yaml # ... @@ -111,9 +112,9 @@ dataSources: kind: ethereum/events ``` -Now you are ready to `yarn deploy`. +Agora está tudo pronto para executar o `yarn deploy`.
-> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: +> Nota: Como anteriormente mencionado, desde o `graph-cli 0.32.0`, dá para executar diretamente o `yarn deploy` com a opção `--network`: ```sh # Usar o arquivo networks.json padrão @@ -125,9 +126,9 @@ yarn deploy --network sepolia --network-file local/do/config ### Como usar o template subgraph.yaml -One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +Uma forma de parametrizar aspetos, como endereços de contratos, com versões mais antigas de `graph-cli` é gerar partes do manifest com um sistema de modelos como o [Mustache](https://mustache.github.io/) ou o [Handlebars](https://handlebarsjs.com/). -Por exemplo, vamos supor que um subgraph deve ser lançado à mainnet e à Sepolia, através de diferentes endereços de contratos. Então, seria possível definir dois arquivos de config ao fornecer os endereços para cada rede: +Para ilustrar este método, vamos supor que um Subgraph deva ser implantado na mainnet e na Sepolia com diferentes endereços de contrato. Poderia então definir dois arquivos de config com os endereços de cada rede: ```json { @@ -145,7 +146,7 @@ e } ``` -Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: +Além disso, dá para substituir o nome da rede e os endereços no manifest com os espaços reservados `{{network}}` e `{{address}}` e renomear o manifest para, por exemplo, `subgraph.template.yaml`: ```yaml # ...
@@ -162,7 +163,7 @@ dataSources: kind: ethereum/events ``` -In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: +Para poder gerar um manifest para uma rede, pode-se adicionar mais dois comandos ao `package.json` com uma dependência no `mustache`: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -Para lançar este subgraph à mainnet ou à Sepolia, apenas um dos seguintes comandos precisaria ser executado: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -189,29 +190,29 @@ yarn prepare:mainnet && yarn deploy yarn prepare:sepolia && yarn deploy ``` -A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). +Veja um exemplo funcional [aqui](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). -**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. +**Observe:** Este método também pode ser aplicado a situações mais complexas, onde é necessário substituir mais que endereços de contratos e nomes de redes, ou gerar mapeamentos e ABIs de templates também. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. 
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Política de arqivamento do Subgraph Studio +## Subgraph Studio Subgraph archive policy -Uma versão de subgraph no Studio é arquivada se, e apenas se, atender aos seguintes critérios: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - A versão não foi publicada na rede (ou tem a publicação pendente) - A versão foi criada há 45 dias ou mais -- O subgraph não foi consultado em 30 dias +- The Subgraph hasn't been queried in 30 days -Além disto, quando uma nova versão é editada, se o subgraph ainda não foi publicado, então a versão N-2 do subgraph é arquivada. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Todos os subgraphs afetados por esta política têm a opção de trazer de volta a versão em questão. +Every Subgraph affected with this policy has an option to bring the version in question back. -## Como conferir a saúde do subgraph +## Checking Subgraph health -Se um subgraph for sincronizado com sucesso, isto indica que ele continuará a rodar bem para sempre. Porém, novos gatilhos na rede podem revelar uma condição de erro não testada, ou ele pode começar a se atrasar por problemas de desempenho ou com os operadores de nodes. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. 
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
diff --git a/website/src/pages/pt/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/pt/subgraphs/developing/deploying/using-subgraph-studio.mdx index d9e9be3f83e9..5a8e4fb9f905 100644 --- a/website/src/pages/pt/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/pt/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -1,39 +1,39 @@ --- -title: Deploying Using Subgraph Studio +title: Como Implantar com o Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
-## Subgraph Studio Overview +## Visão Geral do Subgraph Studio -In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: +No [Subgraph Studio](https://thegraph.com/studio/), você pode fazer o seguinte: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Criar e gerir as suas chaves de API para subgraphs específicos -- Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network -- Manage your billing +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs +- Restringir as suas chaves de API a domínios específicos e permitir que apenas certos indexadores façam queries com elas +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network +- Gerir o seu faturamento -## Install The Graph CLI +## Instalar a CLI do The Graph -Before deploying, you must install The Graph CLI. +Antes de implantar, você deve instalar a Graph CLI (CLI do The Graph). -You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use The Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +É necessário ter [Node.js](https://nodejs.org/) e um gerenciador de pacotes da sua escolha (`npm`, `yarn` ou `pnpm`) instalados, para utilizar a Graph CLI. 
Verifique a versão [mais recente](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) da CLI. -### Install with yarn +### Instalação com o yarn ```bash yarn global add @graphprotocol/graph-cli ``` -### Install with npm +### Instalação com o npm ```bash npm install -g @graphprotocol/graph-cli @@ -41,97 +41,91 @@ npm install -g @graphprotocol/graph-cli ## Como Começar -1. Open [Subgraph Studio](https://thegraph.com/studio/). -2. Connect your wallet to sign in. - - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +1. Abra o [Subgraph Studio](https://thegraph.com/studio/). +2. Conecte a sua carteira para fazer login. + - É possível fazer isso via MetaMask, Carteira da Coinbase, WalletConnect, ou Safe. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. -> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### Como Criar um Subgraph no Subgraph Studio -> For additional written detail, review the [Quick Start](/subgraphs/quick-start/). +> Para mais detalhes, consulte o [Guia de Início Rápido](/subgraphs/quick-start/). ### Compatibilidade de Subgraph com a Graph Network -Para ter apoio de Indexadores na Graph Network, os subgraphs devem: +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). 
For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. -- Index a [supported network](/supported-networks/) -- Não deve usar quaisquer das seguintes características: - - ipfs.cat & ipfs.map - - Erros não-fatais - - Enxerto +## Como inicializar o seu Subgraph -## Initialize Your Subgraph - -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. ## Autenticação -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. 
-Then, use the following command to authenticate from the CLI: +Em seguida, use o seguinte comando para autenticar a partir da CLI: ```bash graph auth ``` -## Deploying a Subgraph +## Como Implantar um Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy ``` -After running this command, the CLI will ask for a version label. +Após executar este comando, a CLI solicitará um número de versão. -- It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as `v1`, `version1`, or `asdf`. -- The labels you create will be visible in Graph Explorer and can be used by curators to decide if they want to signal on a specific version or not, so choose them wisely. +- É altamente recomendado usar o [semver](https://semver.org/) para números de versão, como `0.0.1`. Dito isto, dá para escolher qualquer string como versão, por exemplo: `v1`, `version1`, `asdf`. +- Os nomes de versão criados serão visíveis no Graph Explorer, e podem ser usados pelos curadores para decidir se querem ou não sinalizar numa versão específica, então escolha com sabedoria. 
-## Testing Your Subgraph +## Como Testar o Seu Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. -## Publish Your Subgraph +## Edite o Seu Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -## Versioning Your Subgraph with the CLI +## Como Fazer Versões do Seu Subgraph com a CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: -- You can deploy a new version to Studio using the CLI (it will only be private at this point). -- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- Você pode implantar uma nova versão para o Studio com a CLI (no momento, só será privada). +- Quando o resultado estiver satisfatório, você poderá editar a sua nova implantação para o [Graph Explorer](https://thegraph.com/explorer). +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. 
-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Arquivamento Automático de Versões de Subgraphs -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. 
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. -![Subgraph Studio - Unarchive](/img/Unarchive.png) +![Subgraph Studio — Tirar Arquivo](/img/Unarchive.png) diff --git a/website/src/pages/pt/subgraphs/developing/developer-faq.mdx b/website/src/pages/pt/subgraphs/developing/developer-faq.mdx index 94f963a2fa3a..8878494e4c34 100644 --- a/website/src/pages/pt/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/pt/subgraphs/developing/developer-faq.mdx @@ -1,71 +1,71 @@ --- -title: Developer FAQ -sidebarTitle: FAQ +title: Perguntas frequentes do programador +sidebarTitle: Perguntas Frequentes --- -This page summarizes some of the most common questions for developers building on The Graph. +Esta página resume algumas das perguntas mais comuns para programadores que trabalham no The Graph. -## Subgraph Related +## Perguntas sobre Subgraphs -### 1. O que é um subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. 
What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Posso mudar a conta do GitHub associada ao meu subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. 
Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Deve relançar o subgraph, mas se a ID do subgraph (hash IPFS) não mudar, ele não precisará sincronizar do começo. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). +Veja o estado de `Acesso ao contrato inteligente` dentro da secção [API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? -Not currently, as mappings are written in AssemblyScript. 
+Atualmente não, pois os mapeamentos são escritos em AssemblyScript. -One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +Uma solução alternativa possível é armazenar dados brutos em entidades e executar uma lógica que exige bibliotecas de JS no cliente. -### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 9. Ao escutar vários contratos, é possível selecionar a ordem do contrato para escutar eventos? -Dentro de um subgraph, os eventos são sempre processados na ordem em que aparecem nos blocos, mesmo sendo ou não através de vários contratos. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. -### 10. How are templates different from data sources? +### 10. Quais são as diferenças entre modelos e fontes de dados? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. -Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). 
+Confira a secção "Como instanciar um modelo de fonte de dados" em: [Modelos de Fonte de Dados](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. +Sim. No comando `graph init`, pode-se adicionar várias dataSources ao inserir um contrato após o outro. -You can also use `graph add` command to add a new dataSource. +O comando `graph add` também pode adicionar uma nova dataSource. -### 12. In what order are the event, block, and call handlers triggered for a data source? +### 12. Em qual ordem os handlers de evento, bloco, e chamada são ativados para uma fonte de dados? Primeiro, handlers de eventos e chamadas são organizados pelo índice de transações dentro do bloco. Handlers de evento e chamada dentro da mesma transação são organizados com uma convenção: handlers de eventos primeiro e depois handlers de chamadas, com cada tipo a respeitar a ordem em que são definidos no manifest. Handlers de blocos são executados após handlers de eventos e chamadas, na ordem em que são definidos no manifest. Estas regras de organizações estão sujeitas a mudanças. Com a criação de novas fontes de dados dinâmicas, os handlers definidos para fontes de dados dinâmicas só começarão a processar após o processamento dos handlers das fontes, e se repetirão na mesma sequência sempre que acionados. -### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 13. 
Como garantir que estou a usar a versão mais recente do graph-node para as minhas implantações locais? Podes executar o seguinte comando: @@ -73,25 +73,25 @@ Podes executar o seguinte comando: docker pull graphprotocol/graph-node:latest ``` -> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. +> Observação: O docker / docker-compose sempre usará a versão do graph-node que foi puxada na primeira vez que o executou, então é importante fazer isto para garantir que está em dia com a versão mais recente do graph-node. -### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. Qual é a forma recomendada de construir ids "autogeradas" para uma entidade ao lidar com eventos? -If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. +Se só uma entidade for criada durante o evento e não houver nada melhor disponível, então o hash da transação + o índice do registo seria único. Esses podem ser ofuscados ao converter em Bytes e então passar pelo `crypto.keccak256`, mas isto não os tornará mais únicos. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. -## Network Related +## Perguntas sobre Rede -### 16. What networks are supported by The Graph? +### 16. Quais redes são apoiadas pelo The Graph? 
-You can find the list of the supported networks [here](/supported-networks/). +Veja a lista das redes apoiadas [aqui](/supported-networks/). -### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? +### 17. É possível diferenciar entre redes (mainnet, Sepolia, local) dentro de handlers de eventos? -Yes. You can do this by importing `graph-ts` as per the example below: +Sim. Isto é possível ao importar o `graph-ts` como no exemplo abaixo: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -100,21 +100,21 @@ dataSource.network() dataSource.address() ``` -### 18. Do you support block and call handlers on Sepolia? +### 18. Vocês apoiam handlers de bloco e de chamadas no Sepolia? Sim. O Sepolia apoia handlers de blocos, chamadas e eventos. Vale notar que handlers de eventos têm desempenho muito melhor do que os outros dois e têm apoio em todas as redes compatíveis com EVMs. -## Indexing & Querying Related +## Perguntas sobre Indexação e Queries -### 19. Is it possible to specify what block to start indexing on? +### 19. É possível especificar o bloco de onde a indexação deve começar? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) +Sim. O `dataSources.source.startBlock` no arquivo `subgraph.yaml` especifica o número do bloco de onde a fonte de dados começa a indexar. Geralmente, sugerimos usar o bloco em que o contrato foi criado: [Blocos de início](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My Subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) +Sim. Confira o recurso opcional de bloco inicial (start block) para começar a indexar do bloco em que o contrato foi lançado: [Blocos iniciais](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Sim! Execute o seguinte comando, com "organization/subgraphName" substituído com a organização sob a qual ele foi publicado e o nome do seu subgraph: @@ -122,25 +122,25 @@ Sim! Execute o seguinte comando, com "organization/subgraphName" substituído co curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 22. Is there a limit to how many objects The Graph can return per query? +### 22. Há um limite de quantos objetos o Graph pode retornar por query? -By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: +Normalmente, respostas a queries são limitadas a 100 itens por coleção. Se quiser receber mais, pode subir para até 1000 itens por coleção; além disto, pode paginar com: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. 
Se a frontend do meu dApp usa o The Graph para queries, eu preciso escrever a minha chave de API diretamente na frontend? E se pagarmos taxas de query para utilizadores — algum utilizador malicioso pode aumentar demais estas taxas? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## Miscellaneous +## Outras Perguntas -### 24. Is it possible to use Apollo Federation on top of graph-node? +### 24. É possível usar a Apollo Federation juntamente ao graph-node? -Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. +Ainda não há apoio ao Federation. No momento, é possível costurar schemas, seja no cliente ou via um serviço de proxy. -### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? +### 25. Quero contribuir ou adicionar um problema no GitHub. Onde posso encontrar os repositórios de código aberto? 
- [graph-node](https://github.com/graphprotocol/graph-node) - [graph-tooling](https://github.com/graphprotocol/graph-tooling) diff --git a/website/src/pages/pt/subgraphs/developing/introduction.mdx b/website/src/pages/pt/subgraphs/developing/introduction.mdx index e550867e2244..e7a5cdd3cc56 100644 --- a/website/src/pages/pt/subgraphs/developing/introduction.mdx +++ b/website/src/pages/pt/subgraphs/developing/introduction.mdx @@ -1,31 +1,31 @@ --- -title: Introduction to Subgraph Development +title: Introdução à Programação de Subgraphs sidebarTitle: Introdução --- -To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start/). +Para começar a programar imediatamente, confira o [Guia de Início Rápido do Programador](/subgraphs/quick-start/). ## Visão geral -As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. +Todo programador precisa de dados para criar e melhorar o seu dapp (aplicativo descentralizado). Consultar e indexar dados da blockchain é desafiador, mas o The Graph fornece uma solução para este problema. -On The Graph, you can: +Com o The Graph, você pode: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### O Que é a GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. 
-### Developer Actions +### Ações de Programador -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/pt/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/pt/subgraphs/developing/managing/deleting-a-subgraph.mdx index d5305fe2cfbe..49cb207e435e 100644 --- a/website/src/pages/pt/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/pt/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -1,31 +1,31 @@ --- -title: Deleting a Subgraph +title: Como Apagar um Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). 
-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Passo a Passo -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). -2. Click on the three-dots to the right of the "publish" button. +2. Clique nos três pontos à direita do botão "publish" (editar). -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. -### Important Reminders +### Lembretes Importantes -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. 
However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Os curadores não poderão mais sinalizar no subgraph depreciado. -- Curadores que já sinalizaram no subgraph poderão retirar a sua sinalização a um preço de ação normal. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/pt/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/pt/subgraphs/developing/managing/transferring-a-subgraph.mdx index 1931370a6df7..7f4ead265671 100644 --- a/website/src/pages/pt/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/pt/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -1,19 +1,19 @@ --- -title: Transferring a Subgraph +title: Transferências de Subgraphs --- -Subgraphs publicados na rede descentralizada terão um NFT mintado no endereço que publicou o subgraph. O NFT é baseado no padrão ERC-721, que facilita transferências entre contas na Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. -## Reminders +## Lembretes -- O dono do NFT controla o subgraph. -- Se o dono atual decidir vender ou transferir o NFT, ele não poderá mais editar ou atualizar aquele subgraph na rede. -- É possível transferir o controle de um subgraph para uma multisig. -- Um membro da comunidade pode criar um subgraph no nome de uma DAO. 
+- Whoever owns the NFT controls the Subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network.
+- You can easily move control of a Subgraph to a multi-sig.
+- A community member can create a Subgraph on behalf of a DAO.

-## View Your Subgraph as an NFT
+## Como visualizar o seu subgraph como um NFT

-Para visualizar o seu subgraph como um NFT, visite um mercado de NFTs como o **OpenSea**:
+To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:

```
https://opensea.io/your-wallet-address
@@ -27,15 +27,15 @@
https://rainbow.me/your-wallet-address
```

## Passo a Passo

-Para transferir a titularidade de um subgraph, faça o seguinte:
+To transfer ownership of a Subgraph, do the following:

1. Use a interface embutida no Subgraph Studio:

   ![Transferência de Titularidade de Subgraph](/img/subgraph-ownership-transfer-1.png)

-2. Escolha o endereço para o qual gostaria de transferir o subgraph:
+2. 
Choose the address that you would like to transfer the Subgraph to: - ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) + ![Transferência de Titularidade de Subgraph](/img/subgraph-ownership-transfer-2.png) Também é possível usar a interface embutida de mercados de NFT, como o OpenSea: diff --git a/website/src/pages/pt/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/pt/subgraphs/developing/publishing/publishing-a-subgraph.mdx index ad08b1c68cf8..1d25ded18a61 100644 --- a/website/src/pages/pt/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/pt/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,49 +1,50 @@ --- title: Como Editar um Subgraph na Rede Descentralizada +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -Ao editar um subgraph à rede descentralizada, ele será disponibilizado para: +When you publish a Subgraph to the decentralized network, you make it available for: -- [Curators](/resources/roles/curating/) to begin curating it. -- [Indexers](/indexing/overview/) to begin indexing it. +- [Curadores](/resources/roles/curating/), para começarem a curadoria. +- [Indexadores](/indexing/overview/), para começarem a indexação. -Check out the list of [supported networks](/supported-networks/). +Veja a lista das redes apoiadas [aqui](/supported-networks/). ## Edição do Subgraph Studio -1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard -2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +1. 
Entre no painel de controlo do [Subgraph Studio](https://thegraph.com/studio/) +2. Clique no botão **Publish** (Editar) +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -Todas as versões editadas de um subgraph existente podem: +All published versions of an existing Subgraph can: -- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). +- Ser editados no Arbitrum One. [Saiba mais sobre The Graph Network no Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Como atualizar metadados para um subgraph editado +### Updating metadata for a published Subgraph -- Após editar o seu subgraph à rede descentralizada, será possível editar os metadados a qualquer hora no Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Após salvar as suas mudanças e publicar as atualizações, elas aparecerão no Graph Explorer. - É importante notar que este processo não criará uma nova versão, já que a sua edição não terá mudado. ## Publicação da CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -1. Open the `graph-cli`. -2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. Uma janela será aberta para o programador conectar a sua carteira, adicionar metadados e lançar o seu subgraph finalizado a uma rede de sua escolha. +1. Abra a `graph-cli`. +2. 
Use os seguintes comandos: `graph codegen && graph build` e depois `graph publish`. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) ### Como personalizar o seu lançamento -É possível enviar a sua build a um node IPFS específico e personalizar ainda mais o seu lançamento com as seguintes flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -51,44 +52,44 @@ USAGE ] FLAGS - -h, --help Show CLI help. - -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node. - --ipfs-hash= IPFS hash of the subgraph manifest to deploy. - --protocol-network=
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.

-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.

-#### Removing subgraphs
+#### Removing Subgraphs

> This is new functionality, which will be available in Graph Node 0.29.x

-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/ro/indexing/tooling/graphcast.mdx b/website/src/pages/ro/indexing/tooling/graphcast.mdx index cac63bbd9340..461fe3852377 100644 --- a/website/src/pages/ro/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ro/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Află mai multe diff --git a/website/src/pages/ro/resources/benefits.mdx b/website/src/pages/ro/resources/benefits.mdx index 6e698c54af73..8b6c8a74c0a6 100644 --- a/website/src/pages/ro/resources/benefits.mdx +++ b/website/src/pages/ro/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ro/resources/glossary.mdx b/website/src/pages/ro/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/ro/resources/glossary.mdx +++ b/website/src/pages/ro/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/ro/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ro/resources/migration-guides/assemblyscript-migration-guide.mdx index 85f6903a6c69..aead2514ff51 100644 --- a/website/src/pages/ro/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ro/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Features @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## How to upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to do an early if statement with a return in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing. ### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first.
```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +It will compile but break at runtime, which happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/ro/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ro/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/ro/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ro/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. -> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## Migration CLI tool diff --git a/website/src/pages/ro/resources/roles/curating.mdx b/website/src/pages/ro/resources/roles/curating.mdx index 1cc05bb7b62f..a228ebfb3267 100644 --- a/website/src/pages/ro/resources/roles/curating.mdx +++ b/website/src/pages/ro/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. 
In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## How to Signal -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. 
Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risks 1. 
The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. 
A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Curation FAQs ### 1. What % of query fees do Curators earn? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. 
A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details.
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Can I sell my curation shares? diff --git a/website/src/pages/ro/resources/subgraph-studio-faq.mdx b/website/src/pages/ro/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/ro/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ro/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. 
How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries, made with the new API key, are paid queries like any other on the network. diff --git a/website/src/pages/ro/resources/tokenomics.mdx b/website/src/pages/ro/resources/tokenomics.mdx index 4a9b42ca6e0d..dac3383a28e7 100644 --- a/website/src/pages/ro/resources/tokenomics.mdx +++ b/website/src/pages/ro/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Overview -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data.
It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Backbone of blockchain data @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. 
While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. 
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. 
Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. 
Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/ro/sps/introduction.mdx b/website/src/pages/ro/sps/introduction.mdx index b11c99dfb8e5..92d8618165dd 100644 --- a/website/src/pages/ro/sps/introduction.mdx +++ b/website/src/pages/ro/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1.
**Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ro/sps/sps-faq.mdx b/website/src/pages/ro/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/ro/sps/sps-faq.mdx +++ b/website/src/pages/ro/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. 
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph.
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? 
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. 
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/ro/sps/triggers.mdx b/website/src/pages/ro/sps/triggers.mdx index 816d42cb5f12..66687aa21889 100644 --- a/website/src/pages/ro/sps/triggers.mdx +++ b/website/src/pages/ro/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). 
### Additional Resources diff --git a/website/src/pages/ro/sps/tutorial.mdx b/website/src/pages/ro/sps/tutorial.mdx index 55e563608bce..7358f8c02a20 100644 --- a/website/src/pages/ro/sps/tutorial.mdx +++ b/website/src/pages/ro/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Get Started @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/ro/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ro/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/ro/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. 
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ro/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ro/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/ro/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
diff --git a/website/src/pages/ro/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ro/subgraphs/best-practices/grafting-hotfix.mdx index d514e1633c75..674cf6b87c62 100644 --- a/website/src/pages/ro/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ro/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ro/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/ro/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
diff --git a/website/src/pages/ro/subgraphs/best-practices/pruning.mdx b/website/src/pages/ro/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/ro/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ro/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ro/subgraphs/best-practices/timeseries.mdx index cacdc44711fe..9732199531a8 100644 --- a/website/src/pages/ro/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ro/subgraphs/billing.mdx b/website/src/pages/ro/subgraphs/billing.mdx index c9f380bb022c..ec654ca63f55 100644 --- a/website/src/pages/ro/subgraphs/billing.mdx +++ b/website/src/pages/ro/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/ro/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ro/subgraphs/developing/creating/advanced.mdx index ee9918f5f254..8dbc48253034 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Overview -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data.
-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
 
 ### How Topic Filters Work
 
-When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.
+When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments.
 
 - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.
@@ -401,7 +401,7 @@ In this example:
 
 #### Configuration in Subgraphs
 
-Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:
+Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured:
 
 ```yaml
 eventHandlers:
@@ -436,7 +436,7 @@ In this configuration:
 
 - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
 - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
 
 #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses
@@ -452,17 +452,17 @@ In this configuration:
 
 - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
 - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.
-- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
 
 ## Declared eth_call
 
 > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node.
 
-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
 
 This feature does the following:
 
-- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.
 - Allows faster data fetching, resulting in quicker query responses and a better user experience.
 - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
@@ -474,7 +474,7 @@ This feature does the following:
 
 #### Scenario without Declarative `eth_calls`
 
-Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
+Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
 
 Traditionally, these calls might be made sequentially:
@@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds
 
 #### How it Works
 
-1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
+1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
-3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.
+3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing.
 
 #### Example Configuration in Subgraph Manifest
 
 Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.
 
-`Subgraph.yaml` using `event.address`:
+`subgraph.yaml` using `event.address`:
 
 ```yaml
 eventHandlers:
@@ -524,7 +524,7 @@ Details for the example above:
 
 - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`
 - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`
 
 ```yaml
 calls:
@@ -535,22 +535,22 @@
 > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).
 
-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed.
 
-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
 
 ```yaml
 description: ...
 graft:
-  base: Qm... # Subgraph ID of base subgraph
+  base: Qm... # Subgraph ID of base Subgraph
   block: 7345624 # Block number
 ```
 
-When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph.
 
-Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
 
-The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways:
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
 
 - It adds or removes entity types
 - It removes attributes from entity types
@@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o
 - It adds or removes interfaces
 - It changes for which entity types an interface is implemented
 
-> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest.
+> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest.
diff --git a/website/src/pages/ro/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ro/subgraphs/developing/creating/assemblyscript-mappings.mdx
index 2ac894695fe1..cd81dc118f28 100644
--- a/website/src/pages/ro/subgraphs/developing/creating/assemblyscript-mappings.mdx
+++ b/website/src/pages/ro/subgraphs/developing/creating/assemblyscript-mappings.mdx
@@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t
 
 For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
 
 ```javascript
 import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
@@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil
 
 ## Code Generation
 
-In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources.
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources.
 
 This is done with
@@ -80,7 +80,7 @@ This is done with
 graph codegen [--output-dir <OUTPUT_DIR>] [<MANIFEST>]
 ```
 
-but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
 
 ```sh
 # Yarn
@@ -90,7 +90,7 @@ yarn codegen
 npm run codegen
 ```
 
-This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
 
 ```javascript
 import {
@@ -102,12 +102,12 @@ import {
 } from '../generated/Gravity/Gravity'
 ```
 
-In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with
+In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with
 
 ```javascript
 import { Gravatar } from '../generated/schema'
 ```
 
-> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph.
 
-Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
diff --git a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5d90888ac378..5f964d3cbb78 100644
--- a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
 # @graphprotocol/graph-ts
 
+## 0.38.0
+
+### Minor Changes
+
+- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings
+
 ## 0.37.0
 
 ### Minor Changes
diff --git a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/api.mdx
index 35bb04826c98..5be2530c4d6b 100644
--- a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/api.mdx
@@ -2,12 +2,12 @@ title: AssemblyScript API
 ---
 
-> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
+> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
 
-Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box:
+Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box:
 
 - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`)
-- Code generated from subgraph files by `graph codegen`
+- Code generated from Subgraph files by `graph codegen`
 
 You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).
@@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
 
 ### Versions
 
-The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
 
 | Version | Release notes |
 | :-: | --- |
@@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts'
 
 The `store` API allows to load, save and remove entities from and to the Graph Node store.
 
-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
 
 #### Creating entities
@@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco
 
 The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists.
 
-- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time.
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
+- For some Subgraphs, these missed lookups can contribute significantly to the indexing time.
 
 ```typescript
 let id = event.transaction.hash // or however the ID is constructed
@@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con
 
 #### Support for Ethereum Types
 
-As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
 
-With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them.
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them.
 
-The following example illustrates this. Given a subgraph schema like
+The following example illustrates this. Given a Subgraph schema like
 
 ```graphql
 type Transfer @entity {
@@ -483,7 +483,7 @@ class Log {
 
 #### Access to Smart Contract State
 
-The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block.
+The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block.
 
 A common pattern is to access the contract from which an event originates. This is achieved with the following code:
@@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) {
 
 As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically.
 
-Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address.
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address.
 
 #### Handling Reverted Calls
@@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false
 
 import { log } from '@graphprotocol/graph-ts'
 ```
 
-The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument.
+The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument.
 
 The `log` API includes the following functions:
@@ -590,7 +590,7 @@ The `log` API includes the following functions:
 
 - `log.info(fmt: string, args: Array<string>): void` - logs an informational message.
 - `log.warning(fmt: string, args: Array<string>): void` - logs a warning.
 - `log.error(fmt: string, args: Array<string>): void` - logs an error message.
-- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the subgraph.
+- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the Subgraph.
 
 The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on.
@@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId'))
 
 The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited.
 
-On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed.
+On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed.
 
 ### Crypto API
@@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to
 
 ### DataSourceContext in Manifest
 
-The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
 
 Here is a YAML example illustrating the usage of various types in the `context` section:
@@ -887,4 +887,4 @@ dataSources:
 - `List`: Specifies a list of items. Each item needs to specify its type and data.
 - `BigInt`: Specifies a large integer value. Must be quoted due to its large size.
 
-This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs.
+This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs.
diff --git a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/common-issues.mdx
index f8d0c9c004c2..65e8e3d4a8a3 100644
--- a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/common-issues.mdx
+++ b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/common-issues.mdx
@@ -2,7 +2,7 @@ title: Common AssemblyScript Issues
 ---
 
-There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues:
+There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues:
 
 - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
 - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
diff --git a/website/src/pages/ro/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ro/subgraphs/developing/creating/install-the-cli.mdx
index f98ef589aaef..ee168286548b 100644
--- a/website/src/pages/ro/subgraphs/developing/creating/install-the-cli.mdx
+++ b/website/src/pages/ro/subgraphs/developing/creating/install-the-cli.mdx
@@ -2,11 +2,11 @@ title: Install the Graph CLI
 ---
 
-> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
+> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
 
 ## Overview
 
-The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
 
 ## Getting Started
@@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest
 yarn global add @graphprotocol/graph-cli
 ```
 
-The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started.
+The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started.
 
 ## Creează un Subgraf
 
 ### From an Existing Contract
 
-The following command creates a subgraph that indexes all events of an existing contract:
+The following command creates a Subgraph that indexes all events of an existing contract:
 
 ```sh
 graph init \
@@ -51,25 +51,25 @@ graph init \
 
 - If any of the optional arguments are missing, it guides you through an interactive form.
 
-- The `<SUBGRAPH_SLUG>` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page.
+- The `<SUBGRAPH_SLUG>` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI supports adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/ro/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ro/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..7e0f889447c5 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
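The storage guidance above can be sketched in plain TypeScript (independent of graph-ts; the `Token`/`TokenBalance` names are only illustrative): the relationship is stored once on the 'many' entity, and the 'one' side's collection is derived by filtering, which is conceptually what a `@derivedFrom` virtual field does at query time.

```typescript
// Hypothetical in-memory shapes mirroring a Token / TokenBalance schema (not graph-ts types).
interface Token { id: string }
interface TokenBalance { id: string; token: string; amount: number } // stores the 'one' side only

const balances: TokenBalance[] = [
  { id: "b1", token: "tokenA", amount: 10 },
  { id: "b2", token: "tokenA", amount: 5 },
  { id: "b3", token: "tokenB", amount: 7 },
]

// A field declared @derivedFrom(field: "token") behaves like this filter:
// the 'many' side is computed at query time and is never written by mappings.
function derivedBalances(tokenId: string): TokenBalance[] {
  return balances.filter((b) => b.token === tokenId)
}

const tokenABalances = derivedBalances("tokenA") // two balances derive from tokenA
```

The point of the sketch: no `TokenBalance[]` array is ever stored on `Token`, so updating a balance never rewrites the token entity.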
#### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore in a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Languages supported diff --git a/website/src/pages/ro/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ro/subgraphs/developing/creating/starting-your-subgraph.mdx index 4823231d9a40..180a343470b1 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| Version | Release notes | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ro/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ro/subgraphs/developing/creating/subgraph-manifest.mdx index a42a50973690..78e4a3a55e7d 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Defining a Call Handler @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Supported Filters @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
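As a conceptual sketch (not Graph Node's actual scheduling code), the block-handler filter semantics described above can be modeled in a few lines of TypeScript: no filter fires on every block, a `once` filter fires a single time at the start of indexing, and a polling filter fires at a fixed block interval.

```typescript
// Hypothetical model of block-handler filters; Graph Node's real implementation differs in detail.
type BlockFilter =
  | { kind: "polling"; every: number } // run every `n` blocks
  | { kind: "once" }                   // run a single time, before other handlers
  | null                               // no filter: run on every block

function shouldRun(filter: BlockFilter, blockNumber: number, startBlock: number): boolean {
  if (filter === null) return true
  if (filter.kind === "once") return blockNumber === startBlock
  return (blockNumber - startBlock) % filter.every === 0
}
```

Under this sketch, a polling handler with `every: 10` and a start block of 100 would run at blocks 100, 110, 120, and so on.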
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
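The matching rule just stated — the signature and topic 0 must both match — can be sketched as a small TypeScript predicate. This is a simplified model only (real EVM logs carry 32-byte topics, and the filter is declared in the manifest's `eventHandlers` entry):

```typescript
// Simplified log-matching model: topic 0 is the event-signature hash, and
// subsequent topics are indexed arguments; `null` in the filter means "any value".
interface Log { topics: string[] }

function matchesHandler(log: Log, signatureHash: string, indexedFilter: (string | null)[]): boolean {
  if (log.topics[0] !== signatureHash) return false // the signature must match first
  return indexedFilter.every((want, i) => want === null || log.topics[i + 1] === want)
}
```

A log with a matching signature but a non-matching indexed argument is skipped, which is exactly why indexed-argument filters reduce the number of handler invocations.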
Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ro/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ro/subgraphs/developing/creating/unit-testing-framework.mdx index 2133c1d4b5c9..e56e1109bc04 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test!
👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. 
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as` ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme). ## Feedback diff --git a/website/src/pages/ro/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ro/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/ro/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ro/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. 
However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/ro/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ro/subgraphs/developing/deploying/using-subgraph-studio.mdx index 634c2700ba68..77d10212c770 100644 --- a/website/src/pages/ro/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ro/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Must not use any of the following features: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/ro/subgraphs/developing/developer-faq.mdx b/website/src/pages/ro/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/ro/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/ro/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/ro/subgraphs/developing/introduction.mdx b/website/src/pages/ro/subgraphs/developing/introduction.mdx index 615b6cec4c9c..06bc2b76104d 100644 --- a/website/src/pages/ro/subgraphs/developing/introduction.mdx +++ b/website/src/pages/ro/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
diff --git a/website/src/pages/ro/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ro/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/ro/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ro/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/ro/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ro/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/ro/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ro/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ro/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ro/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/ro/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ro/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ro/subgraphs/developing/subgraphs.mdx b/website/src/pages/ro/subgraphs/developing/subgraphs.mdx index ff37e00042e6..f061203d6ea6 100644 --- a/website/src/pages/ro/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ro/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgrafuri ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ro/subgraphs/explorer.mdx b/website/src/pages/ro/subgraphs/explorer.mdx index f29f2a3602d9..499fcede88d3 100644 --- a/website/src/pages/ro/subgraphs/explorer.mdx +++ b/website/src/pages/ro/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Signal/Un-signal on subgraphs +- Signal/Un-signal on Subgraphs - View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraphs Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics: @@ -223,13 +223,13 @@ Keep in mind that this chart is horizontally scrollable, so if you scroll all th ### Curating Tab -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. Within this tab, you’ll find an overview of: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Updated at date details ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/ro/subgraphs/guides/_meta.js b/website/src/pages/ro/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/ro/subgraphs/guides/_meta.js +++ b/website/src/pages/ro/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/ro/subgraphs/guides/arweave.mdx b/website/src/pages/ro/subgraphs/guides/arweave.mdx index 08e6c4257268..e59abffa383f 100644 --- a/website/src/pages/ro/subgraphs/guides/arweave.mdx +++ b/website/src/pages/ro/subgraphs/guides/arweave.mdx @@ -92,9 +92,9 @@ Arweave data sources support two types of handlers: - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. 
Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` > The source.owner can be the owner's address, or their Public Key. - +> > Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. ## Schema Definition diff --git a/website/src/pages/ro/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ro/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..ab5076c5ebf4 100644 --- a/website/src/pages/ro/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/ro/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Overview -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,47 +18,59 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. 
Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress @@ -66,11 +82,11 @@ or cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. 
Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/ro/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ro/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..09f1939c1fde --- /dev/null +++ b/website/src/pages/ro/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. 
Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Improve your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
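A dependent Subgraph declares its source as a `kind: subgraph` data source in its manifest. The sketch below is illustrative only; the name, network, entity, and deployment ID are placeholders, and the field layout follows the specVersion 1.3.0 manifest format introduced in graph-node v0.37.0:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # a Subgraph data source instead of an onchain one
    name: SourceSubgraph # placeholder name
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentId' # placeholder deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - Block
      handlers:
        - handler: handleBlock # triggered by `Block` entities written by the source
          entity: Block
```

Instead of watching a contract address, the handler fires on entities emitted by the referenced source deployment.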
+ +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but entities composed on top of them cannot use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t use normal event, call, or block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs. + +## Get Started + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance. + +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/ro/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ro/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..9a4b037cafbc 100644 --- a/website/src/pages/ro/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/ro/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment diff --git a/website/src/pages/ro/subgraphs/querying/best-practices.mdx b/website/src/pages/ro/subgraphs/querying/best-practices.mdx index ff5f381e2993..ab02b27cbc03 100644 --- a/website/src/pages/ro/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/ro/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. 
--- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ro/subgraphs/querying/from-an-application.mdx b/website/src/pages/ro/subgraphs/querying/from-an-application.mdx index 708dcfde2fdc..fe2372bd15b1 100644 --- a/website/src/pages/ro/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ro/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Querying from an Application +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
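As a minimal sketch of hitting such an endpoint without any client library, a standard `fetch` call might look like the following. The endpoint shape matches the documented URL above, but the API key, Subgraph ID, and entity name are placeholders, not real values:

```javascript
// Placeholder endpoint: substitute your own API key and Subgraph ID.
const ENDPOINT = 'https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>';

// GraphQL over HTTP: a POST request whose JSON body carries the query text.
function buildRequest(query, variables = {}) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  };
}

async function querySubgraph(query, variables) {
  const res = await fetch(ENDPOINT, buildRequest(query, variables));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data;
}

// Example usage (the `tokens` entity is hypothetical; use one from your schema):
// querySubgraph('{ tokens(first: 5) { id } }').then(console.log);
```

The same pattern works against a Subgraph Studio endpoint; only the URL changes.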
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Pasul 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Pasul 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Pasul 1 diff --git a/website/src/pages/ro/subgraphs/querying/graph-client/README.md b/website/src/pages/ro/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/ro/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ro/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/ro/subgraphs/querying/graphql-api.mdx b/website/src/pages/ro/subgraphs/querying/graphql-api.mdx index b3003ece651a..e10201771989 100644 --- a/website/src/pages/ro/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ro/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
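For example, for a hypothetical `Token` entity (the entity name and ID value are placeholders), the generated singular and plural fields could be queried as follows:

```graphql
{
  token(id: "0xplaceholder") {
    id
  }
  tokens(first: 10) {
    id
  }
}
```

The singular field fetches one record by ID, while the plural field returns a filterable, paginated collection.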
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. 
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block; if not, the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
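For instance, a sketch of pinning the metadata query to a specific block (the block number here is purely illustrative):

```graphql
{
  _meta(block: { number: 1234567 }) {
    deployment
    hasIndexingErrors
    block {
      number
      hash
      timestamp
    }
  }
}
```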
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ro/subgraphs/querying/introduction.mdx b/website/src/pages/ro/subgraphs/querying/introduction.mdx index 36ea85c37877..2c9c553293fa 100644 --- a/website/src/pages/ro/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ro/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/ro/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ro/subgraphs/querying/managing-api-keys.mdx index 6964b1a7ad9b..aed3d10422e1 100644 --- a/website/src/pages/ro/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ro/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ro/subgraphs/querying/python.mdx b/website/src/pages/ro/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/ro/subgraphs/querying/python.mdx +++ b/website/src/pages/ro/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/ro/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ro/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/ro/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ro/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this results in the need to update the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/ro/subgraphs/quick-start.mdx b/website/src/pages/ro/subgraphs/quick-start.mdx index dc280ec699d3..a803ac8695fa 100644 --- a/website/src/pages/ro/subgraphs/quick-start.mdx +++ b/website/src/pages/ro/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Quick Start --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. 
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-See the following screenshot for an example for what to expect when initializing your subgraph: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. 
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. 
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/ro/substreams/developing/dev-container.mdx b/website/src/pages/ro/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/ro/substreams/developing/dev-container.mdx +++ b/website/src/pages/ro/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/ro/substreams/developing/sinks.mdx b/website/src/pages/ro/substreams/developing/sinks.mdx index 5f6f9de21326..48c246201e8f 100644 --- a/website/src/pages/ro/substreams/developing/sinks.mdx +++ b/website/src/pages/ro/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/ro/substreams/developing/solana/account-changes.mdx b/website/src/pages/ro/substreams/developing/solana/account-changes.mdx index a282278c7d91..8c821acaee3f 100644 --- a/website/src/pages/ro/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/ro/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes). 
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/ro/substreams/developing/solana/transactions.mdx b/website/src/pages/ro/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/ro/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ro/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. 
### SQL diff --git a/website/src/pages/ro/substreams/introduction.mdx b/website/src/pages/ro/substreams/introduction.mdx index e11174ee07c8..0bd1ea21c9f6 100644 --- a/website/src/pages/ro/substreams/introduction.mdx +++ b/website/src/pages/ro/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/ro/substreams/publishing.mdx b/website/src/pages/ro/substreams/publishing.mdx index 3d1a3863c882..3d93e6f9376f 100644 --- a/website/src/pages/ro/substreams/publishing.mdx +++ b/website/src/pages/ro/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! 
You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/ro/supported-networks.mdx b/website/src/pages/ro/supported-networks.mdx index d25956f8a037..554c558ded7e 100644 --- a/website/src/pages/ro/supported-networks.mdx +++ b/website/src/pages/ro/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. 
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/ro/token-api/_meta-titles.json b/website/src/pages/ro/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/ro/token-api/_meta-titles.json +++ b/website/src/pages/ro/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/ro/token-api/_meta.js b/website/src/pages/ro/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/ro/token-api/_meta.js +++ b/website/src/pages/ro/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/ro/token-api/faq.mdx b/website/src/pages/ro/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/ro/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? 
+ +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. 
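As a quick illustration of that header, here is a minimal JavaScript sketch. The `buildAuthHeaders` helper is hypothetical (not part of the documented API surface), and the token value and endpoint path in the usage comment are placeholders:

```javascript
// Sketch: constructing the Authorization header for a Token API request.
// Substitute a real access token generated on The Graph Market
// (the access token, not the raw API key).
function buildAuthHeaders(accessToken) {
  return {
    Authorization: `Bearer ${accessToken}`, // the "Bearer " prefix is required
    Accept: 'application/json', // optional; the API returns JSON by default
  };
}

// Hypothetical usage with fetch:
// fetch('https://token-api.thegraph.com/balances/evm/0xabc...', {
//   headers: buildAuthHeaders(process.env.ACCESS_TOKEN),
// });
```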
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error.
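The pagination parameters and the empty-`data` termination condition combine naturally into a fetch-all loop. A hedged sketch: `fetchPage` is a stand-in for the real HTTP call, not part of the API, and the loop stops as soon as a page comes back short or empty:

```javascript
// Sketch: collect all results by walking `page` until a short/empty page.
// `limit` may be up to 500; `page` is 1-indexed, per the FAQ above.
async function fetchAll(fetchPage, limit = 50) {
  const all = [];
  for (let page = 1; ; page++) {
    const { data } = await fetchPage({ limit, page });
    all.push(...data);
    // A short or empty `data` array means no further records, not an error.
    if (data.length < limit) break;
  }
  return all;
}
```

In a real client, `fetchPage` would issue the GET request with your `Authorization` header and return the parsed JSON body.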
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/ro/token-api/mcp/claude.mdx b/website/src/pages/ro/token-api/mcp/claude.mdx index 0da8f2be031d..12a036b6fc24 100644 --- a/website/src/pages/ro/token-api/mcp/claude.mdx +++ b/website/src/pages/ro/token-api/mcp/claude.mdx @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/ro/token-api/mcp/cline.mdx b/website/src/pages/ro/token-api/mcp/cline.mdx index ab54c0c8f6f0..ef98e45939fe 100644 --- a/website/src/pages/ro/token-api/mcp/cline.mdx +++ b/website/src/pages/ro/token-api/mcp/cline.mdx @@ -10,7 +10,7 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) ## Configuration diff --git a/website/src/pages/ru/about.mdx b/website/src/pages/ru/about.mdx index 35f9c6efd933..d940c455bdf7 100644 --- a/website/src/pages/ru/about.mdx +++ b/website/src/pages/ru/about.mdx @@ -24,31 +24,31 @@ The Graph — это мощный децентрализованный прот Децентрализованному приложению (dapp), запущенному в браузере, потребуются **часы или даже дни**, чтобы получить ответ на эти простые вопросы. -Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +В качестве альтернативы у Вас есть возможность настроить собственный сервер, обрабатывать транзакции, хранить их в базе данных и создать конечную точку API для запроса данных. Однако этот вариант [ресурсоемок](/resources/benefits/), требует обслуживания, создает единую точку отказа и нарушает важные требования безопасности, необходимые для децентрализации. Такие свойства блокчейна, как окончательность, реорганизация чейна и необработанные блоки, усложняют процесс, делая получение точных результатов запроса из данных блокчейна трудоемким и концептуально сложным. 
## The Graph предлагает решение -The Graph решает эту проблему с помощью децентрализованного протокола, который индексирует и обеспечивает эффективный и высокопроизводительный запрос данных блокчейна. Эти API (индексированные «субграфы») затем могут быть запрошены с помощью стандартного API GraphQL. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Сегодня существует децентрализованный протокол, поддерживаемый реализацией с открытым исходным кодом [Graph Node](https://github.com/graphprotocol/graph-node), который обеспечивает этот процесс. ### Как функционирует The Graph -Индексирование данных блокчейна очень сложный процесс, но The Graph упрощает его. The Graph учится индексировать данные Ethereum с помощью субграфов. Субграфы — это пользовательские API, построенные на данных блокчейна, которые извлекают данные из блокчейна, обрабатывают их и сохраняют так, чтобы их можно было легко запрашивать через GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### Специфические особенности -- В The Graph используются описания субграфов, которые называются манифестами субграфов внутри субграфа. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- В описании субграфа описываются смарт-контракты, представляющие интерес для субграфа, события в этих контрактах, на которых следует сосредоточиться, а также способы сопоставления данных о событиях с данными, которые The Graph будет хранить в своей базе данных. 
+- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- При создании субграфа Вам необходимо написать манифест субграфа. +- When creating a Subgraph, you need to write a Subgraph manifest. -- После написания `манифеста субграфа` Вы можете использовать Graph CLI для сохранения определения в IPFS и дать команду индексатору начать индексирование данных для этого субграфа. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -На диаграмме ниже представлена ​​более подробная информация о потоке данных после развертывания манифеста субграфа с транзакциями Ethereum. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![График, объясняющий потребителям данных, как The Graph использует Graph Node для обслуживания запросов](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The Graph решает эту проблему с помощью децентр 1. Dapp добавляет данные в Ethereum через транзакцию в смарт-контракте. 2. Смарт-контракт генерирует одно или несколько событий во время обработки транзакции. -3. Graph Node постоянно сканирует Ethereum на наличие новых блоков и данных для Вашего субграфа, которые они могут содержать. -4. The Graph нода затем разбирает события, относящиеся к Вашему субграфу, которые записаны в данном блоке и структурирует их согласно схеме данных описанной в subgraph используя модуль WASM. Затем данные сохраняются в таблицы базы данных Graph Node. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. 
The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Dapp запрашивает у Graph Node данные, проиндексированные с блокчейна, используя [конечную точку GraphQL](https://graphql.org/learn/) ноды. В свою очередь, Graph Node переводит запросы GraphQL в запросы к его базовому хранилищу данных, чтобы получить эти данные, используя возможности индексации этого хранилища. Dapp отображает эти данные в насыщенном пользовательском интерфейсе для конечных пользователей, который они используют для создания новых транзакций в Ethereum. Цикл повторяется. ## Что далее -В следующих разделах более подробно рассматриваются субграфы, их развертывание и запросы данных. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Прежде чем писать собственный субграф, рекомендуется ознакомиться с [Graph Explorer](https://thegraph.com/explorer) и изучить некоторые из уже развернутых субграфов. Страница каждого субграфа включает в себя тестовую площадку GraphQL, позволяющую запрашивать его данные. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/ru/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ru/archived/arbitrum/arbitrum-faq.mdx index 0375e85a7135..5e7bf098577d 100644 --- a/website/src/pages/ru/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/ru/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ title: Часто задаваемые вопросы об Arbitrum - Безопасность, унаследованную от Ethereum -Масштабирование смарт-контрактов протокола на L2 позволяет участникам сети взаимодействовать чаще и с меньшими затратами на комиссии за газ. 
Например, Индексаторы могут чаще открывать и закрывать аллокации, чтобы индексировать большее количество субграфов. Разработчики могут с большей легкостью разворачивать и обновлять субграфы, а Делегаторы — чаще делегировать GRT. Кураторы могут добавлять или удалять сигнал для большего количества субграфов — действия, которые ранее считались слишком затратными для частого выполнения из-за стоимости газа. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Решение о продолжении сотрудничества с Arbitrum было принято в прошлом году по итогам обсуждения сообществом The Graph [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ title: Часто задаваемые вопросы об Arbitrum ![Выпадающий список для переключения на Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Что мне нужно делать сейчас как разработчику субграфа, потребителю данных, индексатору, куратору или делегатору? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ Network participants must move to Arbitrum to continue participating in The Grap Все было тщательно протестировано, и разработан план действий на случай непредвиденных обстоятельств, чтобы обеспечить безопасный и непрерывный переход. 
Подробности можно найти [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Работают ли существующие субграфы на Ethereum? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Есть ли у GRT новый смарт-контракт, развернутый на Arbitrum? diff --git a/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-faq.mdx index ebb1f3b1b165..4982403c1db2 100644 --- a/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type Инструменты переноса L2 используют встроенный механизм Arbitrum для передачи сообщений с L1 на L2. Этот механизм называется "retryable ticket", или "повторный тикет", и используется всеми собственными токен-мостами, включая мост Arbitrum GRT. Подробнее о повторном тикете можно прочитать в [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Когда Вы переносите свои активы (субграф, стейк, делегирование или курирование) на L2, через мост Arbitrum GRT отправляется сообщение, которое создает повторный тикет на L2. Инструмент переноса включает в транзакцию некоторую стоимость ETH, которая используется для 1) оплаты создания тикета и 2) оплаты стоимости газа для выполнения тикета на L2. Однако, поскольку стоимость газа может измениться за время, пока тикет будет готов к исполнению на L2, возможна ситуация, когда попытка автоматического исполнения не удастся. 
В этом случае мост Arbitrum сохранит повторный тикет в течение 7 дней, и любой желающий может повторить попытку "погасить" тикет (для этого необходимо иметь кошелек с некоторым количеством ETH, подключенный к мосту Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Это так называемый шаг "Подтверждение" во всех инструментах переноса - в большинстве случаев он выполняется автоматически, поскольку автоисполнение чаще всего бывает успешным, но важно, чтобы Вы проверили, прошел ли он. Если он не исполнился и в течение 7 дней не будет повторных успешных попыток, мост Arbitrum отменит тикет, и Ваши активы (субграф, стейк, делегирование или курирование) будут потеряны и не смогут быть восстановлены. У разработчиков ядра The Graph есть система мониторинга, позволяющая выявлять такие ситуации и пытаться погасить тикеты, пока не стало слишком поздно, но в конечном итоге ответственность за своевременное завершение переноса лежит на Вас. Если у Вас возникли проблемы с подтверждением переноса, пожалуйста, свяжитесь с нами через [эту форму] (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms), и разработчики ядра помогут Вам. 
+This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### Я начал передачу делегирования/стейка/курирования и не уверен, что она дошла до уровня L2. Как я могу убедиться, что она была передана правильно? @@ -36,43 +36,43 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type ## Перенос субграфа -### Как мне перенести свой субграф? +### How do I transfer my Subgraph? -Чтобы перенести Ваш субграф, необходимо выполнить следующие действия: +To transfer your Subgraph, you will need to complete the following steps: 1. Инициировать перенос в основной сети Ethereum 2. Подождать 20 минут для получения подтверждения -3. Подтвердить перенос субграфа в Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Завершить публикацию субграфа в Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Обновить URL-адрес запроса (рекомендуется) -\* Обратите внимание, что Вы должны подтвердить перенос в течение 7 дней, иначе Ваш субграф может быть потерян. В большинстве случаев этот шаг выполнится автоматически, но в случае скачка стоимости комиссии сети в Arbitrum может потребоваться ручное подтверждение.
Если в ходе этого процесса возникнут какие-либо проблемы, Вам помогут: обратитесь в службу поддержки по адресу support@thegraph.com или в [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### С чего необходимо начать перенос? -Вы можете начать перенос со страницы [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) или любой другой страницы с информацией о субграфе. Для начала переноса нажмите кнопку "Перенести субграф" на странице сведений о субграфе. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Как долго мне необходимо ждать, пока мой субграф будет перенесен +### How long do I need to wait until my Subgraph is transferred Время переноса занимает около 20 минут. Мост Arbitrum работает в фоновом режиме, чтобы автоматически завершить перенос через мост. В некоторых случаях стоимость комиссии сети может повыситься, и Вам потребуется повторно подтвердить транзакцию. -### Будет ли мой субграф по-прежнему доступен для поиска после того, как я перенесу его на L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Ваш субграф можно будет найти только в той сети, в которой он опубликован. Например, если Ваш субграф находится в сети Arbitrum One, то Вы сможете найти его в Explorer только в сети Arbitrum One, и не сможете найти в сети Ethereum. 
Обратите внимание, что в переключателе сетей в верхней части страницы выбран Arbitrum One, чтобы убедиться, что Вы находитесь в правильной сети. После переноса субграф L1 будет отображаться как устаревший. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Должен ли мой субграф быть опубликован, чтобы его можно было перенести? +### Does my Subgraph need to be published to transfer it? -Чтобы воспользоваться инструментом переноса субграфа, Ваш субграф должен быть уже опубликован в основной сети Ethereum и иметь какой-либо сигнал курирования, принадлежащий кошельку, которому принадлежит субграф. Если Ваш субграф не опубликован, рекомендуется просто опубликовать его непосредственно на Arbitrum One - связанная с этим стоимость комиссии сети будет значительно ниже. Если Вы хотите перенести опубликованный субграф, но на счете владельца нет сигнала курирования, Вы можете подать сигнал на небольшую сумму (например, 1 GRT) с этого счета; при этом обязательно выберите сигнал "автомиграция". +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. 
-### Что произойдет с версией моего субграфа в основной сети Ethereum после его переноса на Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -После переноса Вашего субграфа на Arbitrum версия, находящаяся на основной сети Ethereum станет устаревшей. Мы рекомендуем Вам обновить URL-адрес запроса в течение 48 часов. Однако существует отсрочка, в течение которой Ваш URL-адрес на основной сети будет функционировать, чтобы можно было обновить стороннюю поддержку децентрализованных приложений. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Нужно ли мне после переноса повторно опубликовываться на Arbitrum? @@ -80,21 +80,21 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type ### Будет ли моя конечная точка простаивать при повторной публикации? -Это маловероятно, но возможно возникновение кратковременного простоя в зависимости от того, какие индексаторы поддерживают субграф на уровне L1 и продолжают ли они индексировать его до тех пор, пока субграф не будет полностью поддерживаться на уровне L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Публикация и версионность на L2 такие же, как и в основной сети Ethereum? -Да. При публикации в Subgraph Studio выберите Arbitrum One в качестве публикуемой сети. В Studio будет доступна последняя конечная точка, которая указывает на последнюю обновленную версию субграфа. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. 
In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Будет ли курирование моего субграфа перемещено вместе с моим субграфом? +### Will my Subgraph's curation move with my Subgraph? -Если Вы выбрали автомиграцию сигнала, то 100% Вашего собственного кураторства переместится вместе с Вашим субграфом на Arbitrum One. Весь сигнал курирования субграфа будет преобразован в GRT в момент переноса, а GRT, соответствующий Вашему сигналу курирования, будет использован для обработки сигнала на субграфе L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Другие кураторы могут выбрать, снять ли им свою долю GRT, или также перевести ее в L2 для обработки сигнала на том же субграфе. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Могу ли я переместить свой субграф обратно в основную сеть Ethereum после переноса? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -После переноса Ваша версия данного субграфа в основной сети Ethereum станет устаревшей. Если Вы захотите вернуться в основную сеть, Вам нужно будет переразвернуть и снова опубликовать субграф в основной сети. Однако перенос обратно в основную сеть Ethereum настоятельно не рекомендуется, так как вознаграждения за индексирование в конечном итоге будут полностью распределяться на Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. 
However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Зачем мне необходимо использовать мост ETH для завершения переноса? @@ -206,19 +206,19 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type \*При необходимости - т.е. если Вы используете контрактный адрес. -### Как я узнаю, что курируемый мною субграф перешел в L2? +### How will I know if the Subgraph I curated has moved to L2? -При просмотре страницы сведений о субграфе появится баннер, уведомляющий о том, что данный субграф был перенесен. Вы можете следовать подсказке, чтобы перенести свое курирование. Эту информацию можно также найти на странице сведений о субграфе любого перемещенного субграфа. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Что делать, если я не хочу переносить свое курирование в L2? -Когда субграф устаревает, у Вас есть возможность отозвать свой сигнал. Аналогично, если субграф переместился в L2, Вы можете выбрать, отозвать свой сигнал из основной сети Ethereum или отправить его в L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Как я узнаю, что мое курирование успешно перенесено? Информация о сигнале будет доступна через Explorer примерно через 20 минут после запуска инструмента переноса L2. -### Можно ли перенести курирование на несколько субграфов одновременно? +### Can I transfer my curation on more than one Subgraph at a time? В настоящее время опция массового переноса отсутствует. 
@@ -266,7 +266,7 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type ### Должен ли я индексироваться на Arbitrum перед тем, как перенести стейк? -Вы можете эффективно перенести свой стейк до начала настройки индексации, но Вы не сможете претендовать на вознаграждение на L2 до тех пор, пока не распределите субграфы на L2, не проиндексируете их, а также пока не представите POI. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Могут ли делегаторы перемещать свои делегации до того, как я перемещу свой индексируемый стейк? diff --git a/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-guide.mdx index 1dc689d934d3..b3509a9c7f8d 100644 --- a/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph упростил переход на L2 в Arbitrum One. Для каж Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Как перенести свой субграф в Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Преимущества переноса Ваших субграфов +## Benefits of transferring your Subgraphs Сообщество и разработчики ядра The Graph [готовились](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) к переходу на Arbitrum в течение прошлого года. Arbitrum, блокчейн уровня 2 или «L2», наследует безопасность от Ethereum, но обеспечивает значительно более низкую комиссию сети. 
-Когда Вы публикуете или обновляете свой субграф до The Graph Network, Вы взаимодействуете со смарт-контрактами по протоколу, и для этого требуется проплачивать комиссию сети с помощью ETH. После перемещения Ваших субграфов в Arbitrum, любые будущие обновления Вашего субграфа потребуют гораздо более низких сборов за комиссию сети. Более низкие сборы и тот факт, что кривые связи курирования на L2 ровные, также облегчают другим кураторам курирование Вашего субграфа, увеличивая вознаграждение для индексаторов в Вашем субграфе. Эта менее затратная среда также упрощает индексацию и обслуживание Вашего субграфа. В ближайшие месяцы вознаграждения за индексацию в Arbitrum будут увеличиваться, а в основной сети Ethereum уменьшаться, поэтому все больше и больше индексаторов будут переводить свои стейки и настраивать операции на L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Понимание того, что происходит с сигналом, Вашим субграфом L1 и URL-адресами запроса +## Understanding what happens with signal, your L1 Subgraph and query URLs -Для передачи субграфа в Arbitrum используется мост Arbitrum GRT, который, в свою очередь, использует собственный мост Arbitrum для отправки субграфа на L2. 
«Перенос» отменяет поддержку субграфа в основной сети и отправляет информацию для повторного создания субграфа на L2 с использованием моста. Он также будет включать сигнал GRT владельца субграфа, который должен быть больше нуля, чтобы мост смог принять передачу. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Когда Вы решите передать субграф, весь сигнал курирования подграфа будет преобразован в GRT. Это эквивалентно «прекращению поддержки» субграфа в основной сети. GRT, соответствующие Вашему кураторству, будут отправлен на L2 вместе с субграфом, где они будут использоваться для производства сигнала от Вашего имени. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Другие Кураторы могут выбрать, вывести ли свою долю GRT или также перевести ее в L2 для производства сигнала на том же субграфе. Если владелец субграфа не перенесет свой субграф в L2 и вручную аннулирует его с помощью вызова контракта, то Кураторы будут уведомлены и смогут отозвать свое курирование. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. 
-Индексаторы больше не будут получать вознаграждение за индексирование субграфа, как только субграф будет перенесён, так как всё курирование конвертируется в GRT. Однако будут индексаторы, которые 1) продолжат обслуживать переданные субграфы в течение 24 часов и 2) немедленно начнут индексировать субграф на L2. Поскольку эти индексаторы уже проиндексировали субграф, не нужно будет ждать синхронизации субграфа, и можно будет запросить субграф L2 практически сразу. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Запросы к субграфу L2 необходимо будет выполнять по другому URL-адресу (на `arbitrum-gateway.thegraph.com`), но URL-адрес L1 будет продолжать работать в течение как минимум 48 часов. После этого шлюз L1 будет перенаправлять запросы на шлюз L2 (на некоторое время), но это увеличит задержку, поэтому рекомендуется как можно скорее переключить все Ваши запросы на новый URL-адрес. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Выбор Вашего кошелька L2 -Когда Вы опубликовали свой субграф в основной сети, Вы использовали подключенный кошелек для его создания, и этот кошелек обладает NFT, который представляет этот субграф и позволяет Вам публиковать обновления. 
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -При переносе субграфа в Arbitrum Вы можете выбрать другой кошелек, которому будет принадлежать этот NFT субграфа на L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Если Вы используете «обычный» кошелек, такой как MetaMask (Externally Owned Account или EOA, то есть кошелек, который не является смарт-контрактом), тогда это необязательно, и рекомендуется сохранить тот же адрес владельца, что и в L1. -Если Вы используете смарт-контрактный кошелек, такой как кошелёк с мультиподписью (например, Safe), то выбор другого адреса кошелька L2 является обязательным, так как, скорее всего, эта учетная запись существует только в основной сети, и Вы не сможете совершать транзакции в сети Arbitrum с помощью этого кошелька. Если Вы хотите продолжать использовать кошелек смарт-контрактов или мультиподпись, создайте новый кошелек на Arbitrum и используйте его адрес в качестве владельца L2 Вашего субграфа. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Очень важно использовать адрес кошелька, которым Вы управляете и с которого можно совершать транзакции в Arbitrum. В противном случае субграф будет потерян и его невозможно будет восстановить.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. 
Otherwise, the Subgraph will be lost and cannot be recovered.** ## Подготовка к переносу: использование моста с некоторым количеством ETH -Передача субграфа включает в себя отправку транзакции через мост, а затем выполнение другой транзакции в Arbitrum. Первая транзакция использует ETH в основной сети и включает некоторое количество ETH для оплаты комиссии сети при получении сообщения на уровне L2. Однако, если этого количества будет недостаточно, Вам придется повторить транзакцию и оплатить комиссию сети непосредственно на L2 (это «Шаг 3: Подтверждение перевода» ниже). Этот шаг **должен быть выполнен в течение 7 дней после начала переноса**. Более того, вторая транзакция («Шаг 4: Завершение перевода на L2») будет выполнена непосредственно на Arbitrum. В связи с этим Вам понадобится некоторое количество ETH на кошельке Arbitrum. Если Вы используете учетную запись с мультиподписью или смарт-контрактом, ETH должен находиться в обычном (EOA) кошельке, который Вы используете для выполнения транзакций, а не в самом кошельке с мультиподписью. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. 
Вы можете приобрести ETH на некоторых биржах и вывести его напрямую на Arbitrum, или Вы можете использовать мост Arbitrum для отправки ETH из кошелька основной сети на L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Поскольку плата за комиссию сети в Arbitrum ниже, Вам понадобится лишь небольшая сумма. Рекомендуется начинать с низкого порога (например, 0,01 ETH), чтобы Ваша транзакция была одобрена. -## Поиск инструмента переноса субграфа +## Finding the Subgraph Transfer Tool -Вы можете найти инструмент переноса L2, когда просматриваете страницу своего субграфа в Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![инструмент переноса](/img/L2-transfer-tool1.png) -Он также доступен в Explorer, если Вы подключены к кошельку, которому принадлежит субграф, и на странице этого субграфа в Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Перенос на L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## Шаг 1: Запуск перевода -Прежде чем начать перенос, Вы должны решить, какому адресу будет принадлежать субграф на L2 (см. «Выбор кошелька L2» выше), также настоятельно рекомендуется иметь некоторое количество ETH для оплаты комиссии сети за соединение мостом с Arbitrum (см. «Подготовка к переносу: использование моста с некоторым количеством ETH" выше). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). 
-Также обратите внимание, что для передачи субграфа требуется наличие ненулевого количества сигнала в субграфе с той же учетной записью, которая владеет субграфом; если Вы не просигнализировали на субграфе, Вам придется добавить немного монет для курирования (достаточно добавить небольшую сумму, например 1 GRT). +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -После открытия инструмента переноса Вы сможете ввести адрес кошелька L2 в поле «Адрес получающего кошелька» — **убедитесь, что Вы ввели здесь правильный адрес**. После нажатия на «Перевод субграфа», Вам будет предложено выполнить транзакцию в Вашем кошельке (обратите внимание, что некоторое количество ETH включено для оплаты газа L2); это инициирует передачу и отменит Ваш субграф на L1 (см. «Понимание того, что происходит с сигналом, Вашим субграфом L1 и URL-адресами запроса» выше для получения более подробной информации о том, что происходит за кулисами). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). 
-Если Вы выполните этот шаг, ** убедитесь в том, что Вы завершили шаг 3 менее чем за 7 дней, иначе субграф и Ваш сигнал GRT будут утеряны.** Это связано с тем, как в Arbitrum работает обмен сообщениями L1-L2: сообщения, которые отправляются через мост, представляют собой «билеты с возможностью повторной попытки», которые должны быть выполнены в течение 7 дней, и для первоначального исполнения может потребоваться повторная попытка, если в Arbitrum будут скачки цен комиссии сети. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Запустите перенос на L2](/img/startTransferL2.png) -## Шаг 2: Ожидание перехода субграфа в L2 +## Step 2: Waiting for the Subgraph to get to L2 -После того, как Вы начнете передачу, сообщение, которое отправляет Ваш субграф с L1 в L2, должно пройти через мост Arbitrum. Это занимает примерно 20 минут (мост ожидает, пока блок основной сети, содержащий транзакцию, будет «защищен» от потенциальных реорганизаций чейна). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). По истечении этого времени ожидания Arbitrum попытается автоматически выполнить перевод по контрактам L2. 
@@ -80,7 +80,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## Шаг 3: Подтверждение переноса -В большинстве случаев этот шаг будет выполняться автоматически, поскольку комиссии сети L2, включенной в шаг 1, должно быть достаточно для выполнения транзакции, которая получает субграф в контрактах Arbitrum. Однако в некоторых случаях возможно, что скачок цен комиссии сети на Arbitrum приведёт к сбою этого автоматического выполнения. В этом случае «тикет», который отправляет ваш субграф на L2, будет находиться в ожидании и потребует повторной попытки в течение 7 дней. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. В этом случае Вам нужно будет подключиться с помощью кошелька L2, в котором есть некоторое количество ETH в сети Arbitrum, переключить сеть Вашего кошелька на Arbitrum и нажать «Подтвердить перевод», чтобы повторить транзакцию. @@ -88,33 +88,33 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## Шаг 4: Завершение переноса в L2 -На данный момент Ваш субграф и GRT получены в Arbitrum, но субграф еще не опубликован. Вам нужно будет подключиться с помощью кошелька L2, который Вы выбрали в качестве принимающего кошелька, переключить сеть Вашего кошелька на Arbitrum и нажать «Опубликовать субграф». +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." 
-![Опубликуйте субграф](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Дождитесь публикации субграфа](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Субграф будет опубликован, и индексаторы, работающие на Arbitrum, смогут начать его обслуживание. Он также будет создавать сигнал курирования, используя GRT, переданные из L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Шаг 5. Обновление URL-адреса запроса -Ваш субграф успешно перенесен в Arbitrum! Для запроса субграфа новый URL будет следующим: +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Обратите внимание, что идентификатор субграфа в Arbitrum будет отличаться от того, который был у Вас в основной сети, но Вы всегда можете найти его в Explorer или Studio. Как упоминалось выше (см. «Понимание того, что происходит с сигналом, Вашим субграфом L1 и URL-адресами запроса»), старый URL-адрес L1 будет поддерживаться в течение некоторого времени, но Вы должны переключить свои запросы на новый адрес, как только субграф будет синхронизирован в L2. +Note that the Subgraph ID on Arbitrum will be different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. 
## Как перенести свой субграф в Arbitrum (L2) -## Понимание того, что происходит с курированием передачи субграфов на L2 +## Understanding what happens to curation on Subgraph transfers to L2 -Когда владелец субграфа передает субграф в Arbitrum, весь сигнал субграфа одновременно конвертируется в GRT. Это же относится и к "автоматически мигрировавшему" сигналу, т.е. сигналу, который не относится к конкретной версии или развертыванию субграфа, но который следует за последней версией субграфа. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Это преобразование сигнала в GRT аналогично тому, что произошло бы, если бы владелец субграфа объявил его устаревшим на L1. Когда субграф устаревает или переносится, в то же время «сжигается» весь сигнал курирования (с использованием кривой связывания курирования), а полученный GRT сохраняется в смарт-контракте GNS (то есть контракте, который обрабатывает обновления субграфа и сигнал автоматической миграции). Таким образом, каждый куратор этого субграфа имеет право на GRT, пропорционально количеству акций, которыми он владел в этом субграфе. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Часть этих GRT, принадлежащая владельцу субграфа, отправляется на L2 вместе с субграфом. 
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -На этом этапе курируемый GRT больше не будет начислять комиссии за запросы, поэтому кураторы могут выбрать: вывести свой GRT или перевести его на тот же субграф на L2, где его можно использовать для создания нового сигнала курирования. Спешить с этим не стоит, так как GRT может храниться неограниченное время, и каждый получит сумму пропорционально своим долям, независимо от того, когда это будет сделано. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Выбор Вашего кошелька L2 @@ -130,9 +130,9 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools Прежде чем начать перенос, Вы должны решить, какой адрес будет владеть курированием на L2 (см. "Выбор кошелька L2" выше), также рекомендуется уже иметь на Arbitrum некоторое количество ETH для газа на случай, если Вам потребуется повторно выполнить отправку сообщения на L2. Вы можете купить ETH на любых биржах и вывести его напрямую на Arbitrum, или использовать мост Arbitrum для отправки ETH из кошелька основной сети на L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) — поскольку комиссии за газ на Arbitrum очень низкие, Вам понадобится небольшая сумма, например, 0.01 ETH, этого, вероятно, будет более чем достаточно. -Если субграф, который Вы курируете, был перенесен на L2, Вы увидите сообщение в Explorer о том, что Вы курируете перенесённый субграф. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. 
-При просмотре страницы субграфа Вы можете выбрать вывод или перенос курирования. Нажатие на кнопку "Перенести сигнал в Arbitrum", откроет инструмент переноса. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Перенос сигнала](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## Снятие Вашего курирования на L1 -Если Вы предпочитаете не отправлять свой GRT на L2 или хотите передать GRT вручную, Вы можете вывести свой курируемый GRT на L1. На баннере на странице субграфа выберите "Вывести сигнал" и подтвердите транзакцию; GRT будет отправлен на Ваш адрес Куратора. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/ru/archived/sunrise.mdx b/website/src/pages/ru/archived/sunrise.mdx index eb18a93c506c..f5a86771b9a1 100644 --- a/website/src/pages/ru/archived/sunrise.mdx +++ b/website/src/pages/ru/archived/sunrise.mdx @@ -1,80 +1,80 @@ --- -title: Post-Sunrise + Upgrading to The Graph Network FAQ -sidebarTitle: Post-Sunrise Upgrade FAQ +title: 'Post-Sunrise + Обновление до The Graph Network: Часто задаваемые вопросы' +sidebarTitle: Часто задаваемые вопросы об обновлении Post-Sunrise --- -> Note: The Sunrise of Decentralized Data ended June 12th, 2024. +> Примечание: Эра децентрализованных данных Sunrise завершилась 12 июня 2024 года. -## What was the Sunrise of Decentralized Data? +## Что представляла собой эра децентрализованных данных Sunrise? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. 
+Эра децентрализованных данных Sunrise была инициативой, возглавляемой Edge & Node. Она позволила разработчикам субграфов беспрепятственно перейти на децентрализованную сеть The Graph. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +Этот план основывался на предыдущих разработках экосистемы The Graph, включая обновление индексатора для обслуживания запросов на недавно опубликованные субграфы. -### What happened to the hosted service? +### Что произошло с хостинг-сервисом? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +Конечные точки запросов на хостинг-сервис больше недоступны, и разработчики не могут развертывать новые субграфы на хостинг-сервисе. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +В процессе обновления владельцы субграфов на хостинг-сервисе могли обновить свои субграфы до сети The Graph. Кроме того, разработчики могли заявить о своих автоматически обновлённых субграфах. -### Was Subgraph Studio impacted by this upgrade? +### Было ли Subgraph Studio затронуто этим обновлением? -No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. +Нет, Subgraph Studio не было затронуто эрой Sunrise. Субграфы стали немедленно доступны для запросов благодаря обновлённому Индексатору, который использует ту же инфраструктуру, что и хостинг-сервис. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Почему субграфы были опубликованы на Arbitrum? Это означает, что они начали индексировать другую сеть? 
-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +Сеть The Graph изначально была развернута на основной сети Ethereum, но позже была перенесена на Arbitrum One, чтобы снизить затраты на газ для всех пользователей. В результате все новые субграфы публикуются в сети The Graph на Arbitrum, чтобы Индексаторы могли их поддерживать. Arbitrum — это сеть, в которой публикуются субграфы, но субграфы могут индексировать любую из [поддерживаемых сетей](/supported-networks/) -## About the Upgrade Indexer +## Об обновлённом Индексаторе -> The upgrade Indexer is currently active. +> Обновлённый Индексатор в настоящее время активен. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +Обновлённый Индексатор был внедрён для улучшения процесса обновления субграфов с хостинг-сервиса на сеть The Graph и поддержки новых версий существующих субграфов, которые ещё не были проиндексированы. -### What does the upgrade Indexer do? +### Что делает обновлённый Индексатор? 
+- Он помогает запустить блокчейны, которые ещё не получили вознаграждения за индексирование в сети The Graph, и гарантирует, что Индексатор будет доступен для обслуживания запросов как можно быстрее после публикации субграфа. +- Он поддерживает блокчейны, которые ранее были доступны только на хостинг-сервисе. Полный список поддерживаемых блокчейнов можно найти [здесь](/supported-networks/). +- Индексаторы, которые используют обновлённый Индексатор, делают это как общественную услугу, чтобы поддерживать новые субграфы и дополнительные блокчейны, которые ещё не получают вознаграждения за индексирование, до того как их одобрит Совет The Graph. -### Why is Edge & Node running the upgrade Indexer? +### Почему Edge & Node запускает обновленный Индексатор? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node исторически поддерживали хостинг-сервис, и, как результат, уже имеют синхронизированные данные для субграфов, размещённых на хостинг-сервисе. -### What does the upgrade indexer mean for existing Indexers? +### Что означает обновлённый индексатор для существующих Индексаторов? -Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. +Блокчейны, которые ранее поддерживались только на хостинг-сервисе, стали доступны разработчикам в сети The Graph без вознаграждений за индексирование на первом этапе. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +Однако это действие открыло возможность получения сборов за запросы для любого заинтересованного Индексатора и увеличило количество субграфов, опубликованных в сети The Graph. 
В результате Индексаторы получили больше возможностей для индексирования и обслуживания этих субграфов в обмен на сборы за запросы, даже до того, как вознаграждения за индексирование будут активированы для чейна. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +Обновлённый Индексатор также предоставляет сообществу Индексаторов информацию о потенциальном спросе на субграфы и новые чейны в сети The Graph. -### What does this mean for Delegators? +### Что это означает для Делегаторов? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +Обновлённый Индексатор предоставляет Делегаторам широкие возможности. Поскольку он позволил большему числу субграфов перейти с хостинг-сервиса в сеть The Graph, Делегаторы выигрывают от увеличенной активности в сети. -### Did the upgrade Indexer compete with existing Indexers for rewards? +### Конкурировал ли обновлённый Индексатор с существующими Индексаторами за вознаграждения? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +Нет, обновлённый Индексатор выделяет лишь минимальную аллокацию на каждый субграф и не собирает вознаграждения за индексирование. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +Он работает на основе принципа «по мере необходимости», выполняя роль резервного решения до тех пор, пока как минимум три других Индексатора в сети не обеспечат достаточное качество обслуживания для соответствующих чейнов и субграфов. -### How does this affect subgraph developers? +### Как это влияет на разработчиков субграфов?
-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Разработчики субграфов могут запрашивать свои субграфы в сети The Graph практически сразу после их обновления с хостинг-сервиса или публикации через [Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), так как время на индексирование не требуется. Обратите внимание, что [создание субграфа](/developing/creating-a-subgraph/) не было затронуто этим обновлением. -### How does the upgrade Indexer benefit data consumers? +### Как обновлённый Индексатор приносит пользу потребителям данных? -The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. +Обновлённый Индексатор позволяет использовать в сети чейны, которые ранее поддерживались только на хостинг-сервисе. Таким образом, он расширяет объём и доступность данных, доступных для запросов в сети. -### How does the upgrade Indexer price queries? +### Как обновлённый Индексатор оценивает запросы? -The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. +Обновлённый Индексатор оценивает запросы по рыночной цене, чтобы избежать влияния на рынок комиссий за запросы. -### When will the upgrade Indexer stop supporting a subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
+Обновлённый Индексатор поддерживает субграф, пока как минимум три других Индексатора не начнут успешно и стабильно обслуживать запросы к нему. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Кроме того, обновлённый Индексатор прекращает поддержку субграфа, если к нему не поступали запросы в последние 30 дней. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Другие Индексаторы получают стимулы для поддержки субграфов с постоянным объёмом запросов. Объём запросов к обновлённому Индексатору должен стремиться к нулю, так как у него небольшой размер аллокации, и другие Индексаторы должны обслуживать запросы раньше него. diff --git a/website/src/pages/ru/contracts.json b/website/src/pages/ru/contracts.json index 134799f3dd0f..17850d7d1b2f 100644 --- a/website/src/pages/ru/contracts.json +++ b/website/src/pages/ru/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "Контракт", "address": "Address" } diff --git a/website/src/pages/ru/contracts.mdx b/website/src/pages/ru/contracts.mdx index 9226c911fcd8..8c6fbb464abe 100644 --- a/website/src/pages/ru/contracts.mdx +++ b/website/src/pages/ru/contracts.mdx @@ -14,7 +14,7 @@ This is the principal deployment of The Graph Network. ## Mainnet -This was the original deployment of The Graph Network. [Learn more](/archived/arbitrum/arbitrum-faq/) about The Graph's scaling with Arbitrum. +Это было первоначальное развертывание The Graph Network. [Узнайте больше](/archived/arbitrum/arbitrum-faq/) о масштабировании The Graph с Arbitrum.
diff --git a/website/src/pages/ru/global.json b/website/src/pages/ru/global.json index 0b02b6ff1575..70dd9a3b9dfe 100644 --- a/website/src/pages/ru/global.json +++ b/website/src/pages/ru/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "Главное меню", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "Показать панель навигации", + "hide": "Скрыть панель навигации", "subgraphs": "Субграфы", - "substreams": "Substreams", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "substreams": "Субпотоки", + "sps": "Субграфы, работающие на основе субпотоков", + "tokenApi": "Token API", + "indexing": "Индексирование", + "resources": "Ресурсы", + "archived": "Архивировано" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "Последнее обновление", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "Время на прочтение", + "minutes": "минуты" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "Предыдущая страница", + "next": "Следующая страница", + "edit": "Редактировать на GitHub", + "onThisPage": "На этой странице", + "tableOfContents": "Содержание", + "linkToThisSection": "Ссылка на этот раздел" }, "content": { - "note": "Note", - "video": "Video" + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, + "video": "Видео" + }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Параметры запроса", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Описание", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": 
"Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Статус", + "description": "Описание", + "liveResponse": "Live Response", + "example": "Пример" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "Ой! Эта страница была утеряна...", + "subtitle": "Проверьте, верно ли указан адрес или перейдите на наш сайт по ссылке ниже.", + "back": "На главную страницу" } } diff --git a/website/src/pages/ru/index.json b/website/src/pages/ru/index.json index e43fff6d2f1e..28d369ba865d 100644 --- a/website/src/pages/ru/index.json +++ b/website/src/pages/ru/index.json @@ -1,52 +1,52 @@ { "title": "Главная страница", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "Документация The Graph", + "description": "Запустите свой проект web3 с помощью инструментов для извлечения, преобразования и загрузки данных блокчейна.", + "cta1": "Как работает The Graph", + "cta2": "Создайте свой первый субграф" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "Выберите решение, которое соответствует Вашим потребностям, 
— взаимодействуйте с данными блокчейна так, как Вам удобно.", "subgraphs": { "title": "Субграфы", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "Извлечение, обработка и запрос данных блокчейна с помощью открытых API.", + "cta": "Разработка субграфа" }, "substreams": { - "title": "Substreams", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "title": "Субпотоки", + "description": "Получение и потребление данных блокчейна с параллельным исполнением.", + "cta": "Разработка с использованием Субпотоков" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "Субграфы, работающие на основе субпотоков", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", + "cta": "Настройка субграфа, работающего на основе Субпотоков" }, "graphNode": { "title": "Graph Node", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "description": "Индексируйте данные блокчейна и обслуживайте их через запросы GraphQL.", + "cta": "Настройка локальной Graph Node" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "Извлекайте данные блокчейна в плоские файлы, чтобы улучшить время синхронизации и возможности потоковой передачи.", + "cta": "Начало работы с Firehose" } }, "supportedNetworks": { "title": "Поддерживаемые сети", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "Тип", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Документы",
"shortName": "Short Name", - "guides": "Guides", + "guides": "Гайды", "search": "Search networks", "showTestnets": "Show Testnets", "loading": "Loading...", @@ -54,9 +54,9 @@ "infoText": "Boost your developer experience by enabling The Graph's indexing network.", "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph поддерживает {0}. Для добавления новой сети {1}", + "networks": "сети", + "completeThisForm": "заполнить эту форму" }, "emptySearch": { "title": "No networks found", @@ -65,10 +65,10 @@ "showTestnets": "Show testnets" }, "tableHeaders": { - "name": "Name", - "id": "ID", - "subgraphs": "Subgraphs", - "substreams": "Substreams", + "name": "Имя", + "id": "Идентификатор", + "subgraphs": "Субграфы", + "substreams": "Субпотоки", "firehose": "Firehose", "tokenapi": "Token API" } @@ -80,7 +80,7 @@ "description": "Kickstart your journey into subgraph development." }, "substreams": { - "title": "Substreams", + "title": "Субпотоки", "description": "Stream high-speed data for real-time indexing." }, "timeseries": { @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "Выставление счетов", "description": "Optimize costs and manage billing efficiently." } }, @@ -123,53 +123,53 @@ "title": "Гайды", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "Поиск данных в Graph Explorer", + "description": "Использование сотен публичных субграфов для существующих данных блокчейна." }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." 
+ "title": "Публикация субграфа", + "description": "Добавьте свой субграф в децентрализованную сеть." }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "Публикация Субпотоков", + "description": "Запустите свой пакет Субпотоков в Реестр Субпотоков." }, "queryingBestPractices": { - "title": "Querying Best Practices", - "description": "Optimize your subgraph queries for faster, better results." + "title": "Лучшие практики запросов", + "description": "Оптимизируйте свои запросы субграфов для получения более быстрых и лучших результатов." }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "Оптимизация тайм-серий и агрегаций", + "description": "Оптимизируйте свой субграф для большей эффективности." }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "Управление API-ключами", + "description": "Легко создавайте, управляйте и защищайте ключи API для своих субграфов." }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "Перенос в The Graph", + "description": "Легко обновляйте свой субграф с любой платформы." } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "Видеоуроки", + "watchOnYouTube": "Смотреть на YouTube", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "Объяснение The Graph за 1 минуту", + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." 
}, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "title": "Что такое Делегирование?", + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Как индексировать Solana с помощью субграфа, работающего на базе Субпотоков", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { - "reading": "Reading time", - "duration": "Duration", + "reading": "Время на прочтение", + "duration": "Продолжительность", "minutes": "min" } } diff --git a/website/src/pages/ru/indexing/_meta-titles.json b/website/src/pages/ru/indexing/_meta-titles.json index 42f4de188fd4..b204530e4826 100644 --- a/website/src/pages/ru/indexing/_meta-titles.json +++ b/website/src/pages/ru/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Инструментарий Индексатора" } diff --git a/website/src/pages/ru/indexing/chain-integration-overview.mdx b/website/src/pages/ru/indexing/chain-integration-overview.mdx index 3ee1ef3bc4bc..613d4b5151c4 100644 --- a/website/src/pages/ru/indexing/chain-integration-overview.mdx +++ b/website/src/pages/ru/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? 
-This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. Сколько времени займет процесс достижения полной поддержки протокола? diff --git a/website/src/pages/ru/indexing/new-chain-integration.mdx b/website/src/pages/ru/indexing/new-chain-integration.mdx index 427169610d41..8b23af33ebd1 100644 --- a/website/src/pages/ru/indexing/new-chain-integration.mdx +++ b/website/src/pages/ru/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: Интеграция новых чейнов --- -Чейны могут обеспечить поддержку субграфов в своей экосистеме, начав новую интеграцию `graph-node`. Субграфы — это мощный инструмент индексирования, открывающий перед разработчиками целый мир возможностей. Graph Node уже индексирует данные из перечисленных здесь чейнов. 
Если Вы заинтересованы в новой интеграции, для этого существуют 2 стратегии: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: все решения по интеграции Firehose включают Substreams, крупномасштабный механизм потоковой передачи на базе Firehose со встроенной поддержкой `graph-node`, позволяющий выполнять распараллеленные преобразования. @@ -47,15 +47,15 @@ title: Интеграция новых чейнов ## Рекомендации по EVM — разница между JSON-RPC & Firehose -Хотя как JSON-RPC, так и Firehose оба подходят для субграфов, Firehose всегда востребован разработчиками, желающими создавать с помощью [Substreams](https://substreams.streamingfast.io). Поддержка Substreams позволяет разработчикам создавать [субграфы на основе субпотоков](/subgraphs/cookbook/substreams-powered-subgraphs/) для нового чейна и потенциально может повысить производительность Ваших субграфов. Кроме того, Firehose — в качестве замены уровня извлечения JSON-RPC `graph-node` — сокращает на 90 % количество вызовов RPC, необходимых для общего индексирования. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. 
-- Все эти вызовы `getLogs` и циклические передачи заменяются единым потоком, поступающим в сердце `graph-node`; единой блочной моделью для всех обрабатываемых ею субграфов. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> ПРИМЕЧАНИЕ: Интеграция на основе Firehose для чейнов EVM по-прежнему будет требовать от Индексаторов запуска ноды архива RPC чейна для правильного индексирования субрафов. Это происходит из-за неспособности Firehose предоставить состояние смарт-контракта, обычно доступное с помощью метода RPC `eth_call`. (Стоит напомнить, что `eth_calls` не является хорошей практикой для разработчиков) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Конфигурация Graph Node -Настроить Graph Node так же просто, как подготовить локальную среду. После того, как Ваша локальная среда настроена, Вы можете протестировать интеграцию, локально развернув субграф. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Клонировать Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ title: Интеграция новых чейнов ## Субграфы, работающие на основе субпотоков (Substreams) -Для интеграции Firehose/Substreams под управлением StreamingFast включена базовая поддержка фундаментальных модулей Substreams (например, декодированные транзакции, логи и события смарт-контрактов) и инструментов генерации кодов Substreams. Эти инструменты позволяют включать [субграфы на базе субпотоков](/substreams/sps/introduction/). 
Следуйте [Практическому руководству] (https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) и запустите `substreams codegen subgraph`, чтобы самостоятельно испробовать инструменты кодирования. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/ru/indexing/overview.mdx b/website/src/pages/ru/indexing/overview.mdx index a1a21b206718..30c9ec939dcd 100644 --- a/website/src/pages/ru/indexing/overview.mdx +++ b/website/src/pages/ru/indexing/overview.mdx @@ -5,43 +5,43 @@ sidebarTitle: Обзор Индексаторы — это операторы нод в сети The Graph, которые стейкают токены Graph (GRT) для предоставления услуг индексирования и обработки запросов. Индексаторы получают оплату за запросы и вознаграждение за свои услуги индексирования. Они также получают комиссию за запросы, которая возвращаются в соответствии с экспоненциальной функцией возврата. -Токены GRT, которые застейканы в протоколе, подлежат периоду "оттаивания" и могут быть срезаны, если индексаторы являются вредоносными и передают неверные данные приложениям или если они некорректно осуществляют индексирование. Индексаторы также получают вознаграждение за делегированный стейк от делегаторов, внося свой вклад в работу сети. +GRT, застейканные в протоколе, замораживаются на определённый период и могут быть уменьшены, если Индексаторы действуют недобросовестно и предоставляют приложениям неверные данные или неправильно выполняют индексирование. 
Кроме того, Индексаторы получают вознаграждения за стейк, который им передают Делегаторы, помогая тем самым поддерживать работу сети. -Индексаторы выбирают подграфы для индексирования на основе сигналов от кураторов, в которых кураторы стейкают токены GRT, чтобы обозначить, какие подграфы являются высококачественными и заслуживают приоритетного внимания. Потребители (к примеру, приложения) также могут задавать параметры, по которым индексаторы обрабатывают запросы к их подграфам, и устанавливать предпочтения по цене за запрос. +Индексаторы выбирают субграфы для индексирования на основе сигнала курирования субграфа, где Кураторы стейкают GRT, чтобы указать, какие субграфы являются качественными и должны быть в приоритете. Потребители (например, приложения) также могут задавать параметры для выбора Индексаторов, обрабатывающих запросы к их субграфам, и устанавливать предпочтения по стоимости запросов. -## FAQ +## Часто задаваемые вопросы -### What is the minimum stake required to be an Indexer on the network? +### Какова минимальная величина стейка, требуемая для того, чтобы быть Индексатором в сети? -The minimum stake for an Indexer is currently set to 100K GRT. +Минимальный стейк для Индексатора в настоящее время составляет 100 000 GRT. -### What are the revenue streams for an Indexer? +### Какие источники дохода у Индексатора? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**Возмещение комиссий за запросы** – выплаты за обработку запросов в сети. Эти платежи проходят через каналы состояния (state channels) между Индексатором и шлюзом. Каждый запрос от шлюза содержит оплату, а соответствующий ответ — доказательство достоверности результата запроса.
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Награды за индексирование** – формируются за счет ежегодной инфляции протокола в размере 3% и распределяются между Индексаторами, которые индексируют развернутые субграфы для сети. -### How are indexing rewards distributed? +### Как распределяются награды за индексирование? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Награды за индексирование поступают из инфляции протокола, установленной на уровне 3% в год. Они распределяются между субграфами пропорционально общему сигналу кураторства на каждом из них, а затем пропорционально между Индексаторами в зависимости от их застейканного объема на данном субграфе. **Чтобы получить награду, распределение должно быть закрыто с действительным доказательством индексирования (POI), соответствующим стандартам, установленным арбитражной хартией.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. 
+Сообщество создало множество инструментов для расчета наград; их собрание можно найти в [коллекции Гайдов Сообщества](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). Также актуальный список инструментов доступен в каналах #Delegators и #Indexers на [сервере Discord](https://discord.gg/graphprotocol). Здесь мы приводим ссылку на [рекомендованный оптимизатор распределения](https://github.com/graphprotocol/allocation-optimizer), интегрированный с программным стеком Индексатора. -### What is a proof of indexing (POI)? +### Что такое доказательство индексирования (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POI (доказательство индексирования) используется в сети для подтверждения того, что Индексатор действительно индексирует назначенные ему субграфы. При закрытии распределения необходимо предоставить POI для первого блока текущей эпохи, чтобы оно было квалифицировано для получения наград за индексирование. POI для блока представляет собой хеш всех транзакций хранилища объектов для конкретного развертывания субграфа вплоть до этого блока включительно. -### When are indexing rewards distributed? +### Когда распределяются награды за индексирование? -Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). 
+Аллокации непрерывно накапливают награды, пока они активны, в пределах 28 эпох. Награды собираются Индексаторами и распределяются при закрытии их аллокаций. Это может происходить вручную, когда Индексатор сам решает их закрыть, либо по истечении 28 эпох аллокацию может закрыть Делегатор, но в этом случае награды не выплачиваются. Максимальный срок жизни аллокации — 28 эпох (в настоящее время одна эпоха длится примерно 24 часа). -### Can pending indexing rewards be monitored? +### Можно ли отслеживать ожидаемые награды за индексирование? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +Контракт RewardsManager имеет функцию только для чтения [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316), которая позволяет проверить ожидаемые награды для конкретной аллокации. -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +Многие созданные сообществом панели отображают ожидаемые награды, и их можно легко проверить вручную, следуя этим шагам: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1.
Выполните запрос к [основному субграфу](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one), чтобы получить идентификаторы всех активных аллокаций: ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Используйте Etherscan для вызова `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +- Перейдите на [интерфейс Etherscan к контракту Rewards](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract). +- Чтобы вызвать `getRewards()`: + - Разверните выпадающее меню **9. getRewards**. + - Введите **allocationID** в поле ввода. + - Нажмите кнопку **Query**. -### What are disputes and where can I view them? +### Что такое споры и где их можно посмотреть? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +Запросы и аллокации Индексатора могут быть оспорены в The Graph в течение периода спора. Период спора варьируется в зависимости от типа спора. Запросы/аттестации имеют окно спора в 7 эпох, тогда как аллокации – 56 эпох. После истечения этих периодов споры против аллокаций или запросов больше не могут быть открыты. 
Когда спор открывается, Fishermen (участники сети, открывающие споры) должны внести депозит минимум 10 000 GRT, который будет заморожен до завершения спора и вынесения решения. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +Споры могут иметь **три** возможных исхода, как и депозит Fishermen. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- Если спор отклонен, GRT, внесенные Fishermen в качестве депозита, будут сожжены, а оспариваемый Индексатор не понесет штраф. +- Если спор завершится вничью, депозит Fishermen будет возвращен, а оспариваемый Индексатор не понесет штраф. +- Если спор принят, депозит Fishermen будет возвращен, оспариваемый Индексатор понесет штраф, а Fishermen получит 50% от списанных GRT. -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +Споры можно просматривать в пользовательском интерфейсе на странице профиля Индексатора во вкладке `Disputes`. -### What are query fee rebates and when are they distributed? +### Что такое возврат комиссии за запросы и когда он распределяется? -Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. 
+Комиссии за запросы собираются шлюзом и распределяются индексаторам в соответствии с экспоненциальной функцией возврата (см. GIP [здесь](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). Экспоненциальная функция возврата предлагается как способ гарантировать, что индексаторы добиваются наилучшего результата, добросовестно обслуживая запросы. Она работает, стимулируя Индексаторов выделять крупные объемы стейка (который может быть урезан в случае ошибки при обработке запроса) относительно суммы комиссий за запросы, которые они могут получить.
-Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function.
+Как только аллокация закрывается, Индексатор может потребовать возврат комиссии. После востребования комиссии за запросы распределяются между Индексатором и его Делегаторами в соответствии с долей комиссии за запросы (query fee cut) и экспоненциальной функцией возврата.
-### What is query fee cut and indexing reward cut?
+### Что такое доля комиссии за запросы и доля вознаграждения за индексирование?
-The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters.
+`queryFeeCut` и `indexingRewardCut` — это параметры делегирования, которые Индексатор может настроить вместе с `cooldownBlocks`, чтобы контролировать распределение GRT между собой и Делегаторами. Инструкции по настройке параметров делегирования можно найти в последних шагах раздела [Стейкинг в протоколе](/indexing/overview/#stake-in-the-protocol).
-- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** – процент возврата комиссий за запросы, который будет распределяться в пользу Индексатора. Если установлено значение 95%, Индексатор получит 95% заработанных комиссий за запросы при закрытии аллокации, а оставшиеся 5% пойдут Делегаторам. -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. +- **indexingRewardCut** – процент вознаграждений за индексирование, который будет распределяться в пользу Индексатора. Если установлено значение 95%, Индексатор получит 95% вознаграждений за индексирование при закрытии аллокации, а оставшиеся 5% будут распределены между Делегаторами. -### How do Indexers know which subgraphs to index? +### Как Индексаторы узнают, какие Субграфы индексировать? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Индексаторы могут отличаться, применяя продвинутые методы для принятия решений об индексировании Субграфов, но в общем случае они оценивают Субграфы на основе нескольких ключевых метрик: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. 
+- **Сигнал кураторства** — пропорция сигнала кураторства сети, применяемого к конкретному субграфу, является хорошим индикатором интереса к этому субграфу, особенно в фазе начальной загрузки, когда объем запросов постепенно увеличивается. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Собранные комиссии за запросы** – исторические данные о сумме комиссий за запросы, собранных для конкретного Субграфа, являются хорошим индикатором будущего спроса. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Объем стейка** – отслеживание поведения других Индексаторов или анализ доли общего стейка, выделенного на конкретные Субграфы, позволяет Индексатору оценивать предложение для запросов к Субграфам, что помогает выявлять Субграфы, которым сеть доверяет, или те, которые нуждаются в большем количестве ресурсов. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Субграфы без наград за индексирование** – некоторые Субграфы не приносят награды за индексирование, главным образом потому, что они используют неподдерживаемые функции, такие как IPFS, или делают запросы к другой сети за пределами основной сети. В интерфейсе будет отображаться сообщение, если Субграф не генерирует награды за индексирование. -### What are the hardware requirements? +### Каковы требования к аппаратному обеспечению? 
-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Низкие** – достаточно для начала индексирования нескольких субграфов, но, вероятно, потребуется расширение. +- **Стандартные** – настройка по умолчанию, используется в примерах манифестов развертывания k8s/terraform. +- **Средние** – производительный Индексатор, поддерживающий 100 субграфов и 200–500 запросов в секунду. +- **Высокие** – готов индексировать все используемые субграфы и обрабатывать соответствующий трафик запросов. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| Настройка | Postgres
(ЦП) | Postgres
(память в ГБ) | Postgres
(диск в ТБ) | VMs
(ЦП) | VMs
(память в ГБ) |
| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Низкая | 4 | 8 | 1 | 4 | 16 |
+| Стандартная | 8 | 30 | 1 | 12 | 48 |
+| Средняя | 16 | 64 | 2 | 32 | 64 |
+| Высокая | 72 | 468 | 3.5 | 48 | 184 |
-### What are some basic security precautions an Indexer should take?
+### Какие основные меры безопасности следует предпринять Индексатору?
-- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions.
+- **Кошелек оператора** – настройка кошелька оператора является важной мерой безопасности, поскольку она позволяет Индексатору поддерживать разделение между своими ключами, которые контролируют величину стейка, и теми, которые контролируют ежедневные операции. Инструкции см. в разделе [Стейкинг в протоколе](/indexing/overview/#stake-in-the-protocol).
-- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.
+- **Firewall** – только сервис Индексатора должен быть доступен публично. Особое внимание следует уделить защите административных портов и доступа к базе данных: JSON-RPC-конечная точка Graph Node (порт по умолчанию: **8030**), конечная точка API управления Индексатором (порт по умолчанию: **18000**) и конечная точка базы данных Postgres (порт по умолчанию: **5432**) **не должны** быть открыты.
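Перечисленные выше требования можно свести к простому набору правил межсетевого экрана. Ниже приведён набросок; предполагается хост Ubuntu с установленным `ufw`, а номера портов взяты из значений по умолчанию, указанных в этом разделе (конкретная конфигурация зависит от Вашей инфраструктуры).

```shell
# Набросок правил firewall (предположение: Ubuntu с ufw; порты — значения по умолчанию).
# Публично открыт только порт сервиса Индексатора; административные порты закрыты.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 7600/tcp   # Сервис Индексатора (публичные запросы)
sudo ufw deny 8030/tcp    # JSON-RPC-конечная точка Graph Node
sudo ufw deny 18000/tcp   # API управления Индексатором
sudo ufw deny 5432/tcp    # База данных Postgres
sudo ufw enable
```

Явные правила `deny` при политике `deny incoming` избыточны, но документируют намерение и защищают от случайного открытия этих портов позднее.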
-## Infrastructure +## Инфраструктура -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +В центре инфраструктуры Индексатора находится Graph Node, который отслеживает индексируемые сети, извлекает и загружает данные в соответствии с определением Субграфа и предоставляет их в виде [GraphQL API](/about/#how-the-graph-works). Graph Node должна быть подключена к конечной точке, предоставляющей данные из каждой индексируемой сети, к ноде IPFS для получения данных, к базе данных PostgreSQL для хранения информации, а также к компонентам Индексатора, которые обеспечивают его взаимодействие с сетью. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **База данных PostgreSQL** – это основное хранилище для Graph Node, где хранятся данные Субграфа. Сервис и агент Индексатора также используют эту базу данных для хранения данных каналов состояния, моделей стоимости, правил индексирования и действий по распределению. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Конечная точка данных** – для сетей, совместимых с EVM, Graph Node должна быть подключена к конечной точке, предоставляющей JSON-RPC API, совместимый с EVM. Это может быть как один клиент, так и более сложная конфигурация с балансировкой нагрузки между несколькими клиентами. Важно учитывать, что некоторые Субграфы требуют определённых возможностей клиента, таких как архивный режим и/или API трассировки Parity. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS-нода (версия ниже 5)** – метаданные развертывания Субграфа хранятся в сети IPFS. Graph Node в основном обращается к IPFS-ноде во время развертывания Субграфа, чтобы получить манифест Субграфа и все связанные файлы. Индексаторы сети не обязаны размещать свою собственную IPFS-ноду, так как для сети уже развернута IPFS-нода по адресу: [https://ipfs.network.thegraph.com](https://ipfs.network.thegraph.com). -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Сервис Индексатора** – обрабатывает все необходимые внешние коммуникации с сетью. Делится моделями стоимости и статусами индексирования, передаёт запросы от шлюзов в Graph Node и управляет платежами за запросы через каналы состояния со шлюзом. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Агент Индексатора** – обеспечивает взаимодействие Индексатора в блокчейне, включая регистрацию в сети, управление развертыванием Субграфов в его Graph Node и управление аллокациями.
-- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
+- **Сервер метрик Prometheus** – Graph Node и компоненты Индексатора записывают свои метрики на сервер метрик.
-Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes.
+Примечание: для поддержки гибкого масштабирования рекомендуется разделять обработку запросов и индексирование между разными наборами нод: нодами запросов и нодами индексирования.
-### Ports overview
+### Обзор портов
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below.
+> **Важно**: будьте осторожны при открытии портов в публичный доступ – **административные порты** должны быть закрыты. Это касается JSON-RPC Graph Node и управляющих конечных точек Индексатора, описанных ниже.
#### Graph Node
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| Порт | Назначение | Маршруты | Аргумент CLI | Переменная среды |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | GraphQL HTTP-сервер
(для запросов к Субграфу) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(для подписок на Субграф) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(для управления развертываниями) | / | \--admin-port | - | +| 8030 | API статуса индексирования Субграфа | /graphql | \--index-node-port | - | +| 8040 | Метрики Prometheus | /metrics | \--metrics-port | - | -#### Indexer Service +#### Сервис Индексатора -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Порт | Назначение | Маршруты | Аргумент CLI | Переменная среды | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| 7600 | GraphQL HTTP-сервер
(для платных запросов к Субграфу) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Метрики Prometheus | /metrics | \--metrics-port | - | -#### Indexer Agent +#### Агент Индексатора -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Порт | Назначение | Маршруты | Аргумент CLI | Переменная среды | +| ---- | --------------------------- | -------- | -------------------------- | --------------------------------------- | +| 8000 | API управления Индексатором | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Настройка серверной инфраструктуры с использованием Terraform в Google Cloud -> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. +> Примечание: Индексаторы могут также использовать AWS, Microsoft Azure или Alibaba. -#### Install prerequisites +#### Установка необходимых компонентов - Google Cloud SDK -- Kubectl command line tool +- Инструмент командной строки Kubectl - Terraform -#### Create a Google Cloud Project +#### Создание проекта в Google Cloud -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- Клонируйте или перейдите в [репозиторий Индексатора](https://github.com/graphprotocol/indexer). -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- Перейдите в каталог `./terraform`, именно здесь должны быть выполнены все команды. ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Аутентифицируйтесь в Google Cloud и создайте новый проект. 
```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Используйте страницу выставления счетов в Google Cloud Console, чтобы включить эту функцию для нового проекта. -- Create a Google Cloud configuration. +- Создайте конфигурацию Google Cloud. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Включите необходимые API Google Cloud. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Создайте сервисный аккаунт. ```sh svc_name= @@ -225,7 +225,7 @@ gcloud iam service-accounts create $svc_name \ --description="Service account for Terraform" \ --display-name="$svc_name" gcloud iam service-accounts list -# Get the email of the service account from the list +# Получить email учетной записи сервиса из списка svc=$(gcloud iam service-accounts list --format='get(email)' --filter="displayName=$svc_name") gcloud iam service-accounts keys create .gcloud-credentials.json \ @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Включите пиринг между базой данных и кластером Kubernetes, который будет создан на следующем шаге. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- Создайте минимальный файл конфигурации Terraform (обновите при необходимости). 
```sh indexer= @@ -260,24 +260,24 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### Используйте Terraform для создания инфраструктуры -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +Прежде чем выполнять какие-либо команды, ознакомьтесь с файлом [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) и создайте файл `terraform.tfvars` в этом каталоге (или измените тот, который мы создали на предыдущем шаге). Для каждой переменной, значение которой вы хотите изменить по умолчанию или которую необходимо настроить, введите соответствующую настройку в `terraform.tfvars`. -- Run the following commands to create the infrastructure. +- Выполните следующие команды для создания инфраструктуры. ```sh -# Install required plugins +# Установить необходимые плагины terraform init -# View plan for resources to be created +# Просмотреть план создаваемых ресурсов terraform plan -# Create the resources (expect it to take up to 30 minutes) +# Создать ресурсы (может занять до 30 минут) terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +Скачайте учетные данные для нового кластера в файл `~/.kube/config` и установите его как ваш контекст по умолчанию. 
```sh
gcloud container clusters get-credentials $indexer
@@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer)
```
-#### Creating the Kubernetes components for the Indexer
+#### Создание компонентов Kubernetes для Индексатора
-- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`.
+- Скопируйте директорию `k8s/overlays` в новую директорию `$dir` и измените запись `bases` в файле `$dir/kustomization.yaml`, чтобы она указывала на директорию `k8s/base`.
-- Read through all the files in `$dir` and adjust any values as indicated in the comments.
+- Прочитайте все файлы в директории `$dir` и скорректируйте значения в соответствии с комментариями.
-Deploy all resources with `kubectl apply -k $dir`.
+Разверните все ресурсы с помощью команды `kubectl apply -k $dir`.
### Graph Node
-[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) — это реализация на языке Rust с открытым исходным кодом, которая считывает события из блокчейна Ethereum для детерминированного обновления хранилища данных, доступного для запросов через конечную точку GraphQL. Разработчики используют Субграфы для определения своей схемы и набора мэппингов, чтобы преобразовать информацию, полученную из блокчейна, а сама Graph Node синхронизирует весь блокчейн, отслеживает новые блоки и предоставляет данные через конечную точку GraphQL.
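У запущенной Graph Node ход синхронизации можно проверить через API статуса индексирования (порт 8030 по умолчанию, см. таблицу портов выше). Набросок запроса; предполагается, что нода запущена локально со значениями по умолчанию:

```shell
# Набросок: запрос статуса индексирования к локальной Graph Node.
# Предположения: нода работает на localhost, порт статуса по умолчанию 8030.
curl -s http://localhost:8030/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ indexingStatuses { subgraph health synced } }"}'
```

Ответ содержит для каждого развертывания поля `health` и `synced`, по которым удобно строить мониторинг.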
-#### Getting started from source
+#### Начало работы с исходным кодом
-#### Install prerequisites
+#### Установка необходимых компонентов
- **Rust**
@@ -307,15 +307,15 @@
- **IPFS**
-- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed.
+- **Дополнительные требования для пользователей Ubuntu** – для запуска Graph Node на Ubuntu может потребоваться установить несколько дополнительных пакетов.
```sh
sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
```
-#### Setup
+#### Настройка
-1. Start a PostgreSQL database server
+1. Запустите сервер базы данных PostgreSQL
```sh
initdb -D .postgres
pg_ctl -D .postgres -l logfile start
createdb graph-node
```
-2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build`
+2. Клонируйте репозиторий [Graph Node](https://github.com/graphprotocol/graph-node) и соберите исходный код, выполнив команду `cargo build`.
-3. Now that all the dependencies are setup, start the Graph Node:
+3. Теперь, когда все зависимости настроены, запустите Graph Node:
```sh
cargo run -p graph-node --release -- \
@@ -334,132 +334,132 @@ cargo run -p graph-node --release -- \
--ipfs https://ipfs.network.thegraph.com
```
-#### Getting started using Docker
+#### Начало работы с Docker
-#### Prerequisites
+#### Предварительные требования
-- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`.
+- **Нода Ethereum** — по умолчанию конфигурация Docker Compose использует сеть mainnet и адрес [http://host.docker.internal:8545](http://host.docker.internal:8545) для подключения к ноде Ethereum на Вашей хост-машине.
Вы можете заменить это имя сети и URL, обновив файл `docker-compose.yaml`. -#### Setup +#### Настройка -1. Clone Graph Node and navigate to the Docker directory: +1. Клонируйте Graph Node и перейдите в директорию Docker: ```sh git clone https://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml `using the included script: +2. Только для пользователей Linux — используйте IP-адрес хоста вместо `host.docker.internal` в файле `docker-compose.yaml`, используя при этом включенный скрипт: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. Запустите локальную Graph Node, которая будет подключаться к Вашей конечной точке Ethereum: ```sh docker-compose up ``` -### Indexer components +### Компоненты Индексатора -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: +Для успешного участия в сети требуется почти постоянный мониторинг и взаимодействие, поэтому мы разработали набор приложений на TypeScript для упрощения участия Индексаторов в сети. Существует три компонента для Индексаторов: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Агент Индексатора** — агент мониторит сеть и инфраструктуру Индексатора, управляет тем, какие развертывания субграфов индексируются и распределяются по чейну, а также сколько ресурсов выделяется на каждый из них. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Сервис Индексатора** — единственный компонент, который необходимо открывать для внешнего доступа. Сервис передает запросы субграфов в Graph Node, управляет каналами состояния для оплаты запросов, а также делится важной информацией для принятия решений с клиентами, такими как шлюзы.
-- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.
+- **CLI Индексатора** — интерфейс командной строки для управления агентом Индексатора. Он позволяет Индексаторам управлять моделями затрат, ручными аллокациями, очередью действий и правилами индексирования.
-#### Getting started
+#### Начало работы
-The Indexer agent and Indexer service should be co-located with your Graph Node infrastructure. There are many ways to set up virtual execution environments for your Indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://discord.gg/graphprotocol)! Remember to [stake in the protocol](/indexing/overview/#stake-in-the-protocol) before starting up your Indexer components!
+Агент Индексатора и сервис Индексатора должны быть размещены рядом с Вашей инфраструктурой Graph Node. Существует множество способов настройки виртуальных сред выполнения для компонентов Индексатора. Здесь мы объясним, как запустить их на физическом сервере, используя NPM-пакеты или исходный код, а также через Kubernetes и Docker на Google Cloud Kubernetes Engine.
Если эти примеры настроек не подходят для Вашей инфраструктуры, скорее всего, найдется руководство от сообщества, на которое можно ориентироваться. Присоединяйтесь к нам в [Discord](https://discord.gg/graphprotocol)! Не забудьте [застейкать GRT в протоколе](/indexing/overview/#stake-in-the-protocol) перед запуском компонентов Индексатора! -#### From NPM packages +#### Из пакетов NPM ```sh npm install -g @graphprotocol/indexer-service npm install -g @graphprotocol/indexer-agent -# Indexer CLI is a plugin for Graph CLI, so both need to be installed: +# CLI Индексатора является плагином для Graph CLI, поэтому необходимо установить оба пакета: npm install -g @graphprotocol/graph-cli npm install -g @graphprotocol/indexer-cli -# Indexer service +# Сервис Индексатора graph-indexer-service start ... -# Indexer agent +# Агент Индексатора graph-indexer-agent start ... -# Indexer CLI -#Forward the port of your agent pod if using Kubernetes +# CLI Индексатора +# Пробросьте порт pod'а агента, если используется Kubernetes kubectl port-forward pod/POD_ID 18000:8000 graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### Из исходного кода ```sh -# From Repo root directory +# Из корневого каталога репозитория yarn -# Indexer Service +# Сервис Индексатора cd packages/indexer-service ./bin/graph-indexer-service start ... -# Indexer agent +# Агент Индексатора cd packages/indexer-agent ./bin/graph-indexer-agent start ... -# Indexer CLI +# CLI Индексатора cd packages/indexer-cli ./bin/graph-indexer-cli indexer connect http://localhost:18000/ ./bin/graph-indexer-cli indexer ...
``` -#### Using docker +#### Использование docker -- Pull images from the registry +- Извлеките образы из реестра ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +Или создайте образы локально из исходного кода ```sh -# Indexer service +# Сервис Индексатора docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-service \ -t indexer-service:latest \ -# Indexer agent +# Агент Индексатора docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-agent \ -t indexer-agent:latest \ ``` -- Run the components +- Запустите компоненты ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). +**ПРИМЕЧАНИЕ**: после запуска контейнеров сервис Индексатора должен быть доступен по адресу [http://localhost:7600](http://localhost:7600), а агент Индексатора должен предоставлять API управления Индексатором по адресу [http://localhost:18000/](http://localhost:18000/). -#### Using K8s and Terraform +#### Использование K8s и Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section +Посмотрите раздел [Настройка серверной инфраструктуры с использованием Terraform в Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) -#### Usage +#### Применение -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). 
+> **ПРИМЕЧАНИЕ**: все переменные конфигурации времени выполнения могут быть применены либо в качестве параметров команды при запуске, либо с использованием переменных среды в формате `COMPONENT_NAME_VARIABLE_NAME` (например, `INDEXER_AGENT_ETHEREUM`). -#### Indexer agent +#### Агент Индексатора ```sh graph-indexer-agent start \ @@ -488,7 +488,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Сервис Индексатора ```sh SERVER_HOST=localhost \ @@ -514,58 +514,58 @@ graph-indexer-service start \ | pino-pretty ``` -#### Indexer CLI +#### CLI Индексатора -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +CLI Индексатора — это плагин для [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli), доступный в терминале через команду `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### Управление Индексатором с помощью CLI Индексатора -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. 
+Предлагаемым инструментом для взаимодействия с **API управления Индексатором** является **CLI Индексатора** — расширение для **Graph CLI**. Агенту Индексатора необходимы входные данные от Индексатора, чтобы автономно взаимодействовать с сетью от его имени. Поведение агента Индексатора определяется режимом **управления распределениями** и **правилами индексирования**. В режиме **автоматического управления** Индексатор может использовать **правила индексирования**, чтобы применить свою стратегию выбора субграфов, которые он будет индексировать и по которым будет обслуживать запросы. Эти правила управляются через GraphQL API, которое предоставляется агентом и называется **API управления Индексатором**. В режиме **ручного управления** Индексатор может создавать действия по распределениям, используя **очередь действий**, и явно утверждать их перед выполнением. В режиме **контроля** **правила индексирования** используются для пополнения **очереди действий**, и для выполнения этих действий также требуется явное одобрение. -#### Usage +#### Применение -The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +**CLI Индексатора** подключается к агенту Индексатора, обычно через проброс портов, поэтому CLI не обязательно должен работать на том же сервере или кластере. Чтобы помочь Вам начать работу и дать некоторый контекст, CLI кратко описан здесь. -- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely.
(Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - подключение к API управления Индексатором. Обычно соединение с сервером устанавливается через проброс портов, так что CLI можно легко использовать удаленно. (Пример: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent. +- `graph indexer rules get [options] [ ...]` - получить одно или несколько правил индексирования, используя `all` в качестве ``, чтобы получить все правила, или `global`, чтобы получить глобальные настройки по умолчанию. Дополнительный аргумент `--merged` можно использовать, чтобы указать, что правила, специфичные для развертывания, будут объединены с глобальным правилом. Именно так они применяются в агенте Индексатора. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - установить одно или несколько правил индексирования. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - запустить индексирование развертывания субграфа, если оно доступно, и установить для него `decisionBasis` в значение `always`, чтобы агент Индексатора всегда выбирал его для индексирования. Если глобальное правило установлено на `always`, то все доступные субграфы в сети будут индексироваться.
-- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - остановить индексирование развертывания и установить для него `decisionBasis` в значение `never`, чтобы агент Индексатора пропускал это развертывание при принятии решения о том, какие развертывания индексировать. -- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — установить `decisionBasis` для развертывания в значение `rules`, чтобы агент Индексатора использовал правила индексирования для принятия решения о том, индексировать ли это развертывание. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` — получить одно или несколько действий, используя `all`, или оставить `action-id` пустым, чтобы получить все действия. Дополнительный аргумент `--status` можно использовать для вывода всех действий с определенным статусом. 
-- `graph indexer action queue allocate ` - Queue allocation action +- `graph indexer action queue allocate ` — добавить действие на распределение в очередь -- `graph indexer action queue reallocate ` - Queue reallocate action +- `graph indexer action queue reallocate ` — добавить действие на перераспределение в очередь -- `graph indexer action queue unallocate ` - Queue unallocate action +- `graph indexer action queue unallocate ` — добавить действие на отмену распределения в очередь -- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator +- `graph indexer actions cancel [ ...]` - отменить все действия в очереди, если идентификатор не указан, в противном случае отменить массив идентификаторов, разделенных пробелом -- `graph indexer actions approve [ ...]` - Approve multiple actions for execution +- `graph indexer actions approve [ ...]` - одобрить несколько действий для выполнения -- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately +- `graph indexer actions execute approve` - заставить исполнителя немедленно выполнить одобренные действия -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +Все команды, которые выводят правила, могут выбирать между поддерживаемыми форматами вывода (`table`, `yaml` и `json`) с помощью аргумента `-output`. -#### Indexing rules +#### Правила индексирования -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment.
If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Правила индексирования могут применяться как глобальные настройки по умолчанию или для конкретных развертываний субграфов с использованием их идентификаторов. Поля `deployment` и `decisionBasis` являются обязательными, тогда как все остальные поля опциональны. Когда правило индексирования имеет значение `rules` в поле `decisionBasis`, агент Индексатора будет сравнивать ненулевые пороговые значения этого правила со значениями, полученными из сети для соответствующего развертывания. Если значения развертывания субграфа выше (или ниже) любого из порогов, оно будет выбрано для индексирования. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +Например, если глобальное правило имеет `minStake` равное **5** (GRT), любое развертывание субграфа, на которое выделено более 5 (GRT) стейка, будет проиндексировано. Пороговые правила включают `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake` и `minAverageQueryFees`. -Data model: +Модель данных: ```graphql type IndexingRule { @@ -599,7 +599,7 @@ IndexingDecisionBasis { } ``` -Example usage of indexing rule: +Пример применения правила индексирования: ``` graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK @@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK ``` -#### Actions queue CLI +#### CLI очереди действий -The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue.
+`indexer-cli` предоставляет модуль `actions` для ручной работы с очередью действий. Он использует **Graphql API**, размещенный на сервере управления Индексатором, для взаимодействия с очередью действий. -The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like: +Рабочий процесс выполнения действий будет извлекать элементы из очереди для выполнения только в том случае, если у них статус `ActionStatus = approved`. В рекомендованном процессе действия добавляются в очередь с состоянием `ActionStatus = queued`, и затем они должны быть утверждены, чтобы быть выполненными на чейне. Общий процесс будет выглядеть следующим образом: -- Action added to the queue by the 3rd party optimizer tool or indexer-cli user -- Indexer can use the `indexer-cli` to view all queued actions -- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input. -- The execution worker regularly polls the queue for approved actions. It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`. -- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode. -- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken. 
+- Действие добавляется в очередь сторонним инструментом оптимизации или пользователем indexer-cli +- Индексатор может использовать `indexer-cli` для просмотра всех действий в очереди +- Индексатор (или другое программное обеспечение) может одобрять или отменять действия в очереди с помощью `indexer-cli`. Команды одобрения и отмены принимают массив идентификаторов действий в качестве входных данных. +- Исполнитель регулярно опрашивает очередь на наличие одобренных действий. Он извлекает одобренные действия из очереди, пытается выполнить их и обновляет значения в базе данных в зависимости от результата выполнения, присваивая статус `success` или `failed`. +- Если действие выполнено успешно, исполнитель убедится, что существует правило индексирования, которое указывает агенту, как управлять выделением ресурсов в дальнейшем. Это особенно полезно, когда выполняются ручные действия, в то время как агент находится в режиме `auto` или `oversight`. +- Индексатор может отслеживать очередь действий, чтобы увидеть историю выполнения действий и, если необходимо, повторно одобрить и обновить элементы действий, если они не были выполнены. Очередь действий предоставляет историю всех действий, которые были добавлены в очередь и выполнены. 
-Data model: +Модель данных: ```graphql Type ActionInput { @@ -657,7 +657,7 @@ ActionType { } ``` -Example usage from source: +Пример использования из исходного кода: ```bash graph indexer actions get all @@ -677,141 +677,141 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +Обратите внимание, что поддерживаемые типы действий для управления аллокацией имеют различные требования к входным данным: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - выделение стейка для конкретного развертывания субграфа - - required action params: + - необходимые параметры действия: - deploymentID - amount -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `Unallocate` — закрыть аллокацию, освободив стейк для перераспределения в другое место - - required action params: + - необходимые параметры действия: - allocationID - deploymentID - - optional action params: + - необязательные параметры действия: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (принудительно использует указанный POI, даже если он не совпадает с тем, что предоставляет graph-node) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - атомарно закрывает распределение и открывает новое распределение для того же развертывания субграфа - - required action params: + - необходимые параметры действия: - allocationID - deploymentID - amount - - optional action params: + - необязательные параметры действия: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (принудительно использует указанный POI, даже если он не совпадает с тем, что предоставляет graph-node) -#### Cost models +#### Модели стоимости -Cost models provide dynamic pricing for queries 
based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Модели стоимости обеспечивают динамическое ценообразование для запросов на основе рыночных условий и атрибутов запроса. Сервис Индексатора делится моделью стоимости со шлюзами для каждого субграфа, по которому он намерен отвечать на запросы. Шлюзы, в свою очередь, используют модель стоимости для принятия решений о выборе Индексатора для каждого запроса и для ведения переговоров о плате с выбранными Индексаторами. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +Язык Agora предоставляет гибкий формат для объявления моделей стоимости запросов. Модель стоимости Agora — это последовательность операторов, которые выполняются по порядку для каждого запроса верхнего уровня в GraphQL-запросе. Для каждого запроса верхнего уровня цену определяет первый оператор, который ему соответствует. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +Оператор состоит из предиката, который используется для сопоставления запросов GraphQL, и выражения стоимости, которое при вычислении выдает стоимость в GRT (десятичное значение).
Значения, находящиеся в позиции именованных аргументов запроса, могут быть захвачены в предикате и использованы в выражении. Глобальные переменные также могут быть заданы и подставлены вместо заполнителей в выражении. -Example cost model: +Пример модели стоимости: ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# Этот оператор захватывает значение skip, +# использует логическое выражение в предикате для соответствия конкретным запросам, использующим `skip`, +# и выражение для вычисления стоимости на основе значения `skip` и глобальной переменной SYSTEM_LOAD query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression. -# It uses a Global substituted into the expression to calculate cost +# Это правило по умолчанию будет соответствовать любому выражению GraphQL. +# Оно использует глобальную переменную, подставленную в выражение для вычисления стоимости default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +Пример расчета стоимости запросов по вышеуказанной модели: -| Query | Price | +| Запрос | Цена | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### Применение модели стоимости -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
+Модели стоимости применяются через CLI Индексатора, который передает их в API управления Индексатором для хранения в базе данных. После этого сервис Индексатора будет подхватывать эти модели стоимости и передавать их шлюзам по запросу. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## Взаимодействие с сетью -### Stake in the protocol +### Стейкинг в протоколе -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. +Первые шаги для участия в сети в качестве Индексатора заключаются в одобрении токенов для протокола, стейкинге средств и (по желанию) настройке адреса оператора для повседневных взаимодействий с протоколом. -> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> Примечание: в этих инструкциях для взаимодействия с контрактом используется Remix, но Вы можете использовать любой инструмент по своему выбору (известные альтернативы — [OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) и [MyCrypto](https://www.mycrypto.com/account)). -Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network. +После того как Индексатор застейкает GRT в протокол, [компоненты Индексатора](/indexing/overview/#indexer-components) могут быть запущены и начать взаимодействие с сетью. -#### Approve tokens +#### Подтверждение токенов -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Откройте [приложение Remix](https://remix.ethereum.org/) в браузере -2.
In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. В `File Explorer` создайте файл с именем **GraphToken.abi**, содержащий [ABI токена](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Выбрав и открыв файл `GraphToken.abi` в редакторе, перейдите в раздел `Deploy and run transactions` в интерфейсе Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. В разделе окружения выберите `Injected Web3`, а в разделе `Account` выберите адрес своего Индексатора. -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. Установите адрес контракта GraphToken — вставьте адрес контракта GraphToken (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) рядом с полем `At Address` и нажмите кнопку `At address`, чтобы применить. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. Вызовите функцию `approve(spender, amount)`, чтобы одобрить контракт стейкинга. В поле `spender` укажите адрес контракта стейкинга (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`), а в поле `amount` укажите количество токенов для стейкинга (в wei). -#### Stake tokens +#### Стейкинг токенов -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Откройте [приложение Remix](https://remix.ethereum.org/) в браузере -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2.
В `File Explorer` создайте файл с именем **Staking.abi** и добавьте в него ABI контракта для стейкинга. -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. С файлом `Staking.abi`, выбранным и открытым в редакторе, перейдите в раздел `Deploy and run transactions` в интерфейсе Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. В разделе Среды выберите `Injected Web3`, а в разделе `Account` выберите адрес своего Индексатора. -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. Установите адрес контракта стейкинга — вставьте адрес контракта стейкинга (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) рядом с полем `At Address` и нажмите кнопку `At address`, чтобы применить. -6. Call `stake()` to stake GRT in the protocol. +6. Вызовите функцию `stake()`, чтобы застейкать GRT в протокол. -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Необязательно) Индексаторы могут одобрить другой адрес в качестве оператора для своей инфраструктуры Индексатора, чтобы разделить ключи, которые контролируют средства, и те, которые выполняют повседневные действия, такие как выделение на субграфах и обслуживание (оплачиваемых) запросов. Чтобы установить оператора, вызовите функцию `setOperator()`, указав адрес оператора. -8. 
(Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Необязательно) Чтобы контролировать распределение вознаграждений и стратегически привлекать Делегаторов, Индексаторы могут обновить свои параметры делегирования, изменив `indexingRewardCut` (доли на миллион), `queryFeeCut` (доли на миллион) и `cooldownBlocks` (количество блоков). Для этого вызовите функцию `setDelegationParameters()`. Пример ниже устанавливает `queryFeeCut` так, чтобы 95% возмещений за запросы доставались Индексатору, а 5% — Делегаторам, устанавливает `indexingRewardCut` так, чтобы 60% вознаграждений за индексирование получал Индексатор, а 40% — Делегаторы, и устанавливает период `cooldownBlocks` равным 500 блокам. ``` setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### Настройка параметров делегирования -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity.
+Функция `setDelegationParameters()` в [стейкинг-контракте](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) крайне важна для Индексаторов: она позволяет задавать параметры, определяющие их взаимодействие с Делегаторами, влияя на распределение вознаграждений и ёмкость делегирования. -### How to set delegation parameters +### Как настроить параметры делегирования -To set the delegation parameters using Graph Explorer interface, follow these steps: +Чтобы установить параметры делегирования с помощью интерфейса Graph Explorer, выполните следующие шаги: -1. Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. Connect the wallet you have as a signer. -4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. Перейдите в [Graph Explorer](https://thegraph.com/explorer/). +2. Подключите свой кошелек. Выберите мультиподпись (например, Gnosis Safe), затем выберите основную сеть. Примечание: Вам нужно будет повторить этот процесс для сети Arbitrum One. +3. Подключите кошелек, являющийся подписантом. +4. Перейдите в раздел 'Settings' и выберите 'Delegation Parameters'. Эти параметры должны быть настроены так, чтобы эффективная доля (effective cut) находилась в желаемом диапазоне. После ввода значений в предоставленные поля интерфейс автоматически рассчитает эффективную долю. При необходимости отрегулируйте эти значения, чтобы достичь желаемого процента эффективной доли. +5.
Отправьте транзакцию в сеть. -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> Примечание: эта транзакция должна быть подтверждена подписантами кошелька с мультиподписью. -### The life of an allocation +### Срок существования аллокации -After being created by an Indexer a healthy allocation goes through two states. +После создания Индексатором работоспособная аллокация проходит через два состояния. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Активный** - как только распределение создается в блокчейне ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)), оно считается **активным**. Часть собственного залога Индексатора и/или делегированного залога выделяется для развертывания субграфа, что позволяет ему получать вознаграждения за индексирование и обслуживать запросы для этого развертывания субграфа. Агент Индексатора управляет созданием распределений в соответствии с правилами Индексатора. -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)).
+- **Закрытый** - Индексатор может закрыть распределение, как только пройдет 1 эпоха ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)), или его агент Индексатора автоматически закроет распределение после **maxAllocationEpochs** (в настоящее время 28 дней). Когда распределение закрыто с действительным доказательством индексирования (POI), вознаграждения за индексирование распределяются между Индексатором и его Делегаторами ([узнать больше](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Индексаторам рекомендуется использовать функциональность оффчейн-синхронизации для синхронизации развертываний субграфов до головного блока чейна (chainhead) перед созданием распределения в блокчейне. Эта функция особенно полезна для субграфов, которые могут занять более 28 эпох для синхронизации или которые имеют вероятность недетерминированных сбоев. diff --git a/website/src/pages/ru/indexing/supported-network-requirements.mdx b/website/src/pages/ru/indexing/supported-network-requirements.mdx index f1afe7cb7850..56dc63870cb3 100644 --- a/website/src/pages/ru/indexing/supported-network-requirements.mdx +++ b/website/src/pages/ru/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Требования к поддерживаемым сетям | --- | --- | --- | :-: | | Арбитрум | [Гайд по Baremetal](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Гайд по Docker](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ ядра CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | | Avalanche | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 ядра / 8 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | -| Base | [Гайд по Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Гайд по GETH Baremetal](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Гайд по GETH Docker](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ ядер CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_последнее обновление 14 мая 2024_ | ✅ | +| Base | [Гайд по Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Гайд по GETH Baremetal](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Гайд по GETH Docker](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Гайд по Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 ядер / 16 потоков CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_последнее обновление 22 июня 2024_ | ✅ | | Celo | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | | Ethereum | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Более высокая тактовая частота по сравнению с количеством ядер
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_последнее обновление в августе 2023_ | ✅ | diff --git a/website/src/pages/ru/indexing/tap.mdx index fe3b7d982be4..7703c14853a6 100644 --- a/website/src/pages/ru/indexing/tap.mdx +++ b/website/src/pages/ru/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: Руководство по миграции TAP +title: GraphTally Guide --- -Узнайте о новой платежной системе The Graph, **Timeline Aggregation Protocol, TAP**. Эта система обеспечивает быстрые и эффективные микротранзакции с минимальным уровнем доверия. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Обзор -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) — это полная замена существующей в настоящее время платежной системы Scalar. Она предоставляет следующие ключевые функции: +GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features: - Эффективно обрабатывает микроплатежи. - Добавляет уровень консолидации к транзакциям и затратам ончейна. - Позволяет Индексаторам управлять поступлениями и платежами, гарантируя оплату запросов. - Обеспечивает децентрализованные, не требующие доверия шлюзы и повышает производительность `indexer-service` для нескольких отправителей. -## Специфические особенности +### Специфические особенности -TAP позволяет отправителю совершать несколько платежей получателю, **TAP Receipts**, который объединяет эти платежи в один платеж, **Receipt Aggregate Voucher**, также известный как **RAV**. Затем этот агрегированный платеж можно проверить в блокчейне, что сокращает количество транзакций и упрощает процесс оплаты. +GraphTally allows a sender to make multiple payments, **Receipts**, to a receiver; these payments are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**.
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. Для каждого запроса шлюз отправит вам `signed receipt`, который будет сохранен в Вашей базе данных. Затем эти запросы будут агрегированы `tap-agent` через запрос. После этого Вы получите RAV. Вы можете обновить RAV, отправив ему новые квитанции, что приведет к генерации нового RAV с увеличенным значением. @@ -59,14 +59,14 @@ TAP позволяет отправителю совершать несколь | Подписанты | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Агрегатор | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Требования +### Предварительные требования -Помимо типичных требований для запуска индексатора Вам понадобится конечная точка `tap-escrow-subgraph` для запроса обновлений TAP. Вы можете использовать The Graph Network для запроса или размещения себя на своей `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Субграф Graph TAP Arbitrum Sepolia (для тестовой сети The Graph)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Субграф Graph TAP Arbitrum One (для основной сети The Graph)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Примечание: `indexer-agent` в настоящее время не обрабатывает индексирование этого субграфа, как это происходит при развертывании сетевого субграфа. 
В итоге Вам придется индексировать его вручную. +> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Руководство по миграции @@ -79,7 +79,7 @@ TAP позволяет отправителю совершать несколь 1. **Indexer Agent** - Следуйте [этому же процессу] (https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-compents). - - Укажите новый аргумент `--tap-subgraph-endpoint`, чтобы активировать новые кодовые пути TAP и разрешить выкуп TAP RAV. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -99,14 +99,14 @@ TAP позволяет отправителю совершать несколь Для минимальной конфигурации используйте следующий шаблон: ```bash -# Вам придется изменить *все* приведенные ниже значения, чтобы они соответствовали вашим настройкам. +# You will have to change *all* the values below to match your setup. # -# Некоторые из приведенных ниже конфигураций представляют собой глобальные значения graph network, которые Вы можете найти здесь: +# Some of the config below are global graph network values, which you can find here: # # -# Совет профессионала: если Вам нужно загрузить некоторые значения из среды в эту конфигурацию, Вы -# можете перезаписать их переменными среды. Например, следующее можно заменить -# на [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. 
For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: # # [database] # postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" @@ -116,55 +116,55 @@ indexer_address = "0x1111111111111111111111111111111111111111" operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" [database] -# URL-адрес базы данных Postgres, используемой для компонентов индексатора. Та же база данных, -# которая используется `indexer-agent`. Ожидается, что `indexer-agent` создаст -# необходимые таблицы. +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. postgres_url = "postgres://postgres@postgres:5432/postgres" [graph_node] -# URL-адрес конечной точки запроса Вашей graph-node +# URL to your graph-node's query endpoint query_url = "" -# URL-адрес конечной точки статуса Вашей graph-node +# URL to your graph-node's status endpoint status_url = "" [subgraphs.network] -# URL-адрес запроса для субграфа Graph Network. +# Query URL for the Graph Network Subgraph. query_url = "" -# Необязательно, развертывание нужно искать в локальной `graph-node`, если оно локально проиндексировано. -# Рекомендуется индексировать субграф локально. -# ПРИМЕЧАНИЕ: используйте только `query_url` или `deployment_id` +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the Subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# URL-адрес запроса для субграфа Escrow. +# Query URL for the Escrow Subgraph. query_url = "" -# Необязательно, развертывание нужно искать в локальной `graph-node`, если оно локально проиндексировано. -# Рекомендуется индексировать субграф локально. 
-# ПРИМЕЧАНИЕ: используйте только `query_url` или `deployment_id` +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the Subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [blockchain] -# Идентификатор чейна сети, в которой работает the graph network работает на +# The chain ID of the network that the graph network is running on chain_id = 1337 -# Контрактный адрес верификатора receipt aggregate voucher (RAV) TAP +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. receipts_verifier_address = "0x2222222222222222222222222222222222222222" ######################################## -# Специальные настройки для tap-agent # +# Specific configurations to tap-agent # ######################################## [tap] -# Это сумма комиссий, которой вы готовы рискнуть в любой момент времени. Например, -# если отправитель не совершает поставку RAV достаточно длительное время, и комиссии превышают это значение -# суммарно, служба-индексатор перестанет принимать запросы от отправителя -# до тех пор, пока комиссии не будут суммированы. -# ПРИМЕЧАНИЕ: Используйте строки для десятичных значений, чтобы избежать ошибок округления -# например: +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: # max_amount_willing_to_lose_grt = "0.1" max_amount_willing_to_lose_grt = 20 [tap.sender_aggregator_endpoints] -# Ключ-значение всех отправителей и их конечных точек агрегатора -# Ниже приведен пример шлюза тестовой сети E&N. +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. 
0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" ``` diff --git a/website/src/pages/ru/indexing/tooling/graph-node.mdx b/website/src/pages/ru/indexing/tooling/graph-node.mdx index 43e98a3aad17..27f730d64966 100644 --- a/website/src/pages/ru/indexing/tooling/graph-node.mdx +++ b/website/src/pages/ru/indexing/tooling/graph-node.mdx @@ -2,39 +2,39 @@ title: Graph Node --- -Graph Node — это компонент, который индексирует подграфы и делает полученные данные доступными для запроса через GraphQL API. Таким образом, он занимает центральное место в стеке индексатора, а правильная работа Graph Node имеет решающее значение для успешного запуска индексатора. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. -This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). +Здесь представлен контекстуальный обзор Graph Node и некоторые более продвинутые параметры, доступные индексаторам. Подробную документацию и инструкции можно найти в [репозитории Graph Node](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. -Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. 
This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). +Graph Node (и весь стек Индексаторов) можно запускать на «голом железе» или в облачной среде. Эта гибкость центрального компонента индексирования имеет решающее значение для надежности The Graph Protocol. Точно так же Graph Node может быть [создана из исходного кода](https://github.com/graphprotocol/graph-node), или Индексаторы могут использовать один из [предусмотренных Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### База данных PostgreSQL -Основное хранилище для Graph Node, это место, где хранятся данные подграфа, а также метаданные о подграфах и сетевые данные, не зависящие от подграфа, такие как кэш блоков и кэш eth_call. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Клиенты сети Для индексации сети Graph Node требуется доступ к сетевому клиенту через EVM-совместимый JSON-RPC API. Этот RPC может подключаться к одному клиенту или может представлять собой более сложную настройку, которая распределяет нагрузку между несколькими. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
+While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). -**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). +**Network Firehoses**. Firehose — это служба gRPC, предоставляющая упорядоченный, но учитывающий форк поток блоков, разработанная разработчиками ядра The Graph для лучшей поддержки крупномасштабного высокопроизводительного индексирования. В настоящее время это не является обязательным требованием для Индексаторов, но Индексаторам рекомендуется ознакомиться с технологией до начала полной поддержки сети. Подробнее о Firehose можно узнать [здесь](https://firehose.streamingfast.io/). ### Ноды IPFS -Метаданные о развертывании подграфа хранятся в сети IPFS. The Graph Node в первую очередь обращается к ноде IPFS во время развертывания подграфа, чтобы получить манифест подграфа и все связанные файлы. Сетевым индексаторам не требуется запускать собственную ноду IPFS. Нода IPFS для сети находиться по адресу https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files.
Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Сервер метрик Prometheus Чтобы включить мониторинг и отчетность, Graph Node может дополнительно регистрировать метрики на сервере метрик Prometheus. -### Getting started from source +### Начало работы с исходным кодом -#### Install prerequisites +#### Установка необходимых компонентов - **Rust** @@ -42,15 +42,15 @@ While some subgraphs may just require a full node, some may have indexing featur - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Дополнительные требования для пользователей Ubuntu**. Для запуска Graph Node на Ubuntu может потребоваться несколько дополнительных пакетов. ```sh sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### Настройка -1. Start a PostgreSQL database server +1. Запустите сервер базы данных PostgreSQL ```sh initdb -D .postgres @@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Клонируйте репозиторий [Graph Node](https://github.com/graphprotocol/graph-node) и соберите исходный код, запустив `cargo build` -3. Now that all the dependencies are setup, start the Graph Node: +3. Теперь, когда все зависимости настроены, запустите Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -71,35 +71,35 @@ cargo run -p graph-node --release -- \ ### Начало работы с Kubernetes -A complete Kubernetes example configuration can be found in the [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s). +Полный пример конфигурации Kubernetes можно найти в [репозитории индексатора](https://github.com/graphprotocol/indexer/tree/main/k8s). 
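As a companion to the setup steps above, the start command generally needs three endpoints: Postgres, an Ethereum RPC, and IPFS. A sketch that only assembles the command — all URLs are placeholders, and the flag names follow the Graph Node README but should be verified with `cargo run -p graph-node -- --help`:

```bash
# Build a graph-node start command without executing it. Replace the
# placeholder URLs with your own services before running for real.
POSTGRES_URL="postgresql://graph@localhost:5432/graph-node"
ETHEREUM_RPC="mainnet:http://localhost:8545"
IPFS_ENDPOINT="https://ipfs.network.thegraph.com"

CMD="cargo run -p graph-node --release -- \
  --postgres-url $POSTGRES_URL \
  --ethereum-rpc $ETHEREUM_RPC \
  --ipfs $IPFS_ENDPOINT"

# Print the assembled command; drop the echo (or eval "$CMD") to start the node.
echo "$CMD"
```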
### Порты Во время работы Graph Node предоставляет следующие порты: -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| Порт | Назначение | Маршруты | Аргумент CLI | Переменная среды | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | GraphQL HTTP-сервер
(для запросов к Субграфу) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(для подписок на Субграф) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(для управления развертываниями) | / | \--admin-port | - | +| 8030 | API статуса индексирования Субграфа | /graphql | \--index-node-port | - | +| 8040 | Метрики Prometheus | /metrics | \--metrics-port | - | -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC endpoint. +> **Важно**. Будьте осторожны, открывая порты для общего доступа — **порты администрирования** должны оставаться закрытыми. Это касается конечных точек Graph Node JSON-RPC. ## Расширенная настройка Graph Node -На простейшем уровне Graph Node может работать с одним экземпляром Graph Node, одной базой данных PostgreSQL, нодой IPFS и сетевыми клиентами в соответствии с требованиями субграфов, подлежащих индексированию. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. -This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. +Эту настройку можно масштабировать горизонтально, добавляя несколько Graph Node и несколько баз данных для поддержки этих Graph Node. Опытные пользователи могут воспользоваться некоторыми возможностями горизонтального масштабирования Graph Node, а также некоторыми более продвинутыми параметрами конфигурации через файл `config.toml` и переменные среды Graph Node. ### `config.toml` -A [TOML](https://toml.io/en/) configuration file can be used to set more complex configurations than those exposed in the CLI. The location of the file is passed with the --config command line switch.
+Файл конфигурации [TOML](https://toml.io/en/) можно использовать для установки более сложных конфигураций, чем те, которые представлены в интерфейсе командной строки. Местоположение файла передается с помощью параметра командной строки --config. > При использовании файла конфигурации невозможно использовать параметры --postgres-url, --postgres-secondary-hosts и --postgres-host-weights. -A minimal `config.toml` file can be provided; the following file is equivalent to using the --postgres-url command line option: +Можно предоставить минимальный файл `config.toml`, следующий файл эквивалентен использованию опции командной строки --postgres-url: ```toml [store] @@ -110,17 +110,17 @@ connection="<.. postgres-url argument ..>" indexers = [ "<.. list of all indexing nodes ..>" ] ``` -Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). +Полную документацию по `config.toml` можно найти в [документации Graph Node](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). #### Множественные Graph Node -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Обратите внимание, что несколько Graph Nodes могут быть настроены для использования одной и той же базы данных, которая сама по себе может масштабироваться по горизонтали с помощью сегментирования. #### Правила развертывания -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Пример настройки правил развертывания: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -150,7 +150,7 @@ indexers = [ ] ``` -Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). 
+Подробную информацию о правилах развертывания можно найти [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). #### Выделенные ноды запросов @@ -167,19 +167,19 @@ query = "" В большинстве случаев одной базы данных Postgres достаточно для поддержки отдельной Graph Node. Когда отдельная Graph Node перерастает одну базу данных Postgres, можно разделить хранилище данных Graph Node между несколькими базами данных Postgres. Все базы данных вместе образуют хранилище отдельной Graph Node. Каждая отдельная база данных называется шардом (сегментом). -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Сегментирование становится полезным, когда Ваша существующая база данных не может справиться с нагрузкой, которую на нее возлагает Graph Node, и когда больше невозможно увеличить размер базы данных. -> Обычно лучше сделать одну базу данных максимально большой, прежде чем начинать с шардов (сегментов). 
Единственным исключением является случай, когда трафик запросов распределяется между подграфами очень неравномерно; в таких ситуациях может существенно помочь, если подграфы большого объема хранятся в одном сегменте, а все остальное — в другом, потому что такая настройка повышает вероятность того, что данные для подграфов большого объема останутся во внутреннем кеше базы данных и не будут заменяться данными, которые не очень нужны, из подграфов с небольшим объемом. +> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. Что касается настройки соединений, начните с max_connections в postgresql.conf, установленного на 400 (или, может быть, даже на 200), и посмотрите на метрики store_connection_wait_time_ms и store_connection_checkout_count Prometheus. Длительное время ожидания (все, что превышает 5 мс) является признаком того, что доступных соединений слишком мало; большое время ожидания также будет вызвано тем, что база данных очень загружена (например, высокая загрузка ЦП). Однако, если в остальном база данных кажется стабильной, большое время ожидания указывает на необходимость увеличения количества подключений. В конфигурации количество подключений, которое может использовать каждая отдельная Graph Node, является верхним пределом, и Graph Node не будет держать соединения открытыми, если они ей не нужны. -Read more about store configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases).
+Подробную информацию о настройке хранилища можно найти [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases). #### Прием выделенного блока -If there are multiple nodes configured, it will be necessary to specify one node which is responsible for ingestion of new blocks, so that all configured index nodes aren't polling the chain head. This is done as part of the `chains` namespace, specifying the `node_id` to be used for block ingestion: +Если настроено несколько нод, необходимо выделить одну, которая будет отвечать за прием новых блоков, чтобы все сконфигурированные ноды индекса не опрашивали заголовок чейна. Это настраивается в рамках пространства имен `chains`, в котором указывается `node_id`, используемый для приема блоков: ```toml [chains] @@ -188,13 +188,13 @@ ingestor = "block_ingestor_node" #### Поддержка нескольких сетей -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Несколько сетей - Несколько провайдеров на сеть (это может позволить разделить нагрузку между провайдерами, а также может позволить настроить полные ноды, а также архивные ноды, при этом Graph Node предпочитает более дешевых поставщиков, если позволяет данная рабочая нагрузка). - Дополнительные сведения о провайдере, такие как функции, аутентификация и тип провайдера (для экспериментальной поддержки Firehose) -The `[chains]` section controls the ethereum providers that graph-node connects to, and where blocks and other metadata for each chain are stored.
The following example configures two chains, mainnet and kovan, where blocks for mainnet are stored in the vip shard and blocks for kovan are stored in the primary shard. The mainnet chain can use two different providers, whereas kovan only has one provider. +Раздел `[chains]` управляет провайдерами Ethereum, к которым подключается graph-node, и где хранятся блоки и другие метаданные для каждого чейна. В следующем примере настраиваются два чейна, mainnet и kovan, где блоки для mainnet хранятся в сегменте vip, а блоки для kovan — в основном сегменте. Чейн mainnet может использовать двух разных провайдеров, тогда как у kovan есть только один провайдер. ```toml [chains] @@ -210,50 +210,50 @@ shard = "primary" provider = [ { label = "kovan", url = "http://..", features = [] } ] ``` -Read more about provider configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers). +Подробную информацию о настройке провайдера можно найти [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers). ### Переменные среды -Graph Node supports a range of environment variables which can enable features, or change Graph Node behaviour. These are documented [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md). +Graph Node поддерживает ряд переменных среды, которые могут включать функции или изменять поведение Graph Node. Они описаны [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md). ### Непрерывное развертывание Пользователи, использующие масштабируемую настройку индексирования с расширенной конфигурацией, могут получить преимущество от управления своими узлами Graph с помощью Kubernetes.
-- The indexer repository has an [example Kubernetes reference](https://github.com/graphprotocol/indexer/tree/main/k8s) -- [Launchpad](https://docs.graphops.xyz/launchpad/intro) is a toolkit for running a Graph Protocol Indexer on Kubernetes maintained by GraphOps. It provides a set of Helm charts and a CLI to manage a Graph Node deployment. +- В репозитории индексатора имеется [эталонный пример конфигурации Kubernetes](https://github.com/graphprotocol/indexer/tree/main/k8s) +- [Launchpad](https://docs.graphops.xyz/launchpad/intro) – это набор инструментов для запуска Индексатора Graph Protocol в Kubernetes, поддерживаемый GraphOps. Он предоставляет набор Helm-чартов и интерфейс командной строки для управления развертыванием Graph Node. ### Управление Graph Node -При наличии работающей Graph Node (или Graph Nodes!), задача состоит в том, чтобы управлять развернутыми подграфами на этих нодах. Graph Node предлагает ряд инструментов для управления подграфами. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Логирование (ведение журналов) -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. -In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs).
+Кроме того, установка для `GRAPH_LOG_QUERY_TIMING` значения `gql` предоставляет дополнительные сведения о том, как выполняются запросы GraphQL (хотя это приводит к созданию большого объема логов). -#### Monitoring & alerting +#### Мониторинг и оповещения Graph Node предоставляет метрики через конечную точку Prometheus на порту 8040 по умолчанию. Затем можно использовать Grafana для визуализации этих метрик. -The indexer repository provides an [example Grafana configuration](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml). +В репозитории индексатора имеется [пример конфигурации Grafana](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml). #### Graphman -`graphman` is a maintenance tool for Graph Node, helping with diagnosis and resolution of different day-to-day and exceptional tasks. +`graphman` – это инструмент обслуживания Graph Node, помогающий диагностировать и решать различные повседневные и исключительные задачи. -The graphman command is included in the official containers, and you can docker exec into your graph-node container to run it. It requires a `config.toml` file. +Команда graphman включена в официальные контейнеры, и Вы можете выполнить docker exec в контейнере graph-node, чтобы запустить ее. Для этого требуется файл `config.toml`. -Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` +Полная документация по командам `graphman` доступна в репозитории Graph Node. См.
[/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) в каталоге `/docs` Graph Node -### Работа с подграфами +### Working with Subgraphs #### API статуса индексирования -Доступный по умолчанию на порту 8030/graphql, API статуса индексирования предоставляет ряд методов для проверки статуса индексирования для различных подграфов, проверки доказательств индексирования, проверки функций подграфов и многого другого. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. -The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). +Полная схема доступна [здесь](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). #### Производительность индексирования @@ -263,12 +263,12 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ - Обработка событий по порядку с помощью соответствующих обработчиков (это может включать вызов чейна для состояния и выборку данных из хранилища) - Запись полученных данных в хранилище -Эти этапы конвейерные (т.е. могут выполняться параллельно), но они зависят друг от друга. Там, где подграфы индексируются медленно, основная причина будет зависеть от конкретного подграфа. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph.
Распространенные причины низкой скорости индексации: -- Time taken to find relevant events from the chain (call handlers in particular can be slow, given the reliance on `trace_filter`) -- Making large numbers of `eth_calls` as part of handlers +- Время, затрачиваемое на поиск соответствующих событий в чейне (в частности, обработчики вызовов могут работать медленно, учитывая зависимость от `trace_filter`) +- Выполнение большого количества `eth_calls` в составе обработчиков - Большое количество операций с хранилищем во время выполнения - Большой объем данных для сохранения в хранилище - Большое количество событий для обработки @@ -276,35 +276,35 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ - Сам провайдер отстает от головного чейна - Задержка получения новых поступлений от провайдера в головном чейне -Метрики индексации подграфов могут помочь диагностировать основную причину замедления индексации. В некоторых случаях проблема связана с самим подграфом, но в других случаях усовершенствованные сетевые провайдеры, снижение конкуренции за базу данных и другие улучшения конфигурации могут заметно повысить производительность индексирования. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Повреждённые подграфы +#### Failed Subgraphs -Во время индексации подграфов может произойти сбой, если они столкнутся с неожиданными данными, какой-то компонент не будет работать должным образом или если в обработчиках событий или конфигурации появится ошибка. Есть два основных типа отказа: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration.
There are two general types of failure: - Детерминированные сбои: это сбои, которые не будут устранены при повторных попытках - Недетерминированные сбои: они могут быть связаны с проблемами с провайдером или какой-либо неожиданной ошибкой Graph Node. Когда происходит недетерминированный сбой, Graph Node будет повторять попытки выполнения сбойных обработчиков, постепенно увеличивая интервалы между повторами. -В некоторых случаях сбой может быть устранен индексатором (например, если ошибка вызвана отсутствием нужного поставщика, добавление необходимого поставщика позволит продолжить индексирование). Однако в других случаях требуется изменить код подграфа. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Кэш блокировки и вызова -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph.
+Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. Если есть подозрение на несогласованность кэша блоков, например, событие отсутствия квитанции tx: -1. `graphman chain list` to find the chain name. -2. `graphman chain check-blocks by-number ` will check if the cached block matches the provider, and deletes the block from the cache if it doesn’t. - 1. If there is a difference, it may be safer to truncate the whole cache with `graphman chain truncate `. +1. `graphman chain list`, чтобы найти название чейна. +2. `graphman chain check-blocks by-number ` проверит, соответствует ли кэшированный блок провайдеру, и удалит блок из кэша, если это не так. + 1. Если есть разница, может быть безопаснее усечь весь кеш с помощью `graphman chain truncate `. 2. Если блок соответствует провайдеру, то проблема может быть отлажена непосредственно провайдером. #### Запрос проблем и ошибок -После индексации подграфа индексаторы могут рассчитывать на обслуживание запросов через выделенную конечную точку запроса подграфа. 
Если индексатор планирует обслуживать значительный объем запросов, рекомендуется выделенная нода запросов, а в случае очень больших объемов запросов индексаторы могут настроить сегменты копий так, чтобы запросы не влияли на процесс индексирования. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. Однако, даже с выделенной нодой запросов и копиями выполнение некоторых запросов может занять много времени, а в некоторых случаях увеличить использование памяти и негативно повлиять на время выполнения запросов другими пользователями. @@ -312,15 +312,15 @@ However, in some instances, if an Ethereum node has provided incorrect data for ##### Кэширование запросов -Graph Node caches GraphQL queries by default, which can significantly reduce database load. This can be further configured with the `GRAPH_QUERY_CACHE_BLOCKS` and `GRAPH_QUERY_CACHE_MAX_MEM` settings - read more [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching). +Graph Node по умолчанию кэширует запросы GraphQL, что может значительно снизить нагрузку на базу данных. Это можно дополнительно настроить с помощью параметров `GRAPH_QUERY_CACHE_BLOCKS` и `GRAPH_QUERY_CACHE_MAX_MEM` — подробнее читайте [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching). ##### Анализ запросов -Проблемные запросы чаще всего выявляются одним из двух способов. В некоторых случаях пользователи сами сообщают, что данный запрос выполняется медленно. В этом случае задача состоит в том, чтобы диагностировать причину замедленности — является ли это общей проблемой или специфичной для этого подграфа или запроса.
А затем, конечно же, решить ее, если это возможно. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. В других случаях триггером может быть высокий уровень использования памяти на ноде запроса, и в этом случае сначала нужно определить запрос, вызвавший проблему. -Indexers can use [qlog](https://github.com/graphprotocol/qlog/) to process and summarize Graph Node's query logs. `GRAPH_LOG_QUERY_TIMING` can also be enabled to help identify and debug slow queries. +Индексаторы могут использовать [qlog](https://github.com/graphprotocol/qlog/) для обработки и обобщения логов запросов Graph Node. Также можно включить `GRAPH_LOG_QUERY_TIMING` для выявления и отладки медленных запросов. При медленном запросе у индексаторов есть несколько вариантов. Разумеется, они могут изменить свою модель затрат, чтобы значительно увеличить стоимость отправки проблемного запроса. Это может привести к снижению частоты этого запроса. Однако это часто не устраняет основной причины проблемы.
Часто в таких таблицах количество отдельных объектов составляет 1% от общего количества строк (версий объектов) -For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table. +Для таблиц, подобных учетным записям, `graph-node` может генерировать запросы, в которых используются детали того, как Postgres в конечном итоге сохраняет данные с такой высокой скоростью изменения, а именно, что все версии последних блоков находятся в небольшом подразделе общего хранилища для такой таблицы. -The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity. +Команда `graphman stats show ` показывает для каждого типа/таблицы объектов в развертывании, сколько различных объектов и сколько версий объектов содержит каждая таблица. Эти данные основаны на внутренних оценках Postgres и, следовательно, неточны и могут отличаться на порядок. `-1` в столбце `entities` означает, что Postgres считает, что все строки содержат отдельный объект. -In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show
` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions. +В общем, таблицы, в которых количество отдельных объектов составляет менее 1 % от общего количества версий строк/объектов, являются хорошими кандидатами на оптимизацию по аналогии с учетными записями. Если выходные данные `graphman stats show` указывают на то, что эта оптимизация может принести пользу таблице, запуск `graphman stats show
` произведёт полный подсчет по таблице. Этот процесс может быть медленным, но даст точную оценку соотношения числа отдельных объектов к общему количеству версий объектов. -Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. +Как только будет установлено, что таблица подобна учетным записям, запуск `graphman stats account-like .
`, включит оптимизацию для таблиц, подобных учетным записям, применительно к запросам к этой таблице. Оптимизацию можно снова отключить с помощью `graphman stats account-like --clear .
`. Нодам запроса требуется до 5 минут, чтобы заметить, что оптимизация включена или выключена. После включения оптимизации необходимо убедиться, что изменение фактически не приводит к замедлению запросов к этой таблице. Если Вы настроили Grafana для мониторинга Postgres, медленные запросы будут массово отображаться в `pg_stat_activity`, выполняясь по несколько секунд. В этом случае оптимизацию необходимо снова отключить. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Удаление подграфов +#### Removing Subgraphs > Это новый функционал, который будет доступен в Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/ru/indexing/tooling/graphcast.mdx b/website/src/pages/ru/indexing/tooling/graphcast.mdx index a3c391cf3e4f..2c5c4818950f 100644 --- a/website/src/pages/ru/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ru/indexing/tooling/graphcast.mdx @@ -2,7 +2,7 @@ title: Graphcast --- -## Introduction +## Введение Is there something you'd like to learn from or share with your fellow Indexers in an automated manner, but it's too much hassle or costs too much gas? @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Перекрестная проверка целостности данных субграфа в режиме реального времени (Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. 
- Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. ### Узнать больше diff --git a/website/src/pages/ru/resources/_meta-titles.json b/website/src/pages/ru/resources/_meta-titles.json index f5971e95a8f6..6e14e6afa310 100644 --- a/website/src/pages/ru/resources/_meta-titles.json +++ b/website/src/pages/ru/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "Дополнительные роли", + "migration-guides": "Руководства по миграции" } diff --git a/website/src/pages/ru/resources/benefits.mdx b/website/src/pages/ru/resources/benefits.mdx index df6eeac7c628..dc1e73eeb255 100644 --- a/website/src/pages/ru/resources/benefits.mdx +++ b/website/src/pages/ru/resources/benefits.mdx @@ -1,11 +1,11 @@ --- -title: The Graph vs. Self Hosting +title: The Graph против самостоятельного хостинга socialImage: https://thegraph.com/docs/img/seo/benefits.jpg --- Децентрализованная сеть The Graph была спроектирована и усовершенствована для создания надежной системы индексации и запросов — и с каждым днем она становится лучше благодаря тысячам участников по всему миру. -The benefits of this decentralized protocol cannot be replicated by running a `graph-node` locally. The Graph Network is more reliable, more efficient, and less expensive. +Преимущества этого децентрализованного протокола невозможно воспроизвести, запустив `graph-node` локально. The Graph Network более надежен, эффективен и экономичен. Вот анализ: @@ -19,7 +19,7 @@ The benefits of this decentralized protocol cannot be replicated by running a `g ## Преимущества -### Lower & more Flexible Cost Structure +### Более низкая и гибкая структура затрат No contracts. No monthly fees. Only pay for the queries you use—with an average cost-per-query of $40 per million queries (~$0.00004 per query). Queries are priced in USD and paid in GRT or credit card. 
@@ -34,7 +34,7 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar | Время разработки | $400 в месяц | Нет, встроен в сеть с глобально распределенными Индексаторами | | Запросы в месяц | Ограничен возможностями инфраструктуры | 100,000 (Free Plan) | | Стоимость одного запроса | $0 | $0 | -| Infrastructure | Централизованная | Децентрализованная | +| Инфраструктура | Централизованная | Децентрализованная | | Географическая избыточность | $750+ за каждую дополнительную ноду | Включено | | Время безотказной работы | Варьируется | 99.9%+ | | Общие ежемесячные расходы | $750+ | $0 | @@ -48,7 +48,7 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar | Время разработки | $800 в месяц | Нет, встроен в сеть с глобально распределенными Индексаторами | | Запросы в месяц | Ограничен возможностями инфраструктуры | ~3,000,000 | | Стоимость одного запроса | $0 | $0.00004 | -| Infrastructure | Централизованная | Децентрализованная | +| Инфраструктура | Централизованная | Децентрализованная | | Инженерные расходы | $200 в час | Включено | | Географическая избыточность | общие затраты на каждую дополнительную ноду составляют $1,200 | Включено | | Время безотказной работы | Варьируется | 99.9%+ | @@ -64,7 +64,7 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar | Время разработки | $6,000 или больше в месяц | Нет, встроен в сеть с глобально распределенными Индексаторами | | Запросы в месяц | Ограничен возможностями инфраструктуры | ~30,000,000 | | Стоимость одного запроса | $0 | $0.00004 | -| Infrastructure | Централизованная | Децентрализованная | +| Инфраструктура | Централизованная | Децентрализованная | | Географическая избыточность | общие затраты на каждую дополнительную ноду составляют $1,200 | Включено | | Время безотказной работы | Варьируется | 99.9%+ | | Общие ежемесячные расходы | $11,000+ | $1,200 | @@ -75,18 +75,18 @@ Query costs may vary; the quoted cost is the average at 
time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Курирование сигнала на субграфе - это необязательная единовременная стоимость, равная нулю (например, сигнал стоимостью 1 тыс. долларов может быть курирован на субграфе, а затем отозван - с возможностью получения прибыли в процессе). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). -## No Setup Costs & Greater Operational Efficiency +## Отсутствие затрат на настройку и более высокая эксплуатационная эффективность Нулевая плата за установку. Приступайте к работе немедленно, без каких-либо затрат на настройку или накладные расходы. Никаких требований к оборудованию. Отсутствие перебоев в работе из-за централизованной инфраструктуры и больше времени для концентрации на Вашем основном продукте. Нет необходимости в резервных серверах, устранении неполадок или дорогостоящих инженерных ресурсах. 
-## Reliability & Resiliency +## Надежность и устойчивость -The Graph’s decentralized network gives users access to geographic redundancy that does not exist when self-hosting a `graph-node`. Queries are served reliably thanks to 99.9%+ uptime, achieved by hundreds of independent Indexers securing the network globally. +Децентрализованная сеть The Graph предоставляет пользователям доступ к географической избыточности, которой не существует при самостоятельном размещении `graph-node`. Запросы обслуживаются надежно благодаря времени безотказной работы более 99,9%, достигаемому сотнями независимых Индексаторов, обеспечивающими безопасность сети по всему миру. -Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. +Итог: The Graph Network дешевле, проще в использовании и дает превосходные результаты по сравнению с запуском `graph-node` локально. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ru/resources/glossary.mdx b/website/src/pages/ru/resources/glossary.mdx index ffcd4bca2eed..9f55e53ab4e5 100644 --- a/website/src/pages/ru/resources/glossary.mdx +++ b/website/src/pages/ru/resources/glossary.mdx @@ -1,83 +1,83 @@ --- -title: Glossary +title: Глоссарий --- -- **The Graph**: A decentralized protocol for indexing and querying data. +- **The Graph**: Децентрализованный протокол для индексирования и запроса данных. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. 
-- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. -- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Индексаторы**: Участники сети, которые запускают ноды индексирования для индексирования данных из блокчейнов и обслуживания запросов GraphQL. -- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. 
+- **Потоки доходов Индексатора**: Индексаторы получают вознаграждение в GRT с помощью двух компонентов: скидки на сборы за запросы и вознаграждения за индексирование. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. -- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. +- **Собственный стейк Индексатора**: Сумма GRT, которую Индексаторы стейкают для участия в децентрализованной сети. Минимальная сумма составляет 100 000 GRT, верхнего предела нет. -- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. +- **Лимит делегирования**: Максимальная сумма GRT, которую Индексатор может получить от Делегаторов. Индексаторы могут принимать делегированные средства только в пределах 16-кратного размера их собственного стейка, и превышение этого лимита приводит к снижению вознаграждений. Например, при собственном стейке в 1 млн GRT лимит делегирования составит 16 млн GRT. При этом Индексаторы могут повысить этот лимит, увеличив свой собственный стейк. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. -- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. +- **Налог на делегирование**: Комиссия в размере 0,5%, уплачиваемая Делегаторами, когда они делегируют GRT Индексаторам. GRT, использованный для оплаты комиссий, сжигается. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. 
The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. -- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. +- **Эпоха**: Единица времени в сети. В настоящее время одна эпоха составляет 6 646 блоков или приблизительно 1 день. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. 
This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. 
Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Рыбаки**: Роль в сети The Graph Network, которую выполняют участники, отслеживающие точность и целостность данных, предоставляемых Индексаторами. Когда Рыбак идентифицирует ответ на запрос или POI, который, по его мнению, является неверным, он может инициировать спор против Индексатора. Если спор будет решен в пользу Рыбака, Индексатор потеряет 2,5% своего стейка. Из этой суммы 50% присуждается Рыбаку в качестве вознаграждения за его бдительность, а оставшиеся 50% изымаются из обращения (сжигаются). Этот механизм предназначен для того, чтобы побудить Рыбаков поддерживать надежность сети, гарантируя, что Индексаторы будут нести ответственность за предоставляемые ими данные. -- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. +- **Арбитры**: Арбитры — это участники сети, назначаемые в рамках процесса управления. Роль Арбитра — принимать решения по результатам споров об индексировании и запросах. Их цель — максимизировать полезность и надежность The Graph Network. -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. +- **Сокращение**: Собственный стейк GRT Индексатора может быть сокращен за предоставление неверного POI или неточных данных.
Процент сокращения — это параметр протокола, в настоящее время установленный на уровне 2,5% от собственного стейка Индексатора. 50% сокращенного GRT достается Рыбаку, который оспорил неточные данные или неверный POI. Остальные 50% сжигаются. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. -- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. +- **Награды за делегирование**: Вознаграждения, которые Делегаторы получают за делегирование GRT Индексаторам. Награды за делегирование распределяются в GRT. -- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. +- **GRT**: Рабочий служебный токен The Graph. GRT предоставляет участникам сети экономические стимулы за вклад в сеть. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. 
+- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. +- **Клиент The Graph**: Библиотека для создания децентрализованных приложений на основе GraphQL. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. -- **Graph CLI**: A command line interface tool for building and deploying to The Graph. +- **Graph CLI**: Инструмент интерфейса командной строки для создания и развертывания в The Graph. -- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. +- **Период восстановления**: Время, которое должно пройти, прежде чем Индексатор, изменивший свои параметры делегирования, сможет сделать это снова. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. 
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/ru/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ru/resources/migration-guides/assemblyscript-migration-guide.mdx index c52b3b97cda2..c3a823cd38b6 100644 --- a/website/src/pages/ru/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ru/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,49 +2,49 @@ title: Руководство по миграции AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 -Это позволит разработчикам субграфов использовать более новые возможности языка AS и стандартной библиотеки. +That will enable Subgraph developers to use newer features of the AS language and standard library. -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +Это руководство применимо для всех, кто использует `graph-cli`/`graph-ts` версии ниже 0.22.0. Если у Вас уже есть версия выше (или равная) этой, значит, Вы уже использовали версию 0.19.10 AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Особенности ### Новый функционал -- `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) -- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) -- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) -- Added support for separators in floating point literals 
([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) -- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) -- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) -- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -- Added support for template literal strings ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) -- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) -- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) -- Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) -- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) +- Теперь `TypedArray` можно создавать из `ArrayBuffer`, используя [новый статический метод `wrap`](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) +- Новые функции стандартной библиотеки: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`и `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Добавлена поддержка x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Добавлен `StaticArray`, более эффективный вариант массива ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) +- Добавлен `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Реализован аргумент `radix` в 
`Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- Добавлена поддержка разделителей в литералах с плавающей точкой ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) +- Добавлена поддержка функций первого класса ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- Добавлены встроенные функции: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- Реализован `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Добавлена поддержка строк с шаблонными литералами ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) +- Добавлены `encodeURI(Component)` и `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- Добавлены `toString`, `toDateString` и `toTimeString` в `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- Добавлен `toUTCString` для `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- Добавлен встроенный тип `nonnull/NonNullable` ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) ### Оптимизации -- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) -- Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) -- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Функции `Math`, такие как `exp`, `exp2`, `log`, `log2` и`pow`, были заменены 
на более быстрые варианты ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Немного оптимизирована функция `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- Кэшировано больше обращений к полям в стандартных Map и Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Проведена оптимизация для степеней двойки в `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) ### Прочее -- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Тип литерала массива теперь может быть выведен из его содержимого ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Обновлена стандартная библиотека до Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) ## Как выполнить обновление? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,11 +52,11 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` -2. Update the `graph-cli` you're using to the `latest` version by running: +2. Обновите используемую Вами версию `graph-cli` до `latest`, выполнив команду: ```bash # если он у Вас установлен глобально @@ -66,14 +66,14 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: +3. Сделайте то же самое для `graph-ts`, но вместо глобальной установки сохраните его в основных зависимостях: ```bash npm install --save @graphprotocol/graph-ts@latest ``` 4. 
Следуйте остальной части руководства, чтобы исправить языковые изменения. -5. Run `codegen` and `deploy` again. +5. Снова запустите `codegen` и `deploy`. ## Критические изменения @@ -106,11 +106,11 @@ let maybeValue = load()! // прерывается во время выполн maybeValue.aMethod() ``` -Если Вы не уверены, что выбрать, мы рекомендуем всегда использовать безопасную версию. Если значение не существует, Вы можете просто выполнить раннее выражение if с возвратом в обработчике субграфа. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Затенение переменных -Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: +Раньше можно было использовать [затенение переменных](https://en.wikipedia.org/wiki/Variable_shadowing), и такой код работал: ```typescript let a = 10 @@ -132,7 +132,7 @@ in assembly/index.ts(4,3) ### Нулевые сравнения -Выполняя обновление своего субграфа, иногда Вы можете получить такие ошибки: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -141,12 +141,12 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i in src/mappings/file.ts(41,21) ``` -To solve you can simply change the `if` statement to something like this: +Для решения этой проблемы можно просто изменить оператор `if` на что-то вроде этого: ```typescript if (!decimals) { - // or + // или if (decimals === null) { ``` @@ -155,16 +155,16 @@ To solve you can simply change the `if` statement to something like this: ### Кастинг -The common way to do casting before was to just use the `as` keyword, like this: +Раньше преобразование типов обычно выполнялось с использованием ключевого слова `as`, например: ```typescript let byteArray = new ByteArray(10) -let uint8Array = byteArray as Uint8Array // equivalent to: byteArray +let uint8Array = byteArray as Uint8Array // эквивалентно: byteArray ``` Однако это работает только в двух случаях: -- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Примитивное преобразование (между такими типами, как `u8`, `i32`, `bool`; например: `let b: isize = 10; b as usize`); - Укрупнение по наследованию классов (subclass → superclass) Примеры: @@ -177,55 +177,55 @@ let c: usize = a + (b as usize) ``` ```typescript -// upcasting on class inheritance +// приведение к базовому типу при наследовании классов class Bytes extends Uint8Array {} let bytes = new Bytes(2) -// bytes // same as: bytes as Uint8Array +// bytes // то же самое, что: bytes as Uint8Array ``` -There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: +Есть два сценария, где Вам может понадобиться преобразование типов, но использование `as`/`var` **небезопасно**: - Понижение уровня наследования классов (superclass → subclass) - Между двумя типами, имеющими общий супер класс ```typescript -// downcasting on class inheritance +// понижение уровня наследования классов class Bytes extends Uint8Array {} let uint8Array = new Uint8Array(2) -// 
<Bytes>uint8Array // breaks in runtime :( +// <Bytes>uint8Array // ломается во время выполнения :( ``` ```typescript -// between two types that share a superclass +// между двумя типами, имеющими общий суперкласс class Bytes extends Uint8Array {} class ByteArray extends Uint8Array {} let bytes = new Bytes(2) -// <ByteArray>bytes // breaks in runtime :( +// <ByteArray>bytes // ломается во время выполнения :( ``` -For those cases, you can use the `changetype` function: +В таких случаях Вы можете использовать функцию `changetype`: ```typescript -// downcasting on class inheritance +// понижение уровня наследования классов class Bytes extends Uint8Array {} let uint8Array = new Uint8Array(2) -changetype<Bytes>(uint8Array) // works :) +changetype<Bytes>(uint8Array) // работает :) ``` ```typescript -// between two types that share a superclass +// между двумя типами, имеющими общий суперкласс class Bytes extends Uint8Array {} class ByteArray extends Uint8Array {} let bytes = new Bytes(2) -changetype<ByteArray>(bytes) // works :) +changetype<ByteArray>(bytes) // работает :) ``` -If you just want to remove nullability, you can keep using the `as` operator (or `<T>variable`), but make sure you know that value can't be null, otherwise it will break. +Если Вы просто хотите убрать возможность обнуления, Вы можете продолжить использовать оператор `as` (или `<T>variable`), но убедитесь, что значение не может быть нулевым, иначе произойдет ошибка.
```typescript // удалить значение NULL @@ -238,7 +238,7 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 +В случае возможности обнуления мы рекомендуем ознакомиться с [функцией проверки обнуления](https://www.assemblyscript.org/basics.html#nullability-checks), которая сделает код чище 🙂 Также мы добавили еще несколько статических методов в некоторые типы, чтобы облегчить кастинг: @@ -249,7 +249,7 @@ For the nullability case we recommend taking a look at the [nullability check fe ### Проверка нулевого значения с доступом к свойству -To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: +Чтобы использовать [функцию проверки на обнуляемость](https://www.assemblyscript.org/basics.html#nullability-checks), Вы можете использовать либо операторы `if`, либо тернарный оператор (`?` и `:`), например: ```typescript let something: string | null = 'data' @@ -267,7 +267,7 @@ if (something) { } ``` -However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: +Однако это работает только тогда, когда Вы выполняете `if` / тернарную операцию для переменной, а не для доступа к свойству, например: ```typescript class Container { @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // не выдает ошибок времени компиляции, как это должно быть ``` -Мы открыли вопрос по этому поводу для компилятора AssemblyScript, но пока, если Вы выполняете подобные операции в своих мэппингах субграфов, Вам следует изменить их так, чтобы перед этим выполнялась проверка на нулевое значение. 
+We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check before it. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Он будет скомпилирован, но сломается во время выполнения. Это происходит из-за того, что значение не было инициализировано, поэтому убедитесь, что Ваш субграф инициализировал свои значения, например так: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized @@ -381,7 +381,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: +Вам необходимо убедиться, что значение `total.amount` инициализировано, потому что, если Вы попытаетесь получить доступ к сумме, как в последней строке, произойдет сбой.
Таким образом, Вы либо сначала инициализируете его: ```typescript let total = Total.load('latest') @@ -394,7 +394,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 +Или Вы можете просто изменить свою схему GraphQL, чтобы не использовать для этого свойства тип, допускающий обнуление, тогда мы инициализируем его как ноль на этапе `codegen` 😉 ```graphql type Total @entity { @@ -425,7 +425,7 @@ export class Something { } ``` -The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: +Компилятор выдаст ошибку, потому что Вам нужно либо добавить инициализатор для свойств, являющихся классами, либо добавить оператор `!`:
Например: ```typescript let arr = new Array(5) // ["", "", "", "", ""] -arr.push('something') // ["", "", "", "", "", "something"] // size 6 :( +arr.push('something') // ["", "", "", "", "", "something"] // размер 6 :( ``` В зависимости от используемых типов, например, допускающих значение NULL, и способа доступа к ним, можно столкнуться с ошибкой времени выполнения, подобной этой: @@ -465,7 +465,7 @@ arr.push('something') // ["", "", "", "", "", "something"] // size 6 :( ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type ``` -To actually push at the beginning you should either, initialize the `Array` with size zero, like this: +Чтобы действительно добавить элемент в начало, следует инициализировать `Array` с нулевым размером, например, так: ```typescript let arr = new Array(0) // [] @@ -483,7 +483,7 @@ arr[0] = 'something' // ["something", "", "", "", ""] ### Схема GraphQL -This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. +Это не прямое изменение AssemblyScript, но Вам, возможно, придется обновить файл `schema.graphql`. Теперь Вы больше не можете определять поля в своих типах, которые являются списками, не допускающими значение NULL. 
Если у Вас такая схема: @@ -498,7 +498,7 @@ type MyEntity @entity { } ``` -You'll have to add an `!` to the member of the List type, like this: +Вам нужно добавить `!` к элементу типа List, например, так: ```graphql type Something @entity { @@ -511,14 +511,14 @@ type MyEntity @entity { } ``` -This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). +Это изменение связано с различиями в обработке возможности обнуления между версиями AssemblyScript и связано с файлом `src/generated/schema.ts` (значение по умолчанию, хотя Вы могли его изменить). ### Прочее -- Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) -- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. 
Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) -- Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- `Map#set` и `Set#add` приведены в соответствие со спецификацией, возвращая `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Массивы больше не наследуются от ArrayBufferView, а теперь являются отдельными ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Классы, инициализируемые из объектных литералов, больше не могут определять конструктор ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Результат бинарной операции `**` теперь является общим целочисленным знаменателем, если оба операнда - целые числа. Ранее результат был числом с плавающей точкой, как при вызове `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- При приведении к `bool` значение `NaN` теперь принудительно преобразуется в `false` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- При сдвиге небольшого целочисленного значения типа `i8`/`u8` или `i16`/`u16` на результат влияют только 3 или 4 младших бита значения RHS, аналогично результату `i32.shl`, на который влияют только 5 младших битов значения RHS. 
Пример: `someI8 << 8` ранее выдавало значение `0`, а теперь выдает `someI8` благодаря маскировке RHS как `8 & 7 = 0` (3 бита) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- Исправлена ошибка в сравнении строк разной длины ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) diff --git a/website/src/pages/ru/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ru/resources/migration-guides/graphql-validations-migration-guide.mdx index b7cb792259b3..7f5b4e042752 100644 --- a/website/src/pages/ru/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ru/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Руководство по переходу на валидацию GraphQL +title: GraphQL Validations Migration Guide --- Вскоре `graph-node` будет поддерживать 100-процентное покрытие [спецификации GraphQL Validation] (https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ title: Руководство по переходу на валидацию Grap Вы можете использовать инструмент миграции CLI, чтобы найти любые проблемы в операциях GraphQL и исправить их. В качестве альтернативы вы можете обновить конечную точку своего клиента GraphQL, чтобы использовать конечную точку `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Проверка запросов на этой конечной точке поможет Вам обнаружить проблемы в Ваших запросах. -> Не все субграфы нужно будет переносить, если Вы используете [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) или [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), они уже гарантируют корректность Ваших запросов. +> Not all Subgraphs will need to be migrated. If you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## CLI-инструмент миграции diff --git a/website/src/pages/ru/resources/roles/curating.mdx b/website/src/pages/ru/resources/roles/curating.mdx index ef319cda705e..61053f5d542b 100644 --- a/website/src/pages/ru/resources/roles/curating.mdx +++ b/website/src/pages/ru/resources/roles/curating.mdx @@ -1,88 +1,88 @@ --- -title: Кураторство +title: Курирование --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. -## What Does Signaling Mean for The Graph Network? +## Что означает сигнализация для The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. 
In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Сигналы кураторов представлены токенами ERC20, называемыми Graph Curation Shares (GCS). Те, кто хочет зарабатывать больше комиссий за запросы, должны направлять свои GRT на субграфы, которые, по их прогнозам, будут генерировать значительный поток комиссий для сети. Кураторы не подвергаются штрафам за некорректное поведение, но существует налог на депозиты Кураторов, чтобы предотвратить принятие решений, которые могут нанести ущерб целостности сети. 
Кроме того, Кураторы будут получать меньше комиссий за запросы, если они занимаются кураторством субграфов низкого качества, так как будет меньше запросов для обработки или Индексаторов, готовых их обрабатывать. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -При подаче сигнала Кураторы могут решить подать сигнал на определенную версию субграфа или использовать автомиграцию. Если они подают сигнал с помощью автомиграции, доли куратора всегда будут обновляться до последней версии, опубликованной разработчиком. Если же они решат подать сигнал на определенную версию, доли всегда будут оставаться на этой конкретной версии. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. 
+If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Индексаторы могут находить субграфы для индексирования на основе сигналов курирования, которые они видят в Graph Explorer (скриншот ниже). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Как подавать Сигнал -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Куратор может выбрать конкретную версию подграфа для сигнализации, или же он может выбрать автоматическую миграцию своего сигнала на самую новую рабочую сборку этого подграфа. Оба варианта являются допустимыми стратегиями и имеют свои плюсы и минусы. +Куратор может выбрать подачу сигнала на конкретную версию субграфа или настроить автоматическую миграцию сигнала на последнюю производственную версию этого субграфа. Оба подхода являются допустимыми и имеют свои плюсы и минусы. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. 
One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. -Автоматическая миграция вашего сигнала на самую новую рабочую сборку может быть ценной, чтобы гарантировать непрерывное начисление комиссий за запросы. Каждый раз, когда вы осуществляете курирование, взимается комиссия в размере 1%. Вы также заплатите комиссию в размере 0,5% при каждой миграции. Разработчикам подграфов не рекомендуется часто публиковать новые версии - они должны заплатить комиссию на курирование в размере 0,5% на все автоматически мигрированные доли курации. +Автоматическая миграция Вашего сигнала на новейшую производственную версию может быть полезной, чтобы гарантировать непрерывное начисление комиссий за запросы. Каждый раз, когда Вы осуществляете курирование, взимается комиссия в размере 1%. Также при каждой миграции взимается налог на курирование в размере 0,5%. Разработчикам субграфов не рекомендуется часто публиковать новые версии, так как они обязаны оплачивать комиссию в размере 0,5% за все автоматически перенесённые кураторские доли. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. -## Withdrawing your GRT +## Вывод Вашего GRT -Curators have the option to withdraw their signaled GRT at any time. +Кураторы имеют возможность в любой момент отозвать свои заявленные GRT. 
-Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). +В отличие от процесса делегирования, если Вы решите отозвать заявленный Вами GRT, Вам не придется ждать периода размораживания и Вы получите всю сумму (за вычетом 1% налога на курирование). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Как только куратор отзовет свой сигнал, индексаторы могут продолжить индексирование субграфа, даже если в данный момент нет активного сигнала GRT. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Риски -1. Рынок запросов в The Graph по своей сути молод, и существует риск того, что ваш %APY может оказаться ниже, чем вы ожидаете, из-за зарождающейся динамики рынка. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Подграф может выйти из строя из-за ошибки. За неудавшийся подграф не начисляется плата за запрос. 
В результате вам придется ждать, пока разработчик исправит ошибку и выложит новую версию. - - Если вы подписаны на новейшую версию подграфа, ваши общие ресурсы автоматически перейдут на эту новую версию. При этом будет взиматься кураторская комиссия в размере 0,5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +1. Рынок запросов в The Graph по своей сути молод, и существует риск того, что Ваш %APY может оказаться ниже, чем Вы ожидаете, из-за зарождающейся динамики рынка. +2. Плата за курирование — когда куратор подаёт сигнал GRT на субграф, он платит налог на курирование в размере 1%. Этот сбор сжигается. +3. (Только для Ethereum) Когда кураторы сжигают свои доли для вывода GRT, оценочная стоимость оставшихся долей в GRT уменьшается. Учтите, что в некоторых случаях кураторы могут решить сжечь все свои доли **одновременно**. Такая ситуация может возникнуть, если разработчик dApp перестанет обновлять и улучшать свой субграф или если субграф выйдет из строя. В результате оставшиеся кураторы могут вывести лишь часть своего первоначального GRT. Если Вы ищете роль в сети с меньшим уровнем риска, обратите внимание на [Делегаторов](/resources/roles/delegating/delegating/). +4. Субграф может выйти из строя из-за ошибки. Неисправный субграф не генерирует комиссии за запросы. В таком случае Вам придется ждать, пока разработчик исправит ошибку и развернет новую версию. + - Если Вы подписаны на самую новую версию субграфа, Ваши доли будут автоматически мигрировать на эту новую версию. При этом взимается 0,5% налог на кураторство. + - Если Вы подали сигнал на определенную версию субграфа и она вышла из строя, Вам потребуется вручную сжечь свои кураторские доли. Затем Вы сможете подать сигнал на новую версию субграфа, при этом будет взиматься налог на кураторство в размере 1%.
## Часто задаваемые вопросы по кураторству -### 1. Какой % от оплаты за запрос получают кураторы? +### 1. Какой % от оплаты за запрос получают Кураторы? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +Подавая сигнал на субграф, Вы получаете долю от всех комиссий за запросы, которые генерирует субграф. 10% от всех сборов за запросы переходят Кураторам пропорционально их доле курирования. Эти 10% подлежат регулированию через механизм управления. -### 2. Как определить, какие подграфы являются высококачественными, чтобы подавать на них сигналы? +### 2. Как определить, какие субграфы являются качественными для подачи сигнала? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable.
As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Какова стоимость обновления подграфа? +### 3. Какова стоимость обновления субграфа? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +При переносе Вашей кураторской доли в новую версию субграфа взимается курационный налог в размере 1%. Кураторы могут подписаться на самую последнюю версию субграфа. Когда кураторская доля автоматически переносится в новую версию, Кураторы также платят половину кураторского налога, т. е. 0,5%, потому что обновление субграфов — это внутрисетевое действие, требующее затрат газа. -### 4. Как часто я могу обновлять свой подграф? +### 4. How often can I update my Subgraph? -Рекомендуется не обновлять свои подграфы слишком часто. См. выше для более подробной информации. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Могу ли я продать свои кураторские доли?
-Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). +Кураторские доли нельзя «купить» или «продать», как другие токены ERC20, с которыми Вы, возможно, знакомы. Их можно только отчеканить (создать) или сжечь (уничтожить). -As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +Будучи Куратором в сети Arbitrum, Вы гарантированно вернете первоначально внесенный Вами GRT (за вычетом налога). -### 6. Am I eligible for a curation grant? +### 6. Имею ли я право на получение гранта на кураторство? -Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. +Гранты на кураторство определяются индивидуально в каждом конкретном случае. Если Вам нужна помощь с кураторством, отправьте запрос на support@thegraph.zendesk.com. Вы все еще в замешательстве? Ознакомьтесь с нашим видеоруководством по кураторству: diff --git a/website/src/pages/ru/resources/roles/delegating/delegating.mdx b/website/src/pages/ru/resources/roles/delegating/delegating.mdx index a0f6b73d1c06..c6e6f6eb8b33 100644 --- a/website/src/pages/ru/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/ru/resources/roles/delegating/delegating.mdx @@ -2,54 +2,54 @@ title: Делегирование --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +Чтобы приступить к делегированию прямо сейчас, ознакомьтесь с разделом [делегирование в The Graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). ## Обзор -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +Делегаторы зарабатывают GRT, делегируя GRT Индексаторам, что повышает безопасность и функциональность сети.
-## Benefits of Delegating +## Преимущества делегирования -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers. +- Усиление безопасности и масштабируемости сети за счет поддержки Индексаторов. +- Получение части вознаграждений, генерируемых Индексаторами. -## How Does Delegation Work? +## Как работает делегирование? -Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. +Делегаторы получают вознаграждения GRT от Индексатора(ов), которому(ым) они делегируют свои GRT. -An Indexer's ability to process queries and earn rewards depends on three key factors: +Способность Индексатора обрабатывать запросы и получать вознаграждения зависит от трех ключевых факторов: -1. The Indexer's Self-Stake (GRT staked by the Indexer). -2. The total GRT delegated to them by Delegators. -3. The price the Indexer sets for queries. +1. Собственной ставки Индексатора (GRT застейканные Индексатором). +2. Общей суммы GRT, делегированной им Делегаторами. +3. Цены, которую Индексатор устанавливает за запросы. -The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer. +Чем больше GRT застейкано и делегировано Индексатору, тем больше запросов он сможет обработать, что приведет к более высоким потенциальным вознаграждениям как для Делегатора, так и для Индексатора. -### What is Delegation Capacity? +### Что такое объем делегирования? -Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. +Под объемом делегирования понимается максимальная сумма GRT, которую Индексатор может принять от Делегаторов, исходя из собственной доли Индексатора. -The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. 
+The Graph Network включает коэффициент делегирования 16, что означает, что Индексатор может принять делегированные GRT, в 16 раз превышающие его собственный стейк. -For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +Например, если Индексатор имеет собственную долю в размере 1 млн GRT, его объем делегирования составляет 16 млн. -### Why Does Delegation Capacity Matter? +### Почему объем делегирования имеет значение? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +Если Индексатор превышает свой объем делегирования, вознаграждения всех Делегаторов размываются, поскольку избыточный делегированный GRT не может быть эффективно использован в рамках протокола. -This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. +Поэтому Делегаторам крайне важно оценить текущий объем делегирования Индексатора, прежде чем его выбирать. -Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +Индексаторы могут увеличить свой объем делегирования, увеличив свой собственный стейк, тем самым повысив лимит делегированных токенов. -## Delegation on The Graph +## Делегирование на The Graph -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> Пожалуйста, обратите внимание на то, что это руководство не охватывает такие шаги, как настройка MetaMask. Сообщество Ethereum предоставляет [исчерпывающий ресурс по кошелькам](https://ethereum.org/en/wallets/). 
-There are two sections in this guide: +Данное руководство состоит из двух разделов: - Риски, связанные с делегацией в сети The Graph - Как рассчитать примерный доход @@ -58,7 +58,7 @@ There are two sections in this guide: Ниже указаны основные риски Делегатора. -### The Delegation Tax +### Комиссия за делегирование Делегаторы не могут быть наказаны за некорректное поведение, но они уплачивают комиссию на делегацию, которая должна стимулировать обдуманный выбор Индексатора для делегации. @@ -68,19 +68,19 @@ There are two sections in this guide: - В целях безопасности Вам следует рассчитать потенциальную прибыль при делегировании Индексатору. Например, Вы можете подсчитать, сколько дней пройдет, прежде чем Вы вернете налог в размере 0,5% за своё делегирование. -### The Undelegation Period +### Период отмены делегирования -When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. +Когда Делегатор решает отменить делегирование, на его токены распространяется 28-дневный период отмены делегирования. -This means they cannot transfer their tokens or earn any rewards for 28 days. +Это означает, что они не смогут переводить свои токены или получать какие-либо вознаграждения в течение 28 дней. -After the undelegation period, GRT will return to your crypto wallet. +По истечении периода отмены делегирования GRT вернутся на Ваш криптокошелек. ### Почему это важно? -If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. +Если Вы выберете Индексатора, которому нельзя доверять или который плохо выполняет свою работу, Вам захочется отозвать делегирование. Это приведёт к тому, что Вы потеряете возможности получения наград. -As a result, it’s recommended that you choose an Indexer wisely. +Поэтому рекомендуется тщательно выбирать Индексатора. ![Delegation unbonding.
Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) @@ -96,25 +96,25 @@ As a result, it’s recommended that you choose an Indexer wisely. - **Снижение комиссии за запросы** — это то же самое, что и снижение вознаграждения за индексирование, но она применяется к доходам от комиссий за запросы, которые собирает Индексатор. -- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. +- Настоятельно рекомендуется посетить [Discord The Graph](https://discord.gg/graphprotocol), чтобы узнать, какие Индексаторы имеют лучшую социальную и техническую репутацию. -- Many Indexers are active in Discord and will be happy to answer your questions. +- Многие Индексаторы активно участвуют в Discord и будут рады ответить на Ваши вопросы. ## Расчет ожидаемой доходности Делегаторов -> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +> Рассчитайте свою ROI (рентабельность инвестиций) от делегирования [здесь](https://thegraph.com/explorer/delegate?chain=arbitrum-one). -A Delegator must consider a variety of factors to determine a return: +Делегатор должен учитывать ряд факторов для определения доходности: -An Indexer's ability to use the delegated GRT available to them impacts their rewards. +Способность Индексатора использовать доступные ему делегированные GRT влияет на его вознаграждения. -If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. +Если Индексатор не распределяет все имеющиеся в его распоряжении GRT, он может упустить возможность максимизировать потенциальный доход как для себя, так и для своих Делегаторов. -Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window. 
However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. +Индексаторы могут закрыть распределение и получить вознаграждение в любое время в течение периода от 1 до 28 дней. Однако, если вознаграждения не будут собраны своевременно, общая сумма вознаграждений может оказаться ниже, даже если определенный процент вознаграждений останется незабранным. ### Учёт части комиссии за запросы и части комиссии за индексирование -You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. +Вам следует выбрать Индексатора, который открыто устанавливает размер комиссии за запрос и снижение платы за индексирование. Формула следующая: diff --git a/website/src/pages/ru/resources/roles/delegating/undelegating.mdx b/website/src/pages/ru/resources/roles/delegating/undelegating.mdx index d9422b997a77..c9f6de94cca2 100644 --- a/website/src/pages/ru/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/ru/resources/roles/delegating/undelegating.mdx @@ -1,73 +1,73 @@ --- -title: Undelegating +title: Отмена делегирования --- -Learn how to withdraw your delegated tokens through [Graph Explorer](https://thegraph.com/explorer) or [Arbiscan](https://arbiscan.io/). +Узнайте, как вывести свои делегированные токены через [Graph Explorer](https://thegraph.com/explorer) или [Arbiscan](https://arbiscan.io/). -> To avoid this in the future, it's recommended that you select an Indexer wisely. To learn how to select and Indexer, check out the Delegate section in Graph Explorer. +> Чтобы избежать этого в будущем, рекомендуется тщательно выбирать Индексатор. Чтобы узнать, как выбрать Индексатор, ознакомьтесь с разделом "Делегировать" в Graph Explorer. -## How to Withdraw Using Graph Explorer +## Как вывести с помощью Graph Explorer ### Пошаговое руководство -1. Visit [Graph Explorer](https://thegraph.com/explorer). 
Please make sure you're on Explorer and **not** Subgraph Studio. +1. Посетите [Graph Explorer](https://thegraph.com/explorer). Убедитесь, что вы находитесь в Explorer, а **не** в Subgraph Studio. -2. Click on your profile. You can find it on the top right corner of the page. +2. Нажмите на свой профиль. Он находится в верхнем правом углу страницы. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. + - Убедитесь, что ваш кошелек подключен. Если он не подключен, вместо этого вы увидите кнопку "подключить". -3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. +3. Когда вы окажетесь в своем профиле, нажмите на вкладку "Делегирование". В этой вкладке вы сможете увидеть список Индексаторов, которым вы делегировали свои токены. -4. Click on the Indexer from which you wish to withdraw your tokens. +4. Нажмите на Индексатора, из которого вы хотите вывести свои токены. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. + - Убедитесь, что вы записали конкретного Индексатора, так как вам нужно будет найти его снова для вывода. -5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: +5. Выберите опцию "Отменить делегирование", кликнув на три точки рядом с Индексатором с правой стороны, как показано на изображении ниже: - ![Undelegate button](/img/undelegate-button.png) + ![Кнопка отменить делегирование](/img/undelegate-button.png) -6. After approximately [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 days), return to the Delegate section and locate the specific Indexer you undelegated from. +6. После примерно [28 эпох](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 дней) вернитесь в раздел "Делегирование" и найдите конкретного Индексатора, делегацию которого вы отменили. 
-7. Once you find the Indexer, click on the three dots next to them and proceed to withdraw all your tokens. +7. Как только вы найдете Индексатора, кликните на три точки рядом с ним и продолжите вывод всех ваших токенов. -## How to Withdraw Using Arbiscan +## Как вывести средства с использованием Arbiscan -> This process is primarily useful if the UI in Graph Explorer experiences issues. +> Этот процесс в основном полезен, если пользовательский интерфейс в Graph Explorer испытывает проблемы. ### Пошаговое руководство -1. Find your delegation transaction on Arbiscan. +1. Найдите вашу транзакцию делегирования на Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) + - Вот [пример транзакции на Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) -2. Navigate to "Transaction Action" where you can find the staking extension contract: +2. Перейдите в раздел "Действие по транзакции", где вы можете найти контракт расширения стейкинга: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) + - [Это контракт расширения стейкинга для приведенного выше примера](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) -3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) +3. Затем нажмите на «Контракт». ![Вкладка контракта на Arbiscan, между «NFT Transfers» и «Events»](/img/arbiscan-contract.png) -4. Scroll to the bottom and copy the Contract ABI. There should be a small button next to it that allows you to copy everything. +4. Прокрутите вниз и скопируйте Contract ABI. Рядом с ним должна быть небольшая кнопка, которая позволяет скопировать всё. -5. Click on your profile button in the top right corner of the page. If you haven't created an account yet, please do so. +5.
Нажмите на кнопку своего профиля в верхнем правом углу страницы. Если вы ещё не создали аккаунт, пожалуйста, сделайте это. -6. Once you're in your profile, click on "Custom ABI”. +6. После того как вы окажетесь в своём профиле, нажмите на "Custom ABI". -7. Paste the custom ABI you copied from the staking extension contract, and add the custom ABI for the address: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**sample address**) +7. Вставьте пользовательский ABI, который вы скопировали из контракта расширения для стейкинга, и добавьте пользовательский ABI для адреса: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**пример адреса**) -8. Go back to the [staking extension contract](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Now, call the `unstake` function in the [Write as Proxy tab](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), which has been added thanks to the custom ABI, with the number of tokens that you delegated. +8. Перейдите обратно к [контракту расширения для стейкинга](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Теперь вызовите функцию `unstake` на вкладке [Write as Proxy](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), которая была добавлена благодаря пользовательскому ABI, с количеством токенов, которые вы делегировали. -9. If you don't know how many tokens you delegated, you can call `getDelegation` on the Read Custom tab. You will need to paste your address (delegator address) and the address of the Indexer that you delegated to, as shown in the following screenshot: +9. Если вы не знаете, сколько токенов вы делегировали, вы можете вызвать `getDelegation` на вкладке Read Custom. 
Вам нужно будет вставить свой адрес (адрес делегатора) и адрес Индексатора, которому вы делегировали, как показано на следующем скриншоте: - ![Both of the addresses needed](/img/get-delegate.png) + ![Оба адреса, которые нужны](/img/get-delegate.png) - - This will return three numbers. The first number is the amount you can unstake. + - Это вернет три числа. Первое число — это количество токенов, которые вы можете вывести. -10. After you have called `unstake`, you can withdraw after approximately 28 epochs (28 days) by calling the `withdraw` function. +10. После того как вы вызовете `unstake`, вы сможете вывести токены примерно через 28 эпох (28 дней), вызвав функцию `withdraw`. -11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: +11. Вы можете увидеть, сколько токенов будет доступно для вывода, вызвав функцию `getWithdrawableDelegatedTokens` в разделе Read Custom и передав ей ваш делегированный кортеж. См. скриншот ниже: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Вызовите `getWithdrawableDelegatedTokens`, чтобы увидеть количество токенов, которые можно вывести](/img/withdraw-available.png) ## Дополнительные ресурсы -To delegate successfully, review the [delegating documentation](/resources/roles/delegating/delegating/) and check out the delegate section in Graph Explorer. +Чтобы успешно делегировать, ознакомьтесь с [документацией по делегированию](/resources/roles/delegating/delegating/) и проверьте раздел делегирования в Graph Explorer. 
diff --git a/website/src/pages/ru/resources/subgraph-studio-faq.mdx b/website/src/pages/ru/resources/subgraph-studio-faq.mdx index 4e0eee2dba2d..5c63bc3b3b6d 100644 --- a/website/src/pages/ru/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ru/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Часто задаваемые вопросы о Subgraph Studio ## 1. Что такое Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Как создать ключ API? @@ -12,20 +12,20 @@ title: Часто задаваемые вопросы о Subgraph Studio ## 3. Могу ли я создать несколько ключей API? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +Да! Вы можете создать несколько ключей API для использования в разных проектах. Перейдите по этой [ссылке](https://thegraph.com/studio/apikeys/). ## 4. Как мне настроить ограничения домена для ключа API? После создания ключа API в разделе «Безопасность» Вы можете определить домены, которые могут запрашивать определенный ключ API. -## 5. Могу ли я передать свой субграф другому владельцу? +## 5. Can I transfer my Subgraph to another owner? -Да, субграфы, которые были опубликованы в Arbitrum One, могут быть перенесены в новый кошелек или на кошелек с мультиподписью. Вы можете сделать это, щелкнув три точки рядом с кнопкой «Опубликовать» на странице сведений о субграфе и выбрав «Передать право собственности». +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Обратите внимание, что Вы больше не сможете просматривать или редактировать субграф в Studio после его переноса. 
+Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Как мне найти URL-адреса запросов для субграфов, если я не являюсь разработчиком субграфа, который хочу использовать? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Помните, что Вы можете создать ключ API и запрашивать любой субграф, опубликованный в сети, даже если сами создаете субграф. Эти запросы через новый ключ API являются платными, как и любые другие в сети. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key, are paid queries as any other on the network. diff --git a/website/src/pages/ru/resources/tokenomics.mdx b/website/src/pages/ru/resources/tokenomics.mdx index e4ab88d45844..2f98043db48f 100644 --- a/website/src/pages/ru/resources/tokenomics.mdx +++ b/website/src/pages/ru/resources/tokenomics.mdx @@ -1,103 +1,103 @@ --- title: Токеномика сети The Graph sidebarTitle: Tokenomics -description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. +description: Сеть The Graph стимулируется мощной токеномикой. 
Вот как работает GRT, нативный токен The Graph, предназначенный для предоставления рабочих утилит. --- ## Обзор -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Специфические особенности -The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. +Модель The Graph похожа на модель B2B2C, но она управляется децентрализованной сетью, где участники сотрудничают, чтобы предоставлять данные конечным пользователям в обмен на вознаграждения GRT. GRT – это утилитарный токен The Graph. Он координирует и стимулирует взаимодействие между поставщиками данных и потребителями внутри сети. -The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/subgraphs/billing/). +The Graph играет важную роль в обеспечении большей доступности данных блокчейна и поддерживает рынок для их обмена. 
Чтобы узнать больше о модели The Graph «плати за то, что тебе нужно», ознакомьтесь с её [бесплатными планами и планами развития](/subgraphs/billing/). -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +- Адрес токена GRT в основной сети: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +- Адрес токена GRT на Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## The Roles of Network Participants +## Роли участников сети -There are four primary network participants: +Есть четыре основных участника сети: -1. Delegators - Delegate GRT to Indexers & secure the network +1. Делегаторы - Делегируют токены GRT Индексаторам и защищают сеть -2. Кураторы - Ищут лучшие субграфы для Индексаторов +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Индексаторы - Магистральный канал передачи данных блокчейна -Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). +Рыбаки и Арбитры также вносят свой вклад в успех сети, поддерживая работу других основных участников. Для получения дополнительной информации о сетевых ролях [прочитайте эту статью](https://thegraph.com/blog/the-graph-grt-token-economics/).
-![Tokenomics diagram](/img/updated-tokenomics-image.png) +![Диаграмма токеномики](/img/updated-tokenomics-image.png) -## Delegators (Passively earn GRT) +## Делегаторы (Пассивный заработок GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. +Например, если бы Делегатор делегировал 15 тыс. GRT Индексатору, предлагающему 10%, Делегатор получал бы вознаграждение в размере ~ 1,500 GRT в год. -There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. +Существует комиссия на делегирование в размере 0,5%, которая взимается всякий раз, когда Делегатор делегирует GRT в сети. Если Делегатор решает отозвать свои делегированные GRT, он должен подождать 28 эпох, которые занимают период отмены делегирования. Каждая эпоха состоит из 6646 блоков, что означает, что 28 эпох в конечном итоге составляют приблизительно 26 дней. 
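The delegation figures quoted above (the 15k GRT / 10% reward example, the 0.5% delegation tax, and the 28-epoch unbonding window of 6,646-block epochs) can be checked with a short sketch. The ~12-second block time is an assumption introduced here to recover the "approximately 26 days" figure; all other numbers come from the text.

```python
# Illustrative check of the delegation numbers quoted above.
DELEGATION_TAX = 0.005     # 0.5%, burned when GRT is delegated
BLOCKS_PER_EPOCH = 6_646   # stated in the text
UNBONDING_EPOCHS = 28
SECONDS_PER_BLOCK = 12     # assumption: ~12 s average block time

# Example from the text: 15k GRT delegated at a 10% effective rate.
annual_reward = 15_000 * 0.10                    # ~1,500 GRT per year

# Tax burned at delegation time.
burned_on_delegation = 15_000 * DELEGATION_TAX   # 75 GRT

# 28 epochs of 6,646 blocks each is roughly 26 days.
unbonding_days = UNBONDING_EPOCHS * BLOCKS_PER_EPOCH * SECONDS_PER_BLOCK / 86_400
print(round(unbonding_days, 1))  # ≈ 25.8
```

Under this assumed block time, 28 × 6,646 × 12 s comes out to about 25.8 days, consistent with the ~26-day figure in the text.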
-If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. +Если Вы это читаете, значит, Вы можете стать Делегатором прямо сейчас, перейдя на [страницу участников сети](https://thegraph.com/explorer/participants/indexers) и делегировав GRT выбранному Индексатору. -## Curators (Earn GRT) +## Кураторы (Заработок GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. -## Developers +## Разработчики -Developers build and query subgraphs to retrieve blockchain data. 
Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Создание субграфа +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Запрос к существующему субграфу +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. -Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). 
Query fees are distributed to network participants based on their contributions to the protocol. +Субграфы [запрашиваются с помощью GraphQL](/subgraphs/querying/introduction/), а плата за запрос производится с помощью GRT в [Subgraph Studio](https://thegraph.com/studio/). Плата за запрос распределяется между участниками сети на основе их вклада в протокол. -1% of the query fees paid to the network are burned. +1% от комиссии за запрос, оплаченной в сети, сжигается. -## Indexers (Earn GRT) +## Индексаторы (Заработок GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. -Indexers can earn GRT rewards in two ways: +Индексаторы могут зарабатывать GRT двумя способами: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. 
**Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. -In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. +Для запуска ноды индексирования Индексаторы должны застейкать в сети не менее 100 000 GRT в качестве собственного стейка. Индексаторы заинтересованы в том, чтобы делать собственный стейк GRT пропорционально количеству обслуживаемых ими запросов. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. 
-The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. +Сумма вознаграждений, которые получает Индексатор, может варьироваться в зависимости от размера его собственного стейка, принятых делегированных средств, качества обслуживания и многих других факторов. -## Token Supply: Burning & Issuance +## Объем токенов: Сжигание и Эмиссия -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. 
-![Total burned GRT](/img/total-burned-grt.jpeg) +![Общее количество сожжённых GRT](/img/total-burned-grt.jpeg) -In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. +В дополнение к этим регулярным процессам сжигания токенов, токен GRT также имеет механизм слэшинга (наказания) за злонамеренное или безответственное поведение Индексаторов. Если Индексатор подвергается слэшингу, 50% его вознаграждения за индексирование за эпоху сжигается (в то время как другая половина достается Рыбаку), а его собственная сумма стейка уменьшается на 2,5%, причем половина этой суммы сгорает. Это создаёт мощный стимул для Индексаторов действовать в интересах сети, обеспечивая её безопасность и стабильность. -## Improving the Protocol +## Улучшение протокола -The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). +The Graph Network постоянно развивается, и в экономический дизайн протокола регулярно вносятся улучшения, чтобы обеспечить наилучший опыт для всех участников сети. The Graph Council следит за изменениями протокола, и участники сообщества активно привлекаются к этому процессу. Примите участие в улучшении протокола на [форуме The Graph](https://forum.thegraph.com/). 
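The issuance, burn, and delegation-cap figures quoted in this tokenomics page lend themselves to a quick back-of-the-envelope check. The sketch below is purely illustrative: the helper names and the flat annual rates are simplifying assumptions for this example, while the protocol itself computes these amounts on-chain, per epoch.

```python
# Back-of-the-envelope sketch of the figures quoted above.
# Helper names and flat annual rates are illustrative assumptions;
# the protocol computes the real amounts on-chain, per epoch.

INITIAL_SUPPLY_GRT = 10_000_000_000  # 10 billion GRT initial supply
ISSUANCE_RATE = 0.03                 # ~3% target annual issuance
BURN_RATE = 0.01                     # ~1% of supply burned annually (approx.)

def net_supply_after_one_year(supply: float) -> float:
    """Approximate supply after one year of issuance minus burning (~2% net growth)."""
    return supply + supply * ISSUANCE_RATE - supply * BURN_RATE

def delegation_after_tax(amount_grt: float, tax: float = 0.005) -> float:
    """GRT actually delegated once the 0.5% delegation tax is burned."""
    return amount_grt * (1 - tax)

def usable_stake(self_stake_grt: float, delegated_grt: float, cap: int = 16) -> float:
    """Stake an Indexer can allocate: delegation beyond 16x self-stake sits idle."""
    return self_stake_grt + min(delegated_grt, cap * self_stake_grt)

print(net_supply_after_one_year(INITIAL_SUPPLY_GRT))  # ~10.2 billion GRT
print(delegation_after_tax(10_000))                   # ~9,950 GRT reach the Indexer
print(usable_stake(100_000, 2_000_000))               # over-delegated: capped
```

For instance, an Indexer at the 100,000 GRT minimum self-stake who receives 2,000,000 GRT of delegation is over-delegated: only 1,600,000 GRT of that delegation counts toward allocations until the self-stake is raised.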
diff --git a/website/src/pages/ru/sps/introduction.mdx b/website/src/pages/ru/sps/introduction.mdx index 13b65e8d36fe..d4c5118ad8f6 100644 --- a/website/src/pages/ru/sps/introduction.mdx +++ b/website/src/pages/ru/sps/introduction.mdx @@ -1,30 +1,31 @@ --- -title: Introduction to Substreams-Powered Subgraphs -sidebarTitle: Introduction +title: Введение в субграфы, работающие на основе Субпотоков +sidebarTitle: Введение --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Обзор -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Используя пакет Субпотоков (`.spkg`) в качестве источника данных, Ваш субграф получает доступ к потоку предварительно индексированных данных блокчейна. Это позволяет более эффективно и масштабируемо обрабатывать данные, особенно в крупных или сложных блокчейн-сетях. ### Специфические особенности -There are two methods of enabling this technology: +Существует два способа активации этой технологии: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Использование [триггеров](/sps/triggers/) Субпотоков**: Получайте данные из любого модуля Субпотоков, импортируя Protobuf-модель через обработчик субграфа, и переносите всю логику в субграф. Этот метод создает объекты субграфа непосредственно внутри субграфа. -2. 
**Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Использование [Изменений Объектов](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: Записывая большую часть логики в Субпотоки, Вы можете напрямую передавать вывод модуля в [graph-node](/indexing/tooling/graph-node/). В graph-node можно использовать данные Субпотоков для создания объектов субграфа. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
### Дополнительные ресурсы -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +Перейдите по следующим ссылкам, чтобы ознакомиться с руководствами по использованию инструментов для генерации кода и быстро создать свой первый проект от начала до конца: - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ru/sps/sps-faq.mdx b/website/src/pages/ru/sps/sps-faq.mdx index fc2d9862921f..45edab5a3d00 100644 --- a/website/src/pages/ru/sps/sps-faq.mdx +++ b/website/src/pages/ru/sps/sps-faq.mdx @@ -1,6 +1,6 @@ --- -title: Substreams-Powered Subgraphs FAQ -sidebarTitle: FAQ +title: Часто задаваемые вопросы о Субграфах, работающих на основе Субпотоков +sidebarTitle: Часто задаваемые вопросы --- ## Что такое субпотоки? @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## Что такое субграфы, работающие на основе Субпотоков? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Субграфы, работающие на основе Субпотоков](/sps/introduction/) объединяют мощь Субпотоков с возможностью запросов субграфов. При публикации субграфа, работающего на основе Субпотоков, данные, полученные в результате преобразований Субпотоков, могут [генерировать изменения объектов](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs), которые совместимы с объектами субграфа. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## Чем субграфы, работающие на основе Субпотоков, отличаются от субграфов? +## How are Substreams-powered Subgraphs different from Subgraphs? -Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. +Субграфы состоят из источников данных, которые указывают он-чейн события и то, как эти события должны быть преобразованы с помощью обработчиков, написанных на AssemblyScript. Эти события обрабатываются последовательно, в зависимости от того, в каком порядке они происходят он-чейн.
-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +В отличие от этого, субграфы, работающие на основе Субпотоков, имеют один источник данных, который ссылается на пакет Субпотоков, обрабатываемый Graph Node. Субпотоки имеют доступ к дополнительным детализированным данным из он-чейна в отличие от традиционных субграфов, а также могут массово использовать параллельную обработку, что значительно ускоряет время обработки. -## Каковы преимущества использования субграфов, работающих на основе Субпотоков? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
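To make "the queryability of Subgraphs" concrete, below is a sketch of the kind of GraphQL query a Substreams-powered Subgraph serves once deployed. The `myTransfers` collection and its fields mirror the illustrative `MyTransfer` entity from the Solana tutorial elsewhere in these docs; they are an assumed schema for this example, not a fixed API.

```graphql
{
  myTransfers(first: 5, orderBy: amount, orderDirection: desc) {
    id
    amount
    source
    designation
  }
}
```

Regardless of whether the entities were produced by AssemblyScript mappings or by a Substreams `.spkg`, the query surface is the same.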
## В чем преимущества Субпотоков? @@ -35,7 +35,7 @@ Substreams-powered subgraphs combine all the benefits of Substreams with the que - Высокопроизводительное индексирование: индексирование на порядки быстрее благодаря крупномасштабным кластерам параллельных операций (как пример, BigQuery). -- Возможность загружать куда угодно: Загружайте Ваши данные в любое удобное для Вас место: PostgreSQL, MongoDB, Kafka, субграфы, плоские файлы, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Программируемость: Используйте код для настройки извлечения, выполнения агрегирования во время преобразования и моделирования выходных данных для нескольких приемников. @@ -63,34 +63,34 @@ Firehose, разработанный [StreamingFast](https://www.streamingfast.i - Использует плоские файлы: Данные блокчейна извлекаются в плоские файлы — самый дешевый и наиболее оптимизированный доступный вычислительный ресурс. -## Где разработчики могут получить доступ к дополнительной информации о субграфах работающих на основе Субпотоков и о Субпотоках? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? -The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. +Из [документации по Субпотокам](/substreams/introduction/) Вы узнаете, как создавать модули Субпотоков. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. [Новейший инструмент Substreams Codegen](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) позволит Вам загрузить проект Substreams без использования какого-либо кода. ## Какова роль модулей Rust в Субпотоках? 
-Модули Rust - это эквивалент мапперов AssemblyScript в субграфах. Они компилируются в WASM аналогичным образом, но модель программирования допускает параллельное выполнение. Они определяют, какие преобразования и агрегации необходимо применить к необработанным данным блокчейна. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. -See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. +Подробную информацию см. в [документации по модулям](https://docs.substreams.dev/reference-material/substreams-components/modules#modules). ## Что делает Субпотоки компонуемыми? При использовании Субпотоков компоновка происходит на уровне преобразования, что позволяет повторно использовать кэшированные модули. -Например, Алиса может создать ценовой модуль DEX, Боб может использовать его для создания агрегатора объемов для некоторых интересующих его токенов, а Лиза может объединить четыре отдельных ценовых модуля DEX, чтобы создать ценовой оракул. Один запрос Субпотоков упакует все эти отдельные модули, свяжет их вместе, чтобы предложить гораздо более уточненный поток данных. Затем этот поток может быть использован для заполнения субграфа и запрашиваться потребителями. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## Как Вы можете создать и развернуть субграф, работающий на основе Субпотоков?
-After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +После [определения](/sps/introduction/) субграфа, работающего на основе Субпотоков, Вы можете использовать Graph CLI для его развертывания в [Subgraph Studio](https://thegraph.com/studio/). -## Где я могу найти примеры Субпотоков и субграфов, работающих на основе Субпотоков? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -Вы можете посетить [этот репозиторий на Github](https://github.com/pinax-network/awesome-substreams), чтобы найти примеры Субпотоков и субграфов, работающих на основе Субпотоков. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## Что означают Субпотоки и субграфы, работающие на основе Субпотоков, для сети The Graph? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? Интеграция обещает множество преимуществ, включая чрезвычайно высокопроизводительную индексацию и большую компонуемость за счет использования модулей сообщества и развития на их основе. diff --git a/website/src/pages/ru/sps/triggers.mdx b/website/src/pages/ru/sps/triggers.mdx index d4f8ef896db2..3e047577c67a 100644 --- a/website/src/pages/ru/sps/triggers.mdx +++ b/website/src/pages/ru/sps/triggers.mdx @@ -1,18 +1,18 @@ --- -title: Substreams Triggers +title: Триггеры Субпотоков --- Use Custom Triggers and enable the full use of GraphQL. ## Обзор -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
-By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +Следующий код демонстрирует, как определить функцию `handleTransactions` в обработчике субграфа. Эта функция принимает сырые байты Субпотоков в качестве параметра и декодирует их в объект `Transactions`. Для каждой транзакции создается новый объект субграфа. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -34,13 +34,13 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +Вот что Вы видите в файле `mappings.ts`: -1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object -2. Looping over the transactions -3. Create a new subgraph entity for every transaction +1. Байты, содержащие данные Субпотоков, декодируются в сгенерированный объект `Transactions`. Этот объект используется как любой другой объект на AssemblyScript +2. Итерация по транзакциям (процесс поочерёдного прохода по всем транзакциям для их анализа или обработки) +3. Создание нового объекта субграфа для каждой транзакции -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). 
+Чтобы ознакомиться с подробным примером субграфа на основе триггера, [ознакомьтесь с руководством](/sps/tutorial/). ### Дополнительные ресурсы diff --git a/website/src/pages/ru/sps/tutorial.mdx b/website/src/pages/ru/sps/tutorial.mdx index b9e55f8bc89f..977f1803f352 100644 --- a/website/src/pages/ru/sps/tutorial.mdx +++ b/website/src/pages/ru/sps/tutorial.mdx @@ -1,32 +1,32 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: 'Руководство: Настройка Субграфа, работающего на основе Субпотоков в сети Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Начнем For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) -### Prerequisites +### Предварительные требования -Before starting, make sure to: +Прежде чем начать, убедитесь, что: -- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. -- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. +- Завершили изучение [руководства по началу работы](https://github.com/streamingfast/substreams-starter), чтобы настроить свою среду разработки с использованием контейнера для разработки. +- Ознакомлены с The Graph и основными концепциями блокчейна, такими как транзакции и Protobuf. -### Step 1: Initialize Your Project +### Шаг 1: Инициализация Вашего проекта -1. Open your Dev Container and run the following command to initialize your project: +1. Откройте свой контейнер для разработки и выполните следующую команду для инициализации проекта: ```bash substreams init ``` -2. Select the "minimal" project option. +2. Выберите вариант проекта "minimal". -3. 
Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: +3. Замените содержимое сгенерированного файла `substreams.yaml` следующей конфигурацией, которая фильтрует транзакции для аккаунта Orca в идентификаторе программы токенов SPL: ```yaml specVersion: v0.1.0 @@ -34,12 +34,12 @@ package: name: my_project_sol version: v0.1.0 -imports: # Pass your spkg of interest +imports: # Укажите нужный Вам spkg solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg modules: - name: map_spl_transfers - use: solana:map_block # Select corresponding modules available within your spkg + use: solana:map_block # Выберите соответствующие модули, доступные в Вашем spkg initialBlock: 260000082 - name: map_transactions_by_programid @@ -47,20 +47,19 @@ modules: network: solana-mainnet-beta -params: # Modify the param fields to meet your needs - # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA - map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +params: # Измените параметры в соответствии со своими требованиями + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE # Для program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA ``` -### Step 2: Generate the Subgraph Manifest +### Шаг 2: Создание манифеста субграфа -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +После инициализации проекта создайте манифест субграфа, выполнив следующую команду в Dev Container: ```bash substreams codegen subgraph ``` -You will generate a `subgraph.yaml` manifest which imports the Substreams package as a data source: +Вы создадите манифест `subgraph.yaml`, который импортирует пакет Субпотоков в качестве источника данных: ```yaml --- @@ -70,20 +69,20 @@ dataSources: network: solana-mainnet-beta source:
package: - moduleName: map_spl_transfers # Module defined in the substreams.yaml + moduleName: map_spl_transfers # Модуль, определенный в substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers ``` -### Step 3: Define Entities in `schema.graphql` +### Шаг 3: Определите объекты в `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Определите поля, которые хотите сохранить в объектах субграфа, обновив файл `schema.graphql`. -Here is an example: +Пример: ```graphql type MyTransfer @entity { @@ -95,13 +94,13 @@ type MyTransfer @entity { } ``` -This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. +Эта схема определяет объект `MyTransfer` с такими полями, как `id`, `amount`, `source`, `designation` и `signers`. -### Step 4: Handle Substreams Data in `mappings.ts` +### Шаг 4: Обработка данных Субпотоков в `mappings.ts` With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -132,19 +131,19 @@ export function handleTriggers(bytes: Uint8Array): void { } ``` -### Step 5: Generate Protobuf Files +### Шаг 5: Сгенерируйте файлы Protobuf -To generate Protobuf objects in AssemblyScript, run the following command: +Чтобы сгенерировать объекты Protobuf в AssemblyScript, выполните следующую команду: ```bash npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +Эта команда преобразует определения Protobuf в AssemblyScript, позволяя использовать их в обработчике субграфа. ### Заключение -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Поздравляем! Вы успешно настроили субграф на основе триггеров с поддержкой Субпотоков для токена Solana SPL. Следующим шагом Вы можете настроить схему, мэппинги и модули в соответствии со своим конкретным вариантом использования. ### Video Tutorial @@ -152,4 +151,4 @@ Congratulations! You've successfully set up a trigger-based Substreams-powered s ### Дополнительные ресурсы -For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). +Для более продвинутой настройки и оптимизации ознакомьтесь с официальной [документацией по Субпотокам](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/src/pages/ru/subgraphs/_meta-titles.json b/website/src/pages/ru/subgraphs/_meta-titles.json index 3fd405eed29a..935e730c6eb3 100644 --- a/website/src/pages/ru/subgraphs/_meta-titles.json +++ b/website/src/pages/ru/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", + "querying": "Запрос", + "developing": "Разработка", "guides": "How-to Guides", - "best-practices": "Best Practices" + "best-practices": "Лучшие практики" } diff --git a/website/src/pages/ru/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ru/subgraphs/best-practices/avoid-eth-calls.mdx index f44611137483..042a1c001522 100644 --- a/website/src/pages/ru/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Лучшая практика субграфа 4 — увеличение скорости индексирования за счет избегания eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## Краткое содержание -`eth_calls` — это вызовы, которые могут выполняться из субграфа к ноде Ethereum. Эти вызовы требуют значительного количества времени для возврата данных, что замедляет индексирование. По возможности, проектируйте смарт-контракты так, чтобы они отправляли все необходимые Вам данные, чтобы избежать использования `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Почему избегание `eth_calls` является наилучшей практикой -Субграфы оптимизированы для индексирования данных событий, которые исходят из смарт-контрактов. 
Субграф также может индексировать данные из `eth_call`, однако это значительно замедляет процесс индексирования, так как `eth_calls` требуют выполнения внешних вызовов к смарт-контрактам. Скорость реагирования этих вызовов зависит не от субграфа, а от подключения и скорости ответа ноды Ethereum, к которой отправлен запрос. Минимизируя или полностью исключая `eth_calls` в наших субграфах, мы можем значительно повысить скорость индексирования. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### Что из себя представляет eth_call? -`eth_calls` часто необходимы, когда данные, требуемые для субграфа, недоступны через сгенерированные события. Например, рассмотрим ситуацию, когда субграфу нужно определить, являются ли токены ERC20 частью определенного пула, но контракт генерирует только базовое событие `Transfer` и не создает событие, содержащее нужные нам данные: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -Это функционально, однако не идеально, так как замедляет индексирование нашего субграфа. 
+This is functional, however, it is not ideal as it slows down our Subgraph’s indexing. ## Как устранить `eth_calls` @@ -54,7 +54,7 @@ export function handleTransfer(event: Transfer): void { } ``` event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -С этим обновлением субграф может напрямую индексировать необходимые данные без внешних вызовов: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,22 +96,22 @@ calls: Обработчик сам получает результат этого `eth_call`, как и в предыдущем разделе, привязываясь к контракту и выполняя вызов. `graph-node` кеширует результаты объявленных `eth_calls` в памяти, а вызов из обработчика будет извлекать результат из этого кеша в памяти, вместо того чтобы выполнять фактический RPC-вызов. -Примечание: Объявленные `eth_calls` могут быть выполнены только в субграфах с версией спецификации >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Заключение -Вы можете значительно улучшить производительность индексирования, минимизируя или исключая `eth_calls` в своих субграфах. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3.
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ru/subgraphs/best-practices/derivedfrom.mdx index 3e918462a606..da809815ce60 100644 --- a/website/src/pages/ru/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Лучшая практика для субграфов 2 — улучшение индексирования и отклика на запросы с помощью @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## Краткое содержание -Массивы в Вашей схеме могут значительно замедлить работу субграфа, когда их размер превышает тысячи элементов. Если возможно, следует использовать директиву @derivedFrom при работе с массивами, так как она предотвращает образование больших массивов, упрощает обработчики и уменьшает размер отдельных элементов, что значительно улучшает скорость индексирования и производительность запросов. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. 
If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## Как использовать директиву @derivedFrom @@ -15,7 +15,7 @@ sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' comments: [Comment!]! @derivedFrom(field: "post") ``` -@derivedFrom создает эффективные отношения "один ко многим", позволяя объекту динамически ассоциироваться с несколькими связанными объектами на основе поля в связанном объекте. Этот подход исключает необходимость хранения продублированных данных с обеих сторон отношений, что делает субграф более эффективным. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Пример использования @derivedFrom @@ -60,30 +60,30 @@ type Comment @entity { Именно при добавлении директивы `@derivedFrom`, эта схема будет хранить "Comments" только на стороне отношения "Comments", а не на стороне отношения "Post". Массивы хранятся в отдельных строках, что позволяет им значительно расширяться. Это может привести к очень большим объёмам, поскольку их рост не ограничен. -Это не только сделает наш субграф более эффективным, но и откроет три возможности: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. Мы можем запрашивать `Post` и видеть все его комментарии. 2. Мы можем выполнить обратный поиск и запросить любой `Comment`, чтобы увидеть, от какого поста он пришел. -3. 
Мы можем использовать [Загрузчики производных полей](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities), чтобы получить возможность напрямую обращаться и манипулировать данными из виртуальных отношений в наших мэппингах субграфа. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Заключение -Используйте директиву `@derivedFrom` в субграфах для эффективного управления динамически растущими массивами, улучшая эффективность индексирования и извлечения данных. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. Для более подробного объяснения стратегий, которые помогут избежать использования больших массивов, ознакомьтесь с блогом Кевина Джонса: [Лучшие практики разработки субграфов: как избежать больших массивов](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. 
[Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ru/subgraphs/best-practices/grafting-hotfix.mdx index ebb6b49ea9bf..b169115f012c 100644 --- a/website/src/pages/ru/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Лучшая практика субграфов 6 — используйте графтинг для быстрого развертывания исправлений -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## Краткое содержание -Графтинг — это мощная функция в разработке субграфов, которая позволяет создавать и разворачивать новые субграфы, повторно используя индексированные данные из существующих. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Обзор -Эта функция позволяет быстро развертывать исправления для критических ошибок, устраняя необходимость повторного индексирования всего субграфа с нуля. Сохраняя исторические данные, графтинг минимизирует время простоя и обеспечивает непрерывность работы сервисов данных. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. 
By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Преимущества графтинга для оперативных исправлений 1. **Быстрое развертывание** - - **Минимизация времени простоя**: когда субграф сталкивается с критической ошибкой и перестает индексировать данные, графтинг позволяет немедленно развернуть исправление без необходимости ждать повторного индексирования. - - **Немедленное восстановление**: новый субграф продолжается с последнего индексированного блока, обеспечивая бесперебойную работу служб передачи данных. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Сохранение данных** - - **Повторное использование исторических данных**: графтинг копирует существующие данные из базового субграфа, что позволяет сохранить важные исторические записи. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Консистентность**: поддерживает непрерывность данных, что имеет решающее значение для приложений, полагающихся на согласованные исторические данные. 3. **Эффективность** @@ -31,38 +31,38 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' 1. **Первоначальное развертывание без графтинга** - - **Начните с чистого листа**: Всегда разворчивайте первоначальный субграф без использования графтинга, чтобы убедиться в его стабильности и корректной работе. - - **Тщательно тестируйте**: проверьте производительность субграфа, чтобы свести к минимуму необходимость в будущих исправлениях. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. 
+ - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Реализация исправления с использованием графтинга** - **Определите проблему**: при возникновении критической ошибки определите номер блока последнего успешно проиндексированного события. - - **Создайте новый субграф**: разработайте новый субграф, включающий оперативное исправление. - - **Настройте графтинг**: используйте графтинг для копирования данных до определенного номера блока из неисправного субграфа. - - **Быстро разверните**: опубликуйте графтинговый (перенесенный) субграф, чтобы как можно скорее восстановить работу сервиса. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Действия после оперативного исправления** - - **Мониторинг производительности**: убедитесь, что графтинговый (перенесенный) субграф индексируется правильно и исправление решает проблему. - - **Публикация без графтинга**: как только субграф стабилизируется, разверните его новую версию без использования графтинга для долгосрочного обслуживания. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Примечание: Не рекомендуется использовать графтинг бесконечно, так как это может усложнить будущие обновления и обслуживание. - - **Обновите ссылки**: перенаправьте все сервисы или приложения на новый субграф без использования графтинга. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Важные замечания** - **Тщательный выбор блока**: тщательно выбирайте номер блока графтинга, чтобы избежать потери данных. 
- **Совет**: используйте номер блока последнего корректно обработанного события. - - **Используйте идентификатор развертывания**: убедитесь, что Вы ссылаетесь на идентификатор развертывания базового субграфа, а не на идентификатор субграфа. - - **Примечание**: идентификатор развертывания — это уникальный идентификатор для конкретного развертывания субграфа. - - **Объявление функции**: не забудьте указать использование графтинга в манифесте субграфа в разделе функций. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Пример: развертывание оперативного исправления с использованием графтинга -Предположим, у вас есть субграф, отслеживающий смарт-контракт, который перестал индексироваться из-за критической ошибки. Вот как Вы можете использовать графтинг для развертывания оперативного исправления. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Манифест неудачного субграфа (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' 2. 
**Манифест нового субграфа с графтингом (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -100,10 +100,10 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' source: address: '0xNewContractAddress' abi: Lock - startBlock: 6000001 # Блок после последнего индексированного блока + startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' features: - grafting graft: - base: QmBaseDeploymentID # ID развертывания неудачного субграфа - block: 6000000 # Последний успешно индексированный блок + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph + block: 6000000 # Last successfully indexed block ``` **Пояснение:** -- **Обновление источника данных**: новый субграф указывает на 0xNewContractAddress, который может быть исправленной версией смарт-контракта. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Начальный блок**: устанавливается на один блок после последнего успешно индексированного блока, чтобы избежать повторной обработки ошибки. - **Конфигурация графтинга**: - - **base**: идентификатор развертывания неудачного субграфа. + - **base**: Deployment ID of the failed Subgraph. - **block**: номер блока, с которого должен начаться графтинг. 3. **Шаги развертывания** @@ -135,10 +135,10 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' - **Отредактируйте манифест**: как показано выше, обновите файл `subgraph.yaml` с конфигурациями для графтинга. - **Разверните субграф**: - Аутентифицируйтесь с помощью Graph CLI. - - Разверните новый субграф используя `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. 
**После развертывания** - - **Проверьте индексирование**: убедитесь, что субграф корректно индексирует данные с точки графтинга. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Следите за данными**: убедитесь, что новые данные индексируются и что исправление работает эффективно. - **Запланируйте повторную публикацию**: запланируйте развертывание версии без графтинга для обеспечения долгосрочной стабильности. @@ -146,9 +146,9 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' Хотя графтинг является мощным инструментом для быстрого развертывания исправлений, существуют конкретные сценарии, когда его следует избегать для поддержания целостности данных и обеспечения оптимальной производительности. -- **Несовместимые изменения схемы**: если ваше исправление требует изменения типа существующих полей или удаления полей из схемы, графтинг не подходит. Графтинг предусматривает, что схема нового субграфа будет совместима со схемой базового субграфа. Несовместимые изменения могут привести к несоответствиям данных и ошибкам, так как существующие данные не будут соответствовать новой схеме. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. - **Значительные изменения логики мэппинга**: когда исправление включает существенные изменения в вашей логике мэппинга, такие как изменение обработки событий или изменение функций обработчиков, графтинг может работать некорректно. Новая логика может быть несовместима с данными, обработанными по старой логике, что приведет к некорректным данным или сбоям в индексировании. 
-- **Развертывания в сеть The Graph**: графтинг не рекомендуется для субграфов, предназначенных для децентрализованной сети The Graph (майннет). Это может усложнить индексирование и не поддерживаться всеми Индексаторами, что может привести к непредсказуемому поведению или увеличению затрат. Для развертываний в майннете безопаснее перезапустить индексирование субграфа с нуля, чтобы обеспечить полную совместимость и надежность. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Управление рисками @@ -157,31 +157,31 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' ## Заключение -Графтинг — это эффективная стратегия для развертывания оперативных исправлений в разработке субграфов, позволяющая Вам: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Быстро восстанавливаться** после критических ошибок без повторного индексирования. - **Сохранять исторические данные**, поддерживая непрерывности работы для приложений и пользователей. - **Обеспечить доступность сервиса**, минимизируя время простоя при критических исправлениях. -Однако важно использовать графтинг разумно и следовать лучшим практикам для снижения рисков. После стабилизации своего субграфа с помощью оперативных исправлений, спланируйте развертывание версии без графтинга для обеспечения долгосрочного обслуживания. +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
## Дополнительные ресурсы - **[Документация графтинга](/subgraphs/cookbook/grafting/)**: замените контракт и сохраните его историю с помощью графтинга - **[Понимание идентификаторов развертывания](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: ознакомьтесь с разницей между идентификатором развертывания и идентификатором субграфа. -Включив графтинг в процесс разработки субграфов, Вы сможете быстрее реагировать на проблемы, обеспечивая стабильность и надежность Ваших сервисов данных. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. 
[Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ru/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 194240e032c3..78e81a267bc5 100644 --- a/website/src/pages/ru/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Лучшие практики для субграфов №3 – Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## Краткое содержание @@ -24,7 +24,7 @@ type Transfer @entity(immutable: true) { Структуры неизменяемых объектов не будут изменяться в будущем. Идеальным кандидатом для превращения в неизменяемый объект может быть объект, который напрямую фиксирует данные событий в блокчейне, например, событие `Transfer`, записываемое как объект `Transfer`. -### Под капотом +### Как это устроено Изменяемые объекты имеют «диапазон блоков», указывающий их актуальность. Обновление таких объектов требует от graph node корректировки диапазона блоков для предыдущих версий, что увеличивает нагрузку на базу данных. Запросы также должны фильтровать данные, чтобы находить только актуальные объекты. Неизменяемые объекты работают быстрее, поскольку все они актуальны, и, так как они не изменяются, не требуется никаких проверок или обновлений при записи, а также фильтрации во время выполнения запросов. @@ -50,12 +50,12 @@ type Transfer @entity(immutable: true) { ### Причины, по которым не стоит использовать Bytes как идентификаторы 1. 
Если идентификаторы объектов должны быть читаемыми для человека, например, автоинкрементированные числовые идентификаторы или читаемые строки, то не следует использовать тип Bytes для идентификаторов. -2. Если данные субграфа интегрируются с другой моделью данных, которая не использует тип Bytes для идентификаторов, то не следует использовать Bytes для идентификаторов в субграфе. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Если улучшения производительности индексирования и запросов не являются приоритетом. ### Конкатенация (объединение) с использованием Bytes как идентификаторов -Это распространенная практика во многих субграфах — использовать конкатенацию строк для объединения двух свойств события в единый идентификатор, например, используя `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Однако поскольку это возвращает строку, такой подход значительно ухудшает производительность индексирования и запросов в субграфах. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Вместо этого следует использовать метод `concatI32()` для конкатенации свойств события. Эта стратегия приводит к созданию идентификатора типа `Bytes`, который гораздо более производителен. @@ -172,20 +172,20 @@ type Transfer @entity { ## Заключение -Использование как неизменяемых объектов, так и Bytes как идентификаторов значительно улучшает эффективность субграфов. В частности, тесты показали увеличение производительности запросов до 28% и ускорение индексирования до 48%. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. 
Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Читайте больше о применении неизменяемых объектов и Bytes как идентификаторов в этом блоге от Дэвида Луттеркорта, инженера-программиста в Edge & Node: [Два простых способа улучшить производительность субграфов](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/pruning.mdx b/website/src/pages/ru/subgraphs/best-practices/pruning.mdx index f99ae4861ec4..0903a26f9da7 100644 --- a/website/src/pages/ru/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Лучшая практика субграфа 1 — Улучшение скорости запросов с помощью сокращения (Pruning) субграфа -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## Краткое содержание -[Pruning](/developing/creating-a-subgraph/#prune) удаляет архивные элементы из базы данных субграфа до заданного блока, а удаление неиспользуемых элементов из базы данных субграфа улучшает производительность запросов, зачастую значительно. Использование `indexerHints` — это простой способ выполнить сокращение субграфа. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## Как сократить субграф с помощью `indexerHints` @@ -13,14 +13,14 @@ sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' `indexerHints` имеет три опции `prune`: -- `prune: auto`: Сохраняет минимально необходимую историю, установленную Индексатором, оптимизируя производительность запросов. Это рекомендуется как основная настройка и является настройкой по умолчанию для всех субграфов, созданных с помощью `graph-cli` версии >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. 
- `prune: `: Устанавливает пользовательский предел на количество исторических блоков, которые следует сохранить. - `prune: never`: без сокращения исторических данных; сохраняет всю историю и является значением по умолчанию, если раздел `indexerHints` отсутствует. `prune: never` следует выбрать, если требуются [Запросы на путешествия во времени](/subgraphs/querying/graphql-api/#time-travel-queries). -Мы можем добавить `indexerHints` в наши субграфы, обновив наш файл `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,18 +39,18 @@ dataSources: ## Заключение -Сокращение с использованием `indexerHints` — это наилучшая практика при разработке субграфов, обеспечивающая значительное улучшение производительности запросов. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ru/subgraphs/best-practices/timeseries.mdx index 5520d80a970a..3d5de9e6d731 100644 --- a/website/src/pages/ru/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Лучшие практики субграфов №5 — Упрощение и оптимизация с помощью временных рядов и агрегаций -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Тайм-серии и агрегации --- ## Краткое содержание -Использование новой функции временных рядов и агрегаций в субграфах может значительно улучшить как скорость индексирования, так и производительность запросов. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Обзор @@ -36,6 +36,10 @@ sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' ## Как внедрить временные ряды и агрегации +### Предварительные требования + +You need `spec version 1.1.0` for this feature. + ### Определение объектов временных рядов Объект временного ряда представляет собой необработанные данные, собранные с течением времени. Он определяется с помощью аннотации `@entity(timeseries: true)`. Ключевые требования: @@ -51,7 +55,7 @@ sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! 
- price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ type Data @entity(timeseries: true) { type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -В этом примере статистика агрегирует поле цены из данных за часовые и дневные интервалы, вычисляя сумму. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Запрос агрегированных данных @@ -172,24 +176,24 @@ type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { ### Заключение -Внедрение временных рядов и агрегаций в субграфы является лучшей практикой для проектов, работающих с данными, зависящими от времени. Этот подход: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Улучшает производительность: ускоряет индексирование и запросы, снижая нагрузку на обработку данных. - Упрощает разработку: устраняет необходимость в ручном написании логики агрегации в мэппингах. - Эффективно масштабируется: обрабатывает большие объемы данных, не ухудшая скорость и отзывчивость. -Применяя этот шаблон, разработчики могут создавать более эффективные и масштабируемые субграфы, обеспечивая более быстрый и надежный доступ к данным для конечных пользователей. Чтобы узнать больше о внедрении временных рядов и агрегаций, обратитесь к [Руководству по временным рядам и агрегациям](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) и рассмотрите возможность использования этой функции в своих субграфах. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/billing.mdx b/website/src/pages/ru/subgraphs/billing.mdx index 0a7daa3442d0..5f345d114a67 100644 --- a/website/src/pages/ru/subgraphs/billing.mdx +++ b/website/src/pages/ru/subgraphs/billing.mdx @@ -2,20 +2,22 @@ title: Выставление счетов --- -## Querying Plans +## Запрос планов -Существует два плана для выполнения запросов к субграфам в The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. -- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. +- **Бесплатный план**: Бесплатный план включает 100,000 бесплатных запросов в месяц с полным доступом к тестовой среде Subgraph Studio. Этот план предназначен для любителей, участников хакатонов и разработчиков небольших проектов, которые хотят попробовать The Graph перед масштабированием своего децентрализованного приложения. -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **План роста**: План роста включает все возможности бесплатного плана, но все запросы, превышающие 100,000 в месяц, требуют оплаты в GRT или кредитной картой. Этот план достаточно гибок, чтобы поддерживать команды, которые уже запустили децентрализованные приложения для различных сценариев использования. + +Learn more about pricing [here](https://thegraph.com/studio-pricing/). ## Оплата запросов с помощью кредитной карты - Чтобы настроить оплату с помощью кредитных/дебетовых карт, пользователи должны зайти в Subgraph Studio (https://thegraph.com/studio/) - 1. 
Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). + 1. Перейдите на [страницу оплаты Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Нажмите на кнопку «Connect Wallet» в правом верхнем углу страницы. Вы будете перенаправлены на страницу выбора кошелька. Выберите свой кошелек и нажмите «Connect». 3. Выберите «Обновление плана», если Вы переходите с бесплатного плана, или «Управление планом», если Вы уже ранее добавили GRT на свой баланс для оплаты. Далее Вы можете оценить количество запросов, чтобы получить примерную стоимость, но это не обязательный шаг. 4. Чтобы выбрать оплату кредитной картой, выберите «Credit card» как способ оплаты и заполните информацию о своей карте. Те, кто ранее использовал Stripe, могут воспользоваться функцией Link для автоматического заполнения данных. @@ -45,17 +47,17 @@ title: Выставление счетов - В качестве альтернативы, Вы можете приобрести GRT напрямую на Arbitrum через децентрализованную биржу. -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt). +> Этот раздел написан с учетом того, что у Вас уже есть GRT в кошельке и Вы находитесь в сети Arbitrum. Если у Вас нет GRT, Вы можете узнать, как его получить, [здесь](#getting-grt). После переноса GRT Вы можете добавить его на баланс для оплаты. ### Добавление токенов GRT с помощью кошелька -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Перейдите на [страницу оплаты Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Нажмите на кнопку «Connect Wallet» в правом верхнем углу страницы. Вы будете перенаправлены на страницу выбора кошелька. Выберите свой кошелек и нажмите «Connect». 3. Нажмите кнопку «Управление» в правом верхнем углу. Новые пользователи увидят опцию «Обновить до плана Роста», а те, кто пользовался ранее — «Пополнение с кошелька». 4. 
Используйте ползунок, чтобы оценить количество запросов, которое Вы планируете выполнять ежемесячно. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Рекомендации по количеству запросов, которые Вы можете использовать, можно найти на нашей странице **Часто задаваемые вопросы**. 5. Выберите «Криптовалюта». В настоящее время GRT — единственная криптовалюта, принимаемая в The Graph Network. 6. Выберите количество месяцев, за которые Вы хотели бы внести предоплату. - Предоплата не обязывает Вас к дальнейшему использованию. С Вас будет взиматься плата только за то, что Вы используете, и Вы сможете вывести свой баланс в любое время. @@ -68,7 +70,7 @@ title: Выставление счетов ### Вывод токенов GRT с помощью кошелька -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Перейдите на [страницу оплаты Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Нажмите на кнопку «Подключить кошелек» в правом верхнем углу страницы. Выберите свой кошелек и нажмите «Подключить». 3. Нажмите кнопку «Управление» в правом верхнем углу страницы. Выберите «Вывести GRT». Появится боковая панель. 4. Введите сумму GRT, которую хотите вывести. @@ -77,11 +79,11 @@ title: Выставление счетов ### Добавление токенов GRT с помощью кошелька с мультиподписью -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. +1. Перейдите на [страницу оплаты Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). +2. Нажмите на кнопку «Подключить кошелек» в правом верхнем углу страницы. Выберите свой кошелек и нажмите «Подключить». 
Если Вы используете [Gnosis-Safe](https://gnosis-safe.io/), Вы сможете подключить как стандартный кошелёк, так и кошелёк с мультиподписью. Затем подпишите соответствующее сообщение. За это Вам не придётся платить комиссию. 3. Нажмите кнопку «Управление» в правом верхнем углу. Новые пользователи увидят опцию «Обновить до плана Роста», а те, кто пользовался ранее — «Пополнение с кошелька». 4. Используйте ползунок, чтобы оценить количество запросов, которое Вы планируете выполнять ежемесячно. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Рекомендации по количеству запросов, которые Вы можете использовать, можно найти на нашей странице **Часто задаваемые вопросы**. 5. Выберите «Криптовалюта». В настоящее время GRT — единственная криптовалюта, принимаемая в The Graph Network. 6. Выберите количество месяцев, за которые Вы хотели бы внести предоплату. - Предоплата не обязывает Вас к дальнейшему использованию. С Вас будет взиматься плата только за то, что Вы используете, и Вы сможете вывести свой баланс в любое время. @@ -99,7 +101,7 @@ title: Выставление счетов Далее будет представлено пошаговое руководство по приобретению токена GRT на Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Перейдите на [Coinbase](https://www.coinbase.com/) и создайте учетную запись. 2. После того как Вы создали учетную запись, Вам нужно будет подтвердить свою личность с помощью процесса, известного как KYC (или Know Your Customer). Это стандартная процедура для всех централизованных или кастодиальных криптобирж. 3. После того как Вы подтвердили свою личность, Вы можете приобрести токены ETH нажав на кнопку "Купить/Продать" в правом верхнем углу страницы. 4. Выберите валюту, которую хотите купить. Выберите GRT. @@ -107,19 +109,19 @@ title: Выставление счетов 6. Выберите количество токенов GRT, которое хотите приобрести. 7. Проверьте все данные о приобретении. 
Проверьте все данные о приобретении и нажмите «Купить GRT». 8. Подтвердите покупку. Подтвердите покупку - Вы успешно приобрели токены GRT. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Вы можете перевести GRT со своего аккаунта на кошелек, например, [MetaMask](https://metamask.io/). - Чтобы перевести токены GRT на свой кошелек, нажмите кнопку «Учетные записи» в правом верхнем углу страницы. - Нажмите на кнопку «Отправить» рядом с учетной записью GRT. - Введите сумму GRT, которую хотите отправить, и адрес кошелька, на который хотите её отправить. - Нажмите «Продолжить» и подтвердите транзакцию. -Обратите внимание, что при больших суммах покупки Coinbase может потребовать от Вас подождать 7-10 дней, прежде чем переведет полную сумму на кошелек. -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Вы можете узнать больше о том, как получить GRT на Coinbase [здесь](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance Далее будет представлено пошаговое руководство по приобретению токена GRT на Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Перейдите на [Binance](https://www.binance.com/en) и создайте аккаунт. 2. После того как Вы создали учетную запись, Вам нужно будет подтвердить свою личность с помощью процесса, известного как KYC (или Know Your Customer). Это стандартная процедура для всех централизованных или кастодиальных криптобирж. 3. После того как Вы подтвердили свою личность, Вы можете приобрести токены GRT. Вы можете сделать это, нажав на кнопку «Купить сейчас» на баннере главной страницы. 4. Вы попадете на страницу, где сможете выбрать валюту, которую хотите приобрести. Выберите GRT. 
@@ -127,27 +129,27 @@ You can learn more about getting GRT on Coinbase [here](https://help.coinbase.co 6. Выберите количество токенов GRT, которое хотите приобрести. 7. Проверьте все данные о приобретении и нажмите «Купить GRT». 8. Подтвердите покупку, и Вы сможете увидеть GRT в своем кошельке Binance Spot. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. +9. Вы можете вывести GRT со своего аккаунта на кошелек, например, [MetaMask](https://metamask.io/). + - [Чтобы вывести](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) GRT на свой кошелек, добавьте адрес своего кошелька в список адресов для вывода. - Нажмите на кнопку «кошелек», нажмите «вывести» и выберите GRT. - Введите сумму GRT, которую хотите отправить, и адрес кошелька из белого списка, на который Вы хотите её отправить. - Нажмите «Продолжить» и подтвердите транзакцию. -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Вы можете узнать больше о том, как получить GRT на Binance [здесь](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ### Uniswap Так Вы можете приобрести GRT на Uniswap. -1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. +1. Перейдите на [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) и подключите свой кошелек. 2. Выберите токен, который хотите обменять. Выберите ETH. 3. Выберите токен, на который хотите произвести обмен. Выберите GRT. - - Make sure you're swapping for the correct token. 
The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) + - Убедитесь, что Вы обмениваете на правильный токен. Адрес смарт-контракта GRT в сети Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) 4. Введите количество ETH, которое хотите обменять. 5. Нажмите «Обменять». 6. Подтвердите транзакцию в своем кошельке и дождитесь ее обработки. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Вы можете узнать больше о том, как получить GRT на Uniswap [здесь](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). ## Получение Ether @@ -157,7 +159,7 @@ You can learn more about getting GRT on Uniswap [here](https://support.uniswap.o Далее будет представлено пошаговое руководство по приобретению токена ETH на Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Перейдите на [Coinbase](https://www.coinbase.com/) и создайте учетную запись. 2. После того как Вы создали учетную запись, Вам нужно будет подтвердить свою личность с помощью процесса, известного как KYC (или Know Your Customer). Это стандартная процедура для всех централизованных или кастодиальных криптобирж. 3. После того как Вы подтвердили свою личность, Вы можете приобрести токены ETH нажав на кнопку "Купить/Продать" в правом верхнем углу страницы. 4. Выберите валюту, которую хотите купить. Выберите ETH. @@ -165,35 +167,35 @@ You can learn more about getting GRT on Uniswap [here](https://support.uniswap.o 6. Введите количество ETH, которое хотите приобрести. 7. Проверьте все данные о приобретении и нажмите «Купить ETH». 8. Подтвердите покупку. Вы успешно приобрели токены ETH. -9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/).
+9. Вы можете перевести ETH со своего аккаунта Coinbase на кошелек, например, [MetaMask](https://metamask.io/). - Чтобы перевести ETH на свой кошелек, нажмите кнопку «Учетные записи» в правом верхнем углу страницы. - Нажмите на кнопку «Отправить» рядом с учетной записью ETH. - Введите сумму ETH которую хотите отправить, и адрес кошелька, на который хотите её отправить. - Убедитесь, что делаете перевод на адрес своего Ethereum-кошелька в сети Arbitrum One. - Нажмите «Продолжить» и подтвердите транзакцию. -You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Вы можете узнать больше о том, как получить ETH на Coinbase [здесь](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance -This will be a step by step guide for purchasing ETH on Binance. +Далее будет представлено пошаговое руководство по приобретению токена ETH на Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Перейдите на [Binance](https://www.binance.com/en) и создайте аккаунт. 2. После того как Вы создали учетную запись, Вам нужно будет подтвердить свою личность с помощью процесса, известного как KYC (или Know Your Customer). Это стандартная процедура для всех централизованных или кастодиальных криптобирж. -3. Once you have verified your identity, purchase ETH by clicking on the "Buy Now" button on the homepage banner. +3. После того как Вы подтвердили свою личность, Вы можете приобрести токены ETH. Вы можете сделать это, нажав на кнопку «Buy/Sell» в правом верхнем углу страницы. 4. Выберите валюту, которую хотите купить. Выберите ETH. 5. Выберите предпочитаемый способ оплаты. 6. Введите количество ETH, которое хотите приобрести. 7. Проверьте все данные о приобретении и нажмите «Купить ETH». -8. 
Confirm your purchase and you will see your ETH in your Binance Spot Wallet. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). +8. Подтвердите покупку, и ваш ETH появится в вашем спотовом кошельке Binance. +9. Вы можете вывести ETH со своего аккаунта на кошелек, например, [MetaMask](https://metamask.io/). - Чтобы вывести ETH на свой кошелек, добавьте адрес кошелька в белый список вывода. - Click on the "wallet" button, click withdraw, and select ETH. - Enter the amount of ETH you want to send and the whitelisted wallet address you want to send it to. - Убедитесь, что делаете перевод на адрес своего Ethereum-кошелька в сети Arbitrum One. - Нажмите «Продолжить» и подтвердите транзакцию. -You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Вы можете узнать больше о том, как получить ETH на Binance [здесь](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ## Часто задаваемые вопросы по выставлению счетов @@ -203,11 +205,11 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e Мы рекомендуем переоценить количество запросов, чтобы Вам не приходилось часто пополнять баланс. Хорошей оценкой для небольших и средних приложений будет начать с 1–2 млн запросов в месяц и внимательно следить за использованием в первые недели. Для более крупных приложений хорошей оценкой будет использовать количество ежедневных посещений Вашего сайта, умноженное на количество запросов, которые делает Ваша самая активная страница при открытии. -Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage. 
+Конечно, как новые, так и существующие пользователи могут обратиться к команде бизнес-развития Edge & Node для консультации и получения информации о планируемом использовании. ### Могу ли я вывести GRT со своего платежного баланса? -Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). +Да, Вы всегда можете вывести GRT, которые еще не были использованы для запросов, со своего платежного баланса. Контракт для выставления счетов предназначен только для переноса GRT с основной сети Ethereum в сеть Arbitrum. Если Вы хотите перевести свои GRT с Arbitrum обратно на основную сеть Ethereum, Вам нужно будет использовать [Мост Arbitrum](https://bridge.arbitrum.io/?l2ChainId=42161). ### Что произойдет, когда мой платежный баланс закончится? Получу ли я предупреждение? 
diff --git a/website/src/pages/ru/subgraphs/developing/_meta-titles.json b/website/src/pages/ru/subgraphs/developing/_meta-titles.json index 01a91b09ed77..7c82e83ac8dd 100644 --- a/website/src/pages/ru/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/ru/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { - "creating": "Creating", - "deploying": "Deploying", - "publishing": "Publishing", - "managing": "Managing" + "creating": "Создание", + "deploying": "Развертывание", + "publishing": "Публикация", + "managing": "Управление" } diff --git a/website/src/pages/ru/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ru/subgraphs/developing/creating/advanced.mdx index a264671c393e..662c71ed059f 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Обзор -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Тайм-серии и агрегации @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Пример схемы @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Нефатальные ошибки -Ошибки индексирования в уже синхронизированных субграфах по умолчанию приведут к сбою субграфа и прекращению синхронизации. В качестве альтернативы субграфы можно настроить на продолжение синхронизации при наличии ошибок, игнорируя изменения, внесенные обработчиком, который спровоцировал ошибку. Это дает авторам субграфов время на исправление своих субграфов, в то время как запросы к последнему блоку продолжают обрабатываться, хотя результаты могут быть противоречивыми из-за бага, вызвавшего ошибку. Обратите внимание на то, что некоторые ошибки всё равно всегда будут фатальны. Чтобы быть нефатальной, ошибка должна быть детерминированной. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Для включения нефатальных ошибок необходимо установить в манифесте субграфа следующий флаг функции: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## Источники файловых данных IPFS/Arweave -Источники файловых данных — это новая функциональность субграфа для надежного и расширенного доступа к данным вне чейна во время индексации. Источники данных файлов поддерживают получение файлов из IPFS и Arweave. 
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > Это также закладывает основу для детерминированного индексирования данных вне сети, а также потенциального введения произвольных данных из HTTP-источников. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//Этот пример кода предназначен для сборщика субграфа Crypto. Приведенный выше хеш ipfs представляет собой каталог с метаданными токена для всех NFT криптоковена. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //Это создает путь к метаданным для одного сборщика NFT Crypto. Он объединяет каталог с "/" + filename + ".json" + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" token.ipfsURI = tokenIpfsHash @@ -317,23 +317,23 @@ export function handleTransfer(event: TransferEvent): void { This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. 
-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Поздравляем, Вы используете файловые источники данных! -#### Развертывание субграфов +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Ограничения -Обработчики и объекты файловых источников данных изолированы от других объектов субграфа, что гарантирует их детерминированность при выполнении и исключает загрязнение источников данных на чейн-основе. В частности: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Объекты, созданные с помощью файловых источников данных, неизменяемы и не могут быть обновлены - Обработчики файловых источников данных не могут получить доступ к объектам из других файловых источников данных - Объекты, связанные с источниками данных файлов, не могут быть доступны обработчикам на чейн-основе -> Хотя это ограничение не должно вызывать проблем в большинстве случаев, для некоторых оно может вызвать сложности. Если у Вас возникли проблемы с моделированием Ваших файловых данных в субграфе, свяжитесь с нами через Discord! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Кроме того, невозможно создать источники данных из файлового источника данных, будь то источник данных onchain или другой файловый источник данных. Это ограничение может быть снято в будущем. 
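The CID lookup between the parent `Token` entity and the `TokenMetadata` entity described above can be sketched in the schema; the exact field list is illustrative, but the `@entity(immutable: true)` annotation reflects the constraint that entities created by file data sources cannot be updated:

```graphql
type Token @entity {
  id: ID!
  tokenURI: String!
  # Stores the IPFS CID; resolved to a TokenMetadata entity once the
  # file data source handler has processed the file.
  ipfsURI: TokenMetadata
}

type TokenMetadata @entity(immutable: true) {
  id: ID! # the IPFS CID
  name: String!
  image: String!
  description: String!
}
```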
@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Фильтры по темам, также известные как фильтры по индексированным аргументам, — это мощная функция в субграфах, которая позволяет пользователям точно фильтровать события блокчейна на основе значений их индексированных аргументов. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- Эти фильтры помогают изолировать конкретные интересующие события из огромного потока событий в блокчейне, позволяя субграфам работать более эффективно, сосредотачиваясь только на релевантных данных. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- Это полезно для создания персональных субграфов, отслеживающих конкретные адреса и их взаимодействие с различными смарт-контрактами в блокчейне. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### Как работают фильтры тем -Когда смарт-контракт генерирует событие, любые аргументы, помеченные как индексированные, могут использоваться в манифесте субграфа в качестве фильтров. Это позволяет субграфу выборочно прослушивать события, соответствующие этим индексированным аргументам. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. 
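To make the topic numbering above concrete, a hypothetical event with three indexed parameters maps to filterable topics as follows (`topic0` is always the keccak-256 hash of the event signature and is what routes the event to a handler):

```solidity
// Hypothetical event, for illustration only.
event Approval(
    address indexed owner,   // filterable as topic1
    address indexed spender, // filterable as topic2
    uint256 indexed tokenId  // filterable as topic3
);
```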
@@ -401,7 +401,7 @@ contract Token { #### Конфигурация в субграфах -Фильтры тем определяются непосредственно в конфигурации обработчика событий в манифесте субграфа. Вот как они настроены: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ eventHandlers: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Пример 2. Отслеживание транзакций в любом направлении между двумя и более адресами @@ -452,17 +452,17 @@ eventHandlers: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- Субграф будет индексировать транзакции, происходящие в любом направлении между несколькими адресами, что позволит осуществлять комплексный мониторинг взаимодействий с участием всех адресов. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Декларированный eth_call > Примечание: Это экспериментальная функция, которая пока недоступна в стабильной версии Graph Node. Вы можете использовать её только в Subgraph Studio или на своей локальной ноде. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. 
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. Эта функция выполняет следующие действия: -- Значительно повышает производительность получения данных из блокчейна Ethereum за счет сокращения общего времени выполнения нескольких вызовов и оптимизации общей эффективности субграфа. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Обеспечивает ускоренное получение данных, что приводит к более быстрому реагированию на запросы и улучшению пользовательского опыта. - Сокращает время ожидания для приложений, которым необходимо агрегировать данные из нескольких вызовов Ethereum, что делает процесс получения данных более эффективным. @@ -474,7 +474,7 @@ Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` #### Scenario without Declarative `eth_calls` -Представьте, что у вас есть субграф, которому необходимо выполнить три вызова в Ethereum, чтобы получить данные о транзакциях пользователя, балансе и владении токенами. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Традиционно эти вызовы могут выполняться последовательно: @@ -498,15 +498,15 @@ Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` #### Как это работает -1. Декларативное определение: В манифесте субграфа Вы декларируете вызовы Ethereum таким образом, чтобы указать, что они могут выполняться параллельно. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Механизм параллельного выполнения: Механизм выполнения The Graph Node распознает эти объявления и выполняет вызовы одновременно. -3. 
Агрегация результатов: После завершения всех вызовов результаты агрегируются и используются субграфом для дальнейшей обработки. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Пример конфигурации в манифесте субграфа Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ calls: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. -`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params`: ```yaml calls: @@ -535,20 +535,20 @@ > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block.
This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Поскольку графтинг копирует, а не индексирует базовые данные, гораздо быстрее перенести субграф в нужный блок, чем индексировать с нуля, хотя для очень больших субграфов копирование исходных данных может занять несколько часов. 
Пока графтовый субграф инициализируется, узел The Graph будет регистрировать информацию о типах объектов, которые уже были скопированы. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. Перенесённый субграф может использовать схему GraphQL, которая не идентична схеме базового субграфа, а просто совместима с ней. Это должна быть автономно действующая схема субграфа, но она может отличаться от схемы базового субграфа следующим образом: @@ -560,4 +560,4 @@ When a subgraph whose manifest contains a `graft` block is deployed, Graph Node - Она добавляет или удаляет интерфейсы - Она изменяется в зависимости от того, под какой тип объектов реализован интерфейс -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/ru/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ru/subgraphs/developing/creating/assemblyscript-mappings.mdx index e4c398204f2e..6a74db44bfb3 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too ## Генерация кода -Для упрощения и обеспечения безопасности типов при работе со смарт-контрактами, событиями и объектами Graph CLI может генерировать типы AssemblyScript на основе схемы GraphQL субграфа и ABI контрактов, включенных в источники данных. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. Это делается с помощью @@ -80,7 +80,7 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with: ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with: ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`.
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..d36adad723ef 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/README.md @@ -6,7 +6,7 @@ TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to [The Graph](https://github.com/graphprotocol/graph-node). -## Usage +## Применение For a detailed guide on how to create a subgraph, please see the [Graph CLI docs](https://github.com/graphprotocol/graph-cli). 
diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/_meta-titles.json index e850186d44c0..29a6950b50c7 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/_meta-titles.json +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -1,5 +1,5 @@ { - "README": "Introduction", + "README": "Введение", "api": "Референс API", "common-issues": "Common Issues" } diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/api.mdx index 88bfcafe7af0..4d3c6ff563f5 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Узнайте, какие встроенные API можно использовать при написании мэппингов субграфов. По умолчанию доступны два типа API: +Learn what built-in APIs can be used when writing Subgraph mappings. 
There are two kinds of APIs available out of the box: - [Библиотека The Graph TypeScript](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Код, сгенерированный из файлов субграфов с помощью `graph codegen` +- Code generated from Subgraph files by `graph codegen` Вы также можете добавлять другие библиотеки в качестве зависимостей, если они совместимы с [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ title: AssemblyScript API ### Версии -`apiVersion` в манифесте субграфа указывает версию мэппинга API, которая запускается посредством Graph Node для данного субграфа. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Версия | Примечания к релизу | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' API `store` позволяет загружать, сохранять и удалять объекты из хранилища the Graph Node и в него. -Объекты, записанные в хранилище карты, сопоставляются один к одному с типами `@entity`, определенными в схеме субграфов GraphQL. Чтобы сделать работу с этими объектами удобной, команда `graph codegen`, предоставляемая [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) генерирует классы объектов, которые являются подклассами встроенного типа `Entity`, с геттерами и сеттерами свойств для полей в схеме, а также методами загрузки и сохранения этих объектов. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. 
#### Создание объектов @@ -282,8 +282,8 @@ if (transfer == null) { The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- В случае, если транзакция не существует, субграф должен будет обратиться к базе данных просто для того, чтобы узнать, что объект не существует. Если автор субграфа уже знает, что объект должен быть создан в том же блоке, использование `loadInBlock` позволяет избежать этого обращения к базе данных. -- Для некоторых субграфов эти пропущенные поиски могут существенно увеличить время индексации. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // или некоторым образом создается идентификатор @@ -380,11 +380,11 @@ Ethereum API предоставляет доступ к смарт-контра #### Поддержка типов Ethereum -Как и в случае с объектами, `graph codegen` генерирует классы для всех смарт-контрактов и событий, используемых в субграфе. Для этого ABI контракта должны быть частью источника данных в манифесте субграфа. Как правило, файлы ABI хранятся в папке `abis/`. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. 
-С помощью сгенерированных классов преобразования между типами Ethereum и [встроенными типами] (#built-in-types) происходят за кулисами, так что авторам субграфов не нужно беспокоиться о них. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -Следующий пример иллюстрирует это. С учётом схемы субграфа, такой как +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Доступ к состоянию смарт-контракта -Код, сгенерированный с помощью `graph codegen`, также включает классы для смарт-контрактов, используемых в субграфе. Они могут быть использованы для доступа к общедоступным переменным состояния и вызова функций контракта в текущем блоке. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. Распространенным шаблоном является доступ к контракту, из которого исходит событие. Это достигается с помощью следующего кода: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { Пока `ERC20Contract` в Ethereum имеет общедоступную функцию только для чтения, называемую `symbol`, ее можно вызвать с помощью `.symbol()`. Для общедоступных переменных состояния автоматически создается метод с таким же именем. -Любой другой контракт, который является частью субграфа, может быть импортирован из сгенерированного кода и привязан к действительному адресу. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. 
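A hedged sketch of that binding pattern, where `ERC20Contract` stands in for whatever generated contract class your Subgraph has, and the zero address is a placeholder for a real deployment address:

```typescript
import { Address } from '@graphprotocol/graph-ts'
// `ERC20Contract` and its import path are assumed examples; use the class
// that `graph codegen` generated for your own contract.
import { ERC20Contract } from '../generated/ERC20Contract/ERC20Contract'

export function readSymbol(): string {
  // Bind the generated class to an explicit address instead of event.address.
  let token = ERC20Contract.bind(Address.fromString('0x0000000000000000000000000000000000000000'))
  return token.symbol()
}
```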
#### Обработка возвращенных вызовов @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // возвращает ложно import { log } from '@graphprotocol/graph-ts' ``` -API `log` позволяет субграфам записывать информацию в стандартный вывод Graph Node, а также в Graph Explorer. Сообщения могут быть зарегистрированы с использованием различных уровней ведения лога. Для составления сообщений лога из аргумента предусмотрен синтаксис строки базового формата. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from the arguments. API `log` включает в себя следующие функции: @@ -590,7 +590,7 @@ API `log` включает в себя следующие функции: - `log.info (fmt: string, args: Array): void` - регистрирует информационное сообщение. - `log.warning(fmt: string, args: Array): void` - регистрирует предупреждение. - `log.error(fmt: string, args: Array): void` - регистрирует сообщение об ошибке. -`log.critical(fmt: string, args: Array): void` – регистрирует критическое сообщение и завершает работу субграфа. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. API `log` принимает строку формата и массив строковых значений. Затем он заменяет заполнители строковыми значениями из массива. Первый `{}` заполнитель заменяется первым значением в массиве, второй `{}` заполнитель заменяется вторым значением и так далее. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) Единственным поддерживаемым в настоящее время флагом является `json`, который должен быть передан в `ipfs.map`. С флагом `json` файл IPFS должен состоять из серии значений JSON, по одному значению в строке. Вызов `ipfs.map` прочитает каждую строку в файле, десериализует ее в `JSONValue` и совершит обратный вызов для каждой из них.
Затем обратный вызов может использовать операции с объектами для хранения данных из `JSONValue`. Изменения объекта сохраняются только после успешного завершения обработчика, вызвавшего `ipfs.map`; в то же время они хранятся в памяти, и поэтому размер файла, который может обработать `ipfs.map`, ограничен. -При успешном завершении `ipfs.map` возвращает `void`. Если какое-либо совершение обратного вызова приводит к ошибке, обработчик, вызвавший `ipfs.map`, прерывается, а субграф помечается как давший сбой. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ if (value.kind == JSONValueKind.BOOL) { ### DataSourceContext в манифесте -Раздел `context` в `dataSources` позволяет Вам определять пары ключ-значение, которые доступны в Ваших мэппингах субграфа. Доступные типы: `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List` и `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Ниже приведен пример YAML, иллюстрирующий использование различных типов в разделе `context`: @@ -887,4 +887,4 @@ dataSources: - `List`: Определяет список элементов. Для каждого элемента необходимо указать его тип и данные. - `BigInt`: Определяет большое целочисленное значение. Необходимо заключить в кавычки из-за большого размера. -Затем этот контекст становится доступным в Ваших мэппинговых файлах субграфов, что позволяет сделать субграфы более динамичными и настраиваемыми. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
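A sketch of reading those context values from a mapping, assuming the manifest declared a `String` entry under the key `factory` (the key name is an example, not taken from this document):

```typescript
import { dataSource, log } from '@graphprotocol/graph-ts'

export function handleAnyEvent(): void {
  // Retrieve the context defined for this data source in the manifest.
  let context = dataSource.context()
  let factory = context.getString('factory') // assumed key name
  log.info('configured factory: {}', [factory])
}
```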
diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/common-issues.mdx index 74f717af91a4..0903710db4bf 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Распространенные проблемы с AssemblyScript --- -Существуют определенные проблемы c [AssemblyScript] (https://github.com/AssemblyScript/assemblyscript), с которыми часто приходится сталкиваться при разработке субграфа. Они различаются по сложности отладки, однако знание о них может помочь. Ниже приведен неполный перечень этих проблем: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Область видимости не наследуется [функциями замыкания](https://www.assemblyscript.org/status.html#on-closures), т.е. переменные, объявленные вне функций замыкания, не могут быть использованы. Пояснения см. в [Рекомендациях для разработчиков #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
diff --git a/website/src/pages/ru/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ru/subgraphs/developing/creating/install-the-cli.mdx index b48104c2ff0d..0208397aeb4d 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Установка Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Обзор -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Начало работы @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Создайте субграф ### Из существующего контракта -Следующая команда создает субграф, индексирующий все события существующего контракта: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - Если какой-либо из необязательных аргументов отсутствует, Вам будет предложено воспользоваться интерактивной формой. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### Из примера подграфа -Следующая команда инициализирует новый проект на примере субграфа: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is Файл(ы) ABI должен(ы) соответствовать Вашему контракту (контрактам). Существует несколько способов получения файлов ABI: - Если Вы создаете свой собственный проект, у Вас, скорее всего, будет доступ к наиболее актуальным ABIS. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## Релизы SpecVersion - -| Версия | Примечания к релизу | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Добавлена поддержка обработчиков событий, имеющих доступ к чекам транзакций. | -| 0.0.4 | Добавлена ​​поддержка управления функциями субграфа. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI; otherwise, running your Subgraph will fail. diff --git a/website/src/pages/ru/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ru/subgraphs/developing/creating/ql-schema.mdx index fb468f6110f5..5b4f6c643b60 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/ql-schema.mdx @@ -1,28 +1,28 @@ --- -title: The Graph QL Schema +title: Схема GraphQL --- ## Обзор -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +Схема для вашего субграфа находится в файле `schema.graphql`. Схемы GraphQL определяются с использованием языка определения интерфейса GraphQL. -> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. +> Примечание: Если вы никогда не писали схему GraphQL, рекомендуется ознакомиться с этим введением в систему типов GraphQL. Справочную документацию по схемам GraphQL можно найти в разделе [GraphQL API](/subgraphs/querying/graphql-api/). ### Определение Объектов Прежде чем определять объекты, важно сделать шаг назад и задуматься над тем, как структурированы и связаны Ваши данные. -- All queries will be made against the data model defined in the subgraph schema.
As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- Все запросы будут выполняться против модели данных, определенной в схеме субграфа. Поэтому проектирование схемы субграфа должно основываться на запросах, которые ваше приложение будет выполнять. - Может быть полезно представить объекты как «объекты, содержащие данные», а не как события или функции. -- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. -- Each type that should be an entity is required to be annotated with an `@entity` directive. +- Вы определяете типы объектов в файле `schema.graphql`, и Graph Node будет генерировать поля верхнего уровня для запроса отдельных экземпляров и коллекций этих типов объектов. +- Каждый тип, который должен быть объектом, должен быть аннотирован директивой `@entity`. - По умолчанию объекты изменяемы, то есть мэппинги могут загружать существующие объекты, изменять и сохранять их новую версию. - - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. + - Изменяемость имеет свою цену, поэтому для типов объектов, которые никогда не будут изменяться, например, содержащих данные, извлеченные дословно из чейна, рекомендуется пометить их как неизменяемые с помощью `@entity(immutable: true)`. - Если изменения происходят в том же блоке, в котором был создан объект, то мэппинги могут вносить изменения в неизменяемые объекты. Неизменяемые объекты гораздо быстрее записываются и запрашиваются, поэтому их следует использовать, когда это возможно. #### Удачный пример -The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. 
+Следующий объект `Gravatar` структурирован вокруг объекта Gravatar и является хорошим примером того, как можно определить объект. ```graphql type Gravatar @entity(immutable: true) { @@ -36,7 +36,7 @@ type Gravatar @entity(immutable: true) { #### Неудачный пример -The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. +Следующий пример объектов `GravatarAccepted` и `GravatarDeclined` основан на событиях. Не рекомендуется сопоставлять события или вызовы функций 1:1 к объектам. ```graphql type GravatarAccepted @entity { @@ -56,15 +56,15 @@ type GravatarDeclined @entity { #### Дополнительные и обязательные поля -Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error: +Поля объектов могут быть определены как обязательные или необязательные. Обязательные поля указываются с помощью `!` в схеме. Если поле является скалярным, вы получите ошибку при попытке сохранить объект. Если поле ссылается на другой объект, то вы получите следующую ошибку: ``` Null value resolved for non-null field 'name' ``` -Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` id's will be faster to write and query as those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`. +Каждый объект должен иметь поле `id`, которое должно быть типа `Bytes!` или `String!`. 
Обычно рекомендуется использовать `Bytes!`, если только `id` не содержит текст, читаемый человеком, поскольку объекты с `id` типа `Bytes!` будут быстрее записываться и запрашиваться, чем те, у которых `id` типа `String!`. Поле `id` служит основным ключом и должно быть уникальным среди всех объектов одного типа. По историческим причинам также принимается тип `ID!`, который является синонимом `String!`. -For some entity types the `id` for `Bytes!` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id) ` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`. +Для некоторых типов объектов `id` для `Bytes!` формируется из `id` двух других объектов. Это возможно с использованием функции `concat`, например, `let id = left.id.concat(right.id)`, чтобы сформировать `id` из `id` объектов `left` и `right`. Аналогично, чтобы сформировать `id` из `id` существующего объекта и счетчика `count`, можно использовать `let id = left.id.concatI32(count)`. Конкатенация гарантирует, что `id` будет уникальным, если длина `left.id` одинаковая для всех таких объектов, например, если `left.id` — это `Address`. ### Встроенные скалярные типы @@ -75,13 +75,13 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two | Тип | Описание | | --- | --- | | `Bytes` | Массив байтов, представленный в виде шестнадцатеричной строки. Обычно используется для хэшей и адресов Ethereum. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. 
| -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| `String` | Скаляр для значений типа `string`. Нулевые символы не поддерживаются и автоматически удаляются. | +| `Boolean` | Скаляр для значений `boolean`. | +| `Int` | Спецификация GraphQL определяет тип `Int` как знаковое 32-битное целое число. | +| `Int8` | 8-байтовое целое число со знаком, также известное как 64-битное целое число со знаком, может хранить значения в диапазоне от -9,223,372,036,854,775,808 до 9,223,372,036,854,775,807. Рекомендуется использовать его для представления типа `i64` из ethereum. | +| `BigInt` | Большие целые числа. Используются для типов `uint32`, `int64`, `uint64`, ..., `uint256` из Ethereum. Примечание: все типы, меньше чем `uint32`, такие как `int32`, `uint24` или `int8`, представлены как `i32`. | +| `BigDecimal` | `BigDecimal` Высокоточные десятичные числа, представленные как мантисса и экспонента. Диапазон экспоненты от −6143 до +6144. Округляется до 34 значащих цифр. | +| `Timestamp` | Это значение типа `i64` в микросекундах. Обычно используется для полей `timestamp` в временных рядах и агрегациях. | ### Перечисления @@ -95,9 +95,9 @@ enum TokenStatus { } ``` -Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. 
For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field: +Как только перечисление определено в схеме, вы можете использовать строковое представление значения перечисления для установки поля перечисления в объекте. Например, вы можете установить `tokenStatus` в значение `SecondOwner`, сначала определив ваш объект, а затем установив поле с помощью `entity.tokenStatus = "SecondOwner"`. Пример ниже демонстрирует, как будет выглядеть объект Token с полем перечисления: -More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). +Более подробную информацию о написании перечислений можно найти в [документации по GraphQL](https://graphql.org/learn/schema/). ### Связи объектов @@ -107,7 +107,7 @@ More detail on writing enums can be found in the [GraphQL documentation](https:/ #### Связи "Один к одному" -Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: +Определите тип объекта `Transaction` с необязательной связью "один к одному" с типом объекта `TransactionReceipt`: ```graphql type Transaction @entity(immutable: true) { @@ -123,7 +123,7 @@ type TransactionReceipt @entity(immutable: true) { #### Связи "Один ко многим" -Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: +Определите тип объекта `TokenBalance` с обязательной связью "один ко многим" с типом объекта `Token`: ```graphql type Token @entity(immutable: true) { @@ -139,13 +139,13 @@ type TokenBalance @entity { ### Обратные запросы -Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. 
Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. +Обратные поисковые запросы можно определить в объекте с помощью поля `@derivedFrom`. Это создает виртуальное поле в объекте, которое может быть запрашиваемо, но не может быть установлено вручную через API отображений. Вместо этого оно вычисляется на основе связи, определенной в другом объекте. Для таких отношений редко имеет смысл хранить обе стороны связи, и как производительность индексирования, так и производительность запросов будут лучше, если хранится только одна сторона связи, а другая извлекается. -Для связей "один ко многим" связь всегда должна храниться на стороне "один", а сторона "многие" всегда должна быть производной. Такое сохранение связи, вместо хранения массива объектов на стороне "многие", приведет к значительному повышению производительности как при индексации, так и при запросах к субграфам. В общем, следует избегать хранения массивов объектов настолько, насколько это возможно. +Для отношений «один ко многим» отношение всегда должно храниться на стороне «один», а сторона «многие» должна быть выведена. Хранение отношений таким образом, а не хранение массива объектов на стороне «многие», приведет к значительному улучшению производительности как при индексировании, так и при запросах к субграфу. В общем, следует избегать хранения массивов объектов, насколько это возможно на практике.
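The storage asymmetry can be pictured with a plain-TypeScript model (illustrative only; this is not graph-ts code, and the entity names follow the Token example): only the `token` reference on each `TokenBalance` row is persisted, while the derived side is computed by a lookup at query time, so there is never an array on the Token side to keep in sync.

```typescript
// Plain-TypeScript model of a @derivedFrom field (illustrative, not graph-ts).
// Only the "one" side of the relationship is persisted: each TokenBalance row
// stores its Token's id. The Token entity never stores an array of balances.
interface TokenBalance {
  id: string;
  token: string; // stored side of the relationship: the Token id
  amount: number;
}

const store: TokenBalance[] = [
  { id: "balance-1", token: "token-A", amount: 100 },
  { id: "balance-2", token: "token-A", amount: 250 },
  { id: "balance-3", token: "token-B", amount: 7 },
];

// Resolver for the virtual `tokenBalances` field on Token: the value is
// derived by filtering on the stored side, not read from the Token entity.
function deriveTokenBalances(tokenId: string): TokenBalance[] {
  return store.filter((balance) => balance.token === tokenId);
}
```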
#### Пример -We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: +Мы можем сделать балансы для токена доступными из токена, создав поле `tokenBalances`: ```graphql type Token @entity(immutable: true) { @@ -160,15 +160,15 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Вот пример того, как написать мэппинг для субграфа с обратными поисковыми запросами: ```typescript -let token = new Token(event.address) // Create Token -token.save() // tokenBalances is derived automatically +let token = new Token(event.address) // Создание токена +token.save() // tokenBalances определяется автоматически let tokenBalance = new TokenBalance(event.address) tokenBalance.amount = BigInt.fromI32(0) -tokenBalance.token = token.id // Reference stored here +tokenBalance.token = token.id // Ссылка на токен сохраняется здесь tokenBalance.save() ``` @@ -178,7 +178,7 @@ tokenBalance.save() #### Пример -Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +Определите обратный поиск от объекта `User` к объекту `Organization`. В примере ниже это достигается через поиск атрибута `members` внутри объекта `Organization`. В запросах поле `organizations` на объекте `User` будет разрешаться путем поиска всех объектов `Organization`, которые включают идентификатор пользователя. 
```graphql type Organization @entity { @@ -194,7 +194,7 @@ type User @entity { } ``` -A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like +Более эффективный способ хранения этих отношений — это использование таблицы отображений, которая содержит одну запись для каждой пары `User` / `Organization` с такой схемой ```graphql type Organization @entity { @@ -231,11 +231,11 @@ query usersWithOrganizations { } ``` -Такой более сложный способ хранения связей "многие ко многим" приведет к уменьшению объема хранимых данных для субграфа и, следовательно, к тому, что субграф будет значительно быстрее индексироваться и запрашиваться. +Этот более сложный способ хранения отношений многие ко многим приведет к меньшему объему данных, хранимых для субграфа, что, в свою очередь, сделает субграф значительно быстрее как при индексировании, так и при запросах. ### Добавление комментариев к схеме -As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: +Согласно спецификации GraphQL, комментарии могут быть добавлены над атрибутами объектов схемы с использованием символа решетки `#`. Это показано в следующем примере: ```graphql type MyFirstEntity @entity { @@ -251,7 +251,7 @@ type MyFirstEntity @entity { Определение полнотекстового запроса включает в себя название запроса, словарь языка, используемый для обработки текстовых полей, алгоритм ранжирования, используемый для упорядочивания результатов, и поля, включенные в поиск. Каждый полнотекстовый запрос может охватывать несколько полей, но все включенные поля должны относиться к одному типу объекта. -To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. +Чтобы добавить полнотекстовый запрос, включите тип `_Schema_` с директивой `fulltext` в схему GraphQL. 
```graphql type _Schema_ @@ -274,7 +274,7 @@ type Band @entity { } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/subgraphs/querying/graphql-api/#queries) for a description of the fulltext search API and more example usage. +Пример поля `bandSearch` может быть использован в запросах для фильтрации объектов `Band` на основе текстовых документов в полях `name`, `description` и `bio`. Перейдите к [GraphQL API - Запросы](/subgraphs/querying/graphql-api/#queries) для описания API полнотекстового поиска и других примеров использования. ```graphql query { @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Управление функциями](#экспериментальные-функции):** Начиная с `specVersion` `0.0.4` и далее, `fullTextSearch` должен быть объявлен в разделе `features` манифеста субграфа. 
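The feature declaration the note refers to is a short manifest fragment. A sketch (the `specVersion` value is illustrative; the feature name `fullTextSearch` is the one named in the note):

```yaml
# subgraph.yaml (fragment): opt in to fulltext search explicitly.
specVersion: 1.3.0
features:
  - fullTextSearch
```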
## Поддерживаемые языки @@ -295,30 +295,30 @@ query { Поддерживаемые языковые словари: -| Code | Словарь | +| Код | Словарь | | ------- | ------------- | -| простой | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | +| простой | Общий | +| da | Датский | +| nl | Голландский | +| en | Английский | +| fi | Финский | +| fr | Французский | +| de | Немецкий | +| hu | Венгерский | +| it | Итальянский | +| no | Норвежский | | pt | Португальский | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| ro | Румынский | +| ru | Русский | +| es | Испанский | +| sv | Шведский | +| tr | Турецкий | ### Алгоритмы ранжирования Поддерживаемые алгоритмы для упорядочивания результатов: -| Algorithm | Description | +| Алгоритм | Описание | | ------------- | ---------------------------------------------------------------------------------------------- | | rank | Используйте качество соответствия (0-1) полнотекстового запроса, чтобы упорядочить результаты. | -| proximityRank | Similar to rank but also includes the proximity of the matches. | +| proximityRank | Похоже на рейтинг, но также учитывает близость совпадений. | diff --git a/website/src/pages/ru/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ru/subgraphs/developing/creating/starting-your-subgraph.mdx index 8136fb559cff..0103ec85f145 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Обзор -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
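Matchstick tests are written in AssemblyScript alongside the mappings. A minimal sketch, assuming a generated `Gravatar` entity from the example schema and the `matchstick-as` helpers (names are illustrative):

```typescript
// Matchstick unit-test sketch (AssemblyScript; assumes `graph codegen` has
// produced the Gravatar entity class and matchstick-as is installed).
import { assert, test, clearStore } from 'matchstick-as/assembly/index'
import { Gravatar } from '../generated/schema'

test('Gravatar entity is stored with its display name', () => {
  let gravatar = new Gravatar('0x1')
  gravatar.displayName = 'First Gravatar'
  gravatar.save()

  // Check the persisted field directly against the mock store.
  assert.fieldEquals('Gravatar', '0x1', 'displayName', 'First Gravatar')
  clearStore()
})
```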
+ +| Версия | Примечания к релизу | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/ru/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ru/subgraphs/developing/creating/subgraph-manifest.mdx index a8f1a728f47a..57530a38868e 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Обзор -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available to query.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -Один субграф может: +A single Subgraph can: - Индексировать данные из нескольких смарт-контрактов (но не из нескольких сетей). @@ -24,12 +24,12 @@ The **subgraph definition** consists of the following files: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
+> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). Важными элементами манифеста, которые необходимо обновить, являются: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. 
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Обработчики событий -Обработчики событий в субграфе реагируют на конкретные события, генерируемые смарт-контрактами в блокчейне, и запускают обработчики, определенные в манифесте подграфа. 
Это позволяет субграфам обрабатывать и хранить данные о событиях в соответствии с определенной логикой. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Определение обработчика событий -Обработчик событий объявлен внутри источника данных в конфигурации YAML субграфа. Он определяет, какие события следует прослушивать, и соответствующую функцию, которую необходимо выполнить при обнаружении этих событий. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Обработчики вызовов -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Обработчики вызовов срабатывают только в одном из двух случаев: когда указанная функция вызывается учетной записью, отличной от самого контракта, или когда она помечена как внешняя в Solidity и вызывается как часть другой функции в том же контракте. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Определение обработчика вызова @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Функция мэппинга -Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Обработчики блоков -В дополнение к подписке на события контракта или вызовы функций, субграф может захотеть обновить свои данные по мере добавления в цепочку новых блоков. Чтобы добиться этого, субграф может запускать функцию после каждого блока или после блоков, соответствующих заранее определенному фильтру. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Поддерживаемые фильтры @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.
Отсутствие фильтра для обработчика блоков гарантирует, что обработчик вызывается для каждого блока. Источник данных может содержать только один обработчик блоков для каждого типа фильтра. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Однократный фильтр @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Определенный обработчик с однократным фильтром будет вызываться только один раз перед запуском всех остальных обработчиков. Эта конфигурация позволяет субграфу использовать обработчик в качестве обработчика инициализации, выполняя определенные задачи в начале индексирования. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Функция мэппинга -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. 
Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Стартовые блоки -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. 
Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Подсказки индексатору -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Сокращение -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> Термин «история» в контексте субграфов означает хранение данных, отражающих старые состояния изменяемых объектов. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
История данного блока необходима для: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Отката субграфа обратно к этому блоку +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block Если исторические данные на момент создания блока были удалены, вышеупомянутые возможности будут недоступны. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: Чтобы сохранить определенный объем исторических данных: @@ -532,3 +532,18 @@ For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/# indexerHints: prune: never ``` + +## SpecVersion Releases + +| Версия | Примечания к релизу | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features.
| diff --git a/website/src/pages/ru/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ru/subgraphs/developing/creating/unit-testing-framework.mdx index a747fd939efb..336ce2398d0d 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Фреймворк модульного тестирования --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Начало работы @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
+To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### Параметры CLI @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Демонстрационный субграф +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Видеоуроки -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Структура тестов -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im Вот и все - мы создали наш первый тест! 
👏 -Теперь, чтобы запустить наши тесты, Вам просто нужно запустить в корневой папке своего субграфа следующее: +Now, in order to run our tests, you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,11 +1289,11 @@ test('file/ipfs dataSource creation example', () => { ## Тестовое покрытие -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. -### Prerequisites +### Предварительные требования To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: @@ -1311,7 +1311,7 @@ In order for that function to be visible (for it to be included in the `wat` fil export { handleNewGravatar } ``` -### Usage +### Применение После того как всё это будет настроено, чтобы запустить инструмент тестового покрытия, просто запустите: @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Дополнительные ресурсы -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Обратная связь diff --git a/website/src/pages/ru/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ru/subgraphs/developing/deploying/multiple-networks.mdx index 8ae55fbd8bcc..4f15c642b820 100644 --- a/website/src/pages/ru/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ru/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Развертывание субграфа в нескольких сетях +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. 
To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## Развертывание подграфа в нескольких сетях +## Deploying the Subgraph to multiple networks -В некоторых случаях вы захотите развернуть один и тот же подграф в нескольких сетях, не дублируя весь его код. Основная проблема, возникающая при этом, заключается в том, что адреса контрактов в этих сетях разные. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Использование `graph-cli` @@ -19,7 +20,7 @@ This page explains how to deploy a subgraph to multiple networks. To deploy a su --network-file Путь к файлу конфигурации сетей (по умолчанию: "./networks.json") ``` -Вы можете использовать опцию `--network` для указания конфигурации сети из стандартного файла `json` (по умолчанию используется `networks.json`), чтобы легко обновлять свой субграф во время разработки. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Примечание: Команда `init` теперь автоматически сгенерирует `networks.json` на основе предоставленной информации. Затем Вы сможете обновить существующие или добавить дополнительные сети. @@ -53,7 +54,7 @@ This page explains how to deploy a subgraph to multiple networks. 
To deploy a su > Примечание: Вам не нужно указывать ни один из `templates` (если они у Вас есть) в файле конфигурации, только `dataSources`. Если есть какие-либо `templates`, объявленные в файле `subgraph.yaml`, их сеть будет автоматически обновлена до указанной с помощью опции `--network`. -Теперь давайте предположим, что Вы хотите иметь возможность развернуть свой субграф в сетях `mainnet` и `sepolia`, и это Ваш `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -95,7 +96,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -Команда `build` обновит Ваш `subgraph.yaml` конфигурацией `sepolia`, а затем повторно скомпилирует субграф. Ваш файл `subgraph.yaml` теперь должен выглядеть следующим образом: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -126,7 +127,7 @@ yarn deploy --network sepolia --network-file path/to/config Одним из способов параметризации таких аспектов, как адреса контрактов, с использованием старых версий `graph-cli` является генерация его частей с помощью системы шаблонов, такой как [Mustache](https://mustache.github.io/) или [Handlebars](https://handlebarsjs.com/). -Чтобы проиллюстрировать этот подход, давайте предположим, что субграф должен быть развернут в майннете и в сети Sepolia с использованием разных адресов контракта. Затем Вы можете определить два файла конфигурации, содержащие адреса для каждой сети: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. 
You could then define two config files providing the addresses for each network: ```json { @@ -178,7 +179,7 @@ dataSources: } ``` -Чтобы развернуть этот субграф для основной сети или сети Sepolia, Вам нужно просто запустить одну из двух следующих команд: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -192,25 +193,25 @@ yarn prepare:sepolia && yarn deploy **Примечание:** Этот подход также можно применять в более сложных ситуациях, когда необходимо заменить не только адреса контрактов и сетевые имена, но и сгенерировать мэппинги или ABI из шаблонов. -Это предоставит Вам `chainHeadBlock`, который Вы сможете сравнить с `latestBlock` своего субграфа, чтобы проверить, не отстает ли он. `synced` сообщает, попал ли субграф в чейн. `health` в настоящее время может принимать значения `healthy`, если ошибки отсутствуют, или `failed`, если произошла ошибка, остановившая работу субграфа. В этом случае Вы можете проверить поле `fatalError` для получения подробной информации об этой ошибке. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
-## Политика архивирования подграфов в Subgraph Studio +## Subgraph Studio Subgraph archive policy -Версия субграфа в Studio архивируется, если и только если выполняются следующие критерии: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - Версия не опубликована в сети (или ожидает публикации) - Версия была создана 45 или более дней назад -Субграф не запрашивался в течение 30 дней +The Subgraph hasn't been queried in 30 days -Кроме того, когда развертывается новая версия, если субграф не был опубликован, то версия N-2 субграфа архивируется. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -У каждого подграфа, затронутого этой политикой, есть возможность вернуть соответствующую версию обратно. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Проверка работоспособности подграфа +## Checking Subgraph health -Если подграф успешно синхронизируется, это хороший признак того, что он будет работать надёжно. Однако новые триггеры в сети могут привести к тому, что ваш подграф попадет в состояние непроверенной ошибки, или он может начать отставать из-за проблем с производительностью или проблем с операторами нод. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node предоставляет конечную точку GraphQL, которую Вы можете запросить для проверки статуса своего субграфа. В хостинговом сервисе он доступен по адресу `https://api.thegraph.com/index-node/graphql`. На локальной ноде он по умолчанию доступен через порт `8030/graphql`.
Полную схему для этой конечной точки можно найти [здесь](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Вот пример запроса, проверяющего состояние текущей версии субграфа: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -237,4 +238,4 @@ Graph Node предоставляет конечную точку GraphQL, ко } ``` -Это предоставит Вам `chainHeadBlock`, который Вы сможете сравнить с `latestBlock` своего субграфа, чтобы проверить, не отстает ли он. `synced` сообщает, попал ли субграф в чейн. `health` в настоящее время может принимать значения `healthy`, если ошибки отсутствуют, или `failed`, если произошла ошибка, остановившая работу субграфа. В этом случае Вы можете проверить поле `fatalError` для получения подробной информации об этой ошибке. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
diff --git a/website/src/pages/ru/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ru/subgraphs/developing/deploying/using-subgraph-studio.mdx index e1aadd279a0b..3ff9c8594763 100644 --- a/website/src/pages/ru/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ru/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Узнайте, как развернуть свой субграф в Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
 ## Обзор Subgraph Studio
 
 В [Subgraph Studio](https://thegraph.com/studio/) Вы можете выполнять следующие действия:
 
-- Просматривать список созданных Вами субграфов
-- Управлять, просматривать детали и визуализировать статус конкретного субграфа
-- Создание и управление ключами API для определенных подграфов
+- View a list of Subgraphs you've created
+- Manage, view details, and visualize the status of a specific Subgraph
+- Create and manage your API keys for specific Subgraphs
 - Ограничивать использование своих API-ключей определенными доменами и разрешать только определенным индексаторам выполнять запросы с их помощью
-- Создавать свой субграф
-- Развертывать свой субграф, используя The Graph CLI
-- Тестировать свой субграф в тестовой среде Playground
-- Интегрировать свой субграф на стадии разработки, используя URL запроса разработки
-- Публиковать свой субграф в The Graph Network
+- Create your Subgraph
+- Deploy your Subgraph using The Graph CLI
+- Test your Subgraph in the playground environment
+- Integrate your Subgraph in staging using the development query URL
+- Publish your Subgraph to The Graph Network
 - Управлять своими платежами
 
 ## Установка The Graph CLI
@@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli
 
 1. Откройте [Subgraph Studio](https://thegraph.com/studio/).
 2. Подключите свой кошелек для входа.
    - Вы можете это сделать через MetaMask, Coinbase Wallet, WalletConnect или Safe.
-3. После входа в систему Ваш уникальный ключ развертывания будет отображаться на странице сведений о Вашем субграфе.
-   - Ключ развертывания позволяет публиковать субграфы, а также управлять вашими API-ключами и оплатой. Он уникален, но может быть восстановлен, если Вы подозреваете, что он был взломан.
+3. After you sign in, your unique deploy key will be displayed on your Subgraph details page.
+   - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
-> Важно: для выполнения запросов к субграфам необходим API-ключ
+> Important: You need an API key to query Subgraphs
 
 ### How to Create a Subgraph in Subgraph Studio
 
@@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli
 
 ### Совместимость подграфов с сетью Graph
 
-In order to be supported by Indexers on The Graph Network, subgraphs must:
-
-- Index a [supported network](/supported-networks/)
-- Не должны использовать ни одну из следующих функций:
-  - ipfs.cat & ipfs.map
-  - Нефатальные ошибки
-  - Grafting
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.
 
 ## Инициализация Вашего Субграфа
 
-После создания субграфа в Subgraph Studio Вы можете инициализировать его код через CLI с помощью следующей команды:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
 
 ```bash
 graph init <SUBGRAPH_SLUG>
 ```
 
-Значение `<SUBGRAPH_SLUG>` можно найти на странице сведений о субграфе в Subgraph Studio, см. изображение ниже:
+You can find the `<SUBGRAPH_SLUG>` value on your Subgraph details page in Subgraph Studio, see image below:
 
 ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png)
 
-После запуска `graph init` Вам будет предложено ввести адрес контракта, сеть и ABI, которые Вы хотите запросить. Это приведет к созданию новой папки на Вашем локальном компьютере с базовым кодом для начала работы над субграфом. Затем Вы можете завершить работу над своим субграфом, чтобы убедиться, что он функционирует должным образом.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
 ## Аутентификация в Graph
 
-Прежде чем Вы сможете развернуть свой субграф в Subgraph Studio, Вам будет необходимо войти в свою учетную запись в CLI. Для этого Вам понадобится ключ развертывания, который Вы сможете найти на странице сведений о субграфе.
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page.
 
 После этого используйте следующую команду для аутентификации через CLI:
 
@@ -91,11 +85,11 @@ graph auth
 
 ## Развертывание субграфа
 
-Когда будете готовы, Вы сможете развернуть свой субграф в Subgraph Studio.
+Once you are ready, you can deploy your Subgraph to Subgraph Studio.
 
-> Развертывание субграфа с помощью CLI отправляет его в Studio, где Вы сможете протестировать его и обновить метаданные. Это действие не приводит к публикации субграфа в децентрализованной сети.
+> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
 
-Используйте следующую команду CLI для развертывания своего субграфа:
+Use the following CLI command to deploy your Subgraph:
 
 ```bash
 graph deploy <SUBGRAPH_SLUG>
 ```
@@ -108,30 +102,30 @@ graph deploy
 
 ## Тестирование Вашего субграфа
 
-После развертывания Вы можете протестировать свой субграф (в Subgraph Studio или в собственном приложении, используя URL-адрес запроса на развертывание), развернуть другую версию, обновить метаданные и, когда будете готовы, опубликовать в [Graph Explorer](https://thegraph.com/explorer).
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
 
-Используйте Subgraph Studio, чтобы проверить логи на панели управления и обнаружить возможные ошибки в своем субграфе.
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
 
 ## Публикация Вашего субграфа
 
-In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
 
 ## Управление версиями Вашего субграфа с помощью CLI
 
-Если Вы хотите обновить свой субграф, Вы можете сделать следующее:
+If you want to update your Subgraph, you can do the following:
 
 - Вы можете развернуть новую версию в Studio, используя CLI (на этом этапе она будет только приватной).
 - Если результат Вас устроит, Вы можете опубликовать новое развертывание в [Graph Explorer](https://thegraph.com/explorer).
-- Это действие создаст новую версию вашего субграфа, о которой Кураторы смогут начать сигнализировать, а Индексаторы — индексировать.
+- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index.
 
-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment.
+You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment.
-> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
+> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
 
 ## Автоматическое архивирование версий подграфа
 
-Каждый раз, когда Вы развертываете новую версию субграфа в Subgraph Studio, предыдущая версия архивируется. Архивированные версии не будут проиндексированы/синхронизированы и, следовательно, их нельзя будет запросить. Вы можете разархивировать архивированную версию своего субграфа в Subgraph Studio.
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio.
 
-> Примечание: предыдущие версии непубликованных субграфов, развернутых в Studio, будут автоматически архивированы.
+> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived.
 ![Subgraph Studio - Unarchive](/img/Unarchive.png)
diff --git a/website/src/pages/ru/subgraphs/developing/developer-faq.mdx b/website/src/pages/ru/subgraphs/developing/developer-faq.mdx
index 4c5aa00bf9cf..a86d764816c8 100644
--- a/website/src/pages/ru/subgraphs/developing/developer-faq.mdx
+++ b/website/src/pages/ru/subgraphs/developing/developer-faq.mdx
@@ -1,43 +1,43 @@
 ---
 title: Developer FAQ
-sidebarTitle: FAQ
+sidebarTitle: Часто задаваемые вопросы
 ---
 
 На этой странице собраны некоторые из наиболее частых вопросов для разработчиков, использующих The Graph.
 
 ## Вопросы, связанные с субграфом
 
-### 1. Что такое субграф?
+### 1. What is a Subgraph?
 
-Субграф - это пользовательский API, построенный на данных блокчейна. Субграфы запрашиваются с использованием языка запросов GraphQL и развертываются на Graph Node с помощью Graph CLI. После развертывания и публикации в децентрализованной сети The Graph индексаторы обрабатывают субграфы и делают их доступными для запросов потребителей субграфов.
+A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query.
 
-### 2. Каков первый шаг в создании субграфа?
+### 2. What is the first step to create a Subgraph?
 
-To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
+To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
 
-### 3. Могу ли я создать субграф, если в моих смарт-контрактах нет событий?
+### 3. Can I still create a Subgraph if my smart contracts don't have events?
 
-Настоятельно рекомендуется структурировать смарт-контракты так, чтобы они содержали события, связанные с данными, которые вы хотите запросить. Обработчики событий в субграфе срабатывают на события контракта и являются самым быстрым способом получения нужных данных.
+It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data.
 
-Если контракты, с которыми Вы работаете, не содержат событий, Ваш субграф может использовать обработчики вызовов и блоков для запуска индексации. Хотя это не рекомендуется, так как производительность будет существенно ниже.
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
 
-### 4. Могу ли я изменить учетную запись GitHub, связанную с моим субграфом?
+### 4. Can I change the GitHub account associated with my Subgraph?
 
-Нет. После создания субграфа связанная с ним учетная запись GitHub не может быть изменена. Пожалуйста, учтите это перед созданием субграфа.
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.
 
-### 5. Как обновить субграф в майннете?
+### 5. How do I update a Subgraph on mainnet?
 
-Вы можете развернуть новую версию своего субграфа в Subgraph Studio с помощью интерфейса командной строки (CLI). Это действие сохраняет конфиденциальность вашего субграфа, но, если результат Вас удовлетворит, Вы сможете опубликовать его в Graph Explorer. При этом будет создана новая версия Вашего субграфа, на которую Кураторы смогут начать подавать сигналы.
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on.
 
-### 6. Можно ли дублировать субграф на другую учетную запись или конечную точку без повторного развертывания?
+### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?
 
-Вы должны повторно развернуть субграф, но если идентификатор субграфа (хэш IPFS) не изменится, его не нужно будет синхронизировать с самого начала.
+You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.
 
-### 7. Как вызвать контрактную функцию или получить доступ к публичной переменной состояния из моих мэппингов субграфа?
+### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
 
 Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).
 
-### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings?
 
 В настоящее время нет, так как мэппинги написаны на языке AssemblyScript.
 
@@ -45,15 +45,15 @@ Take a look at `Access to smart contract` state inside the section [AssemblyScri
 
 ### 9. При прослушивании нескольких контрактов, возможно ли выбрать порядок прослушивания событий контрактов?
 
-Внутри субграфа события всегда обрабатываются в том порядке, в котором они появляются в блоках, независимо от того, относится ли это к нескольким контрактам или нет.
+Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
 
 ### 10. Чем шаблоны отличаются от источников данных?
 
-Шаблоны позволяют Вам быстро создавать источники данных, пока Ваш субграф индексируется. Ваш контракт может создавать новые контракты по мере того, как люди будут с ним взаимодействовать. Поскольку форма этих контрактов (ABI, события и т. д.) известна заранее, Вы сможете определить, как Вы хотите индексировать их в шаблоне. Когда они будут сгенерированы, Ваш субграф создаст динамический источник данных, предоставив адрес контракта.
+Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address.
 
 Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).
 
-### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
 
 Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other.
 
@@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest
 
 If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique.
 
-### 15. Могу ли я удалить свой субграф?
+### 15. Can I delete my Subgraph?
 
-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph.
+Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph.
 
 ## Вопросы, связанный с сетью
 
@@ -110,11 +110,11 @@ dataSource.address()
 
 Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
 
-### 20. Есть ли какие-либо советы по увеличению производительности индексирования? Синхронизация моего субграфа занимает очень много времени
+### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync
 
 Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
 
-### 21. Есть ли способ напрямую запросить субграф, чтобы определить номер последнего проиндексированного блока?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?
 
 Да! Попробуйте выполнить следующую команду, заменив "organization/subgraphName" на название организации, под которой она опубликована, и имя Вашего субграфа:
 
@@ -132,7 +132,7 @@ someCollection(first: 1000, skip: <number>) { ... }
 
 ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
 
-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
 
 ## Прочее
 
diff --git a/website/src/pages/ru/subgraphs/developing/introduction.mdx b/website/src/pages/ru/subgraphs/developing/introduction.mdx
index d5b1df06feae..8afe64411063 100644
--- a/website/src/pages/ru/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/ru/subgraphs/developing/introduction.mdx
@@ -1,6 +1,6 @@
 ---
 title: Introduction to Subgraph Development
-sidebarTitle: Introduction
+sidebarTitle: Введение
 ---
 
 To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start/).
 
@@ -11,21 +11,21 @@ To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start
 
 На The Graph Вы можете:
 
-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Использовать GraphQL для запроса существующих субграфов.
+1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing Subgraphs.
 
 ### Что такое GraphQL?
 
-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
 
 ### Действия разработчика
 
-- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
-- Создавайте собственные субграфы для удовлетворения конкретных потребностей в данных, обеспечивая улучшенную масштабируемость и гибкость для других разработчиков.
-- Развертывайте, публикуйте и сигнализируйте о своих субграфах в The Graph Network.
+- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your Subgraphs within The Graph Network.
 
-### Что такое субграфы?
+### What are Subgraphs?
 
-Субграф — это пользовательский API, созданный на основе данных блокчейна. Он извлекает данные из блокчейна, обрабатывает их и сохраняет так, чтобы их можно было легко запросить через GraphQL.
+A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
 
-Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
diff --git a/website/src/pages/ru/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ru/subgraphs/developing/managing/deleting-a-subgraph.mdx
index 5787620c079a..84674685403f 100644
--- a/website/src/pages/ru/subgraphs/developing/managing/deleting-a-subgraph.mdx
+++ b/website/src/pages/ru/subgraphs/developing/managing/deleting-a-subgraph.mdx
@@ -1,31 +1,31 @@
 ---
-title: Deleting a Subgraph
+title: Удаление субграфа
 ---
 
-Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/).
+Удалите свой субграф, используя [Subgraph Studio](https://thegraph.com/studio/).
 
-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
+> Удаление вашего субграфа удалит все опубликованные версии из сети The Graph, но он останется видимым в Graph Explorer и Subgraph Studio для пользователей, которые на него сигнализировали.
 
 ## Пошаговое руководство
 
-1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
+1. Перейдите на страницу субграфа в [Subgraph Studio](https://thegraph.com/studio/).
 
-2. Click on the three-dots to the right of the "publish" button.
+2. Нажмите на три точки справа от кнопки "опубликовать".
 
-3. Click on the option to "delete this subgraph":
+3. Нажмите на опцию "удалить этот субграф":
 
-   ![Delete-subgraph](/img/Delete-subgraph.png)
+   ![Удалить субграф](/img/Delete-subgraph.png)
 
-4. Depending on the subgraph's status, you will be prompted with various options.
+4. В зависимости от состояния субграфа, вам будут предложены различные варианты.
 
-   - If the subgraph is not published, simply click “delete” and confirm.
-   - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
+   - Если субграф не опубликован, просто нажмите «удалить» и подтвердите действие.
+   - Если субграф опубликован, вам нужно будет подтвердить действие в вашем кошельке перед его удалением из Studio. Если субграф опубликован в нескольких сетях, таких как тестовая сеть и основная сеть, могут потребоваться дополнительные шаги.
 
-> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner.
+> Если владелец субграфа подал сигнал на него, сигнализированный GRT будет возвращен владельцу.
 
-### Important Reminders
+### Важные напоминания
 
-- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
-- Кураторы больше не смогут сигналить на сабграф.
-- Кураторы, уже подавшие сигнал на субграф, могут отозвать свой сигнал по средней цене доли.
-- Deleted subgraphs will show an error message.
+- Как только вы удалите субграф, он **не** будет отображаться на главной странице Graph Explorer. Однако пользователи, которые сделали сигнал на него, все еще смогут просматривать его на своих профилях и удалить свой сигнал.
+- Кураторы больше не смогут сигнализировать о субграфе.
+- Кураторы, которые уже сигнализировали о субграфе, могут отозвать свой сигнал по средней цене доли.
+- Удалённые субграфы будут показывать сообщение об ошибке.
diff --git a/website/src/pages/ru/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ru/subgraphs/developing/managing/transferring-a-subgraph.mdx
index bc76890218f7..f99757ea07e9 100644
--- a/website/src/pages/ru/subgraphs/developing/managing/transferring-a-subgraph.mdx
+++ b/website/src/pages/ru/subgraphs/developing/managing/transferring-a-subgraph.mdx
@@ -2,18 +2,18 @@
 title: Transferring a Subgraph
 ---
 
-Субграфы, опубликованные в децентрализованной сети, имеют NFT, сминченный по адресу, опубликовавшему субграф. NFT основан на стандарте ERC721, который облегчает переводы между аккаунтами в The Graph Network.
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
 
 ## Напоминания
 
-- Тот, кто владеет NFT, управляет субграфом.
-- Если владелец решит продать или передать NFT, он больше не сможет редактировать или обновлять этот субграф в сети.
-- Вы можете легко перенести управление субграфом на мультиподпись.
-- Участник сообщества может создать субграф от имени DAO.
+- Whoever owns the NFT controls the Subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network.
+- You can easily move control of a Subgraph to a multi-sig.
+- A community member can create a Subgraph on behalf of a DAO.
 
 ## Просмотр Вашего субграфа как NFT
 
-Чтобы просмотреть свой субграф как NFT, Вы можете посетить маркетплейс NFT, например, **OpenSea**:
+To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
 
 ```
 https://opensea.io/your-wallet-address
 ```
@@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres
 
 ## Пошаговое руководство
 
-Чтобы передать право собственности на субграф, выполните следующие действия:
+To transfer ownership of a Subgraph, do the following:
 
 1. Используйте встроенный в Subgraph Studio пользовательский интерфейс:
 
-   ![Передача права собственности на субграф](/img/subgraph-ownership-transfer-1.png)
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png)
 
-2. Выберите адрес, на который хотели бы передать субграф:
+2. Choose the address that you would like to transfer the Subgraph to:
 
    ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
diff --git a/website/src/pages/ru/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ru/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index bf789c87b2b0..8838c90b6889 100644
--- a/website/src/pages/ru/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/ru/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -1,10 +1,11 @@
 ---
 title: Публикация подграфа в децентрализованной сети
+sidebarTitle: Publishing to the Decentralized Network
 ---
 
-Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
+Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
-Публикуя субграф в децентрализованной сети, Вы делаете его доступным для:
+When you publish a Subgraph to the decentralized network, you make it available for:
 
 - [Curators](/resources/roles/curating/) to begin curating it.
 - [Indexers](/indexing/overview/) to begin indexing it.
 
@@ -17,23 +18,23 @@ Check out the list of [supported networks](/supported-networks/).
 
 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard
 2. Click on the **Publish** button
-3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
+3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
 
-Все опубликованные версии существующего субграфа могут:
+All published versions of an existing Subgraph can:
 
 - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/).
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published.
+- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published.
 
-### Обновление метаданных опубликованного субграфа
+### Updating metadata for a published Subgraph
 
-- После публикации своего субграфа в децентрализованной сети Вы можете в любое время обновить метаданные в Subgraph Studio.
+- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
 - После сохранения изменений и публикации обновлений они появятся в Graph Explorer.
 - Важно отметить, что этот процесс не приведет к созданию новой версии, поскольку Ваше развертывание не изменилось.
 
 ## Публикация с помощью CLI
 
-As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
+As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
 
 1. Откройте `graph-cli`.
 2. Use the following commands: `graph codegen && graph build` then `graph publish`.
 
@@ -43,7 +44,7 @@
 
 ### Настройка Вашего развертывания
 
-Вы можете загрузить сборку своего субграфа на конкретную ноду IPFS и дополнительно настроить развертывание с помощью следующих флагов:
+You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags:
 
 ```
 USAGE
@@ -63,31 +64,31 @@ FLAGS
 
 ## Добавление сигнала к Вашему субграфу
 
-Разработчики могут добавлять сигнал GRT в свои субграфы, чтобы стимулировать Индексаторов запрашивать субграф.
+Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph.
 
-- Если субграф имеет право на вознаграждение за индексирование, Индексаторы, предоставившие «доказательство индексирования», получат вознаграждение GRT в зависимости от заявленной суммы GRT.
+- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
 
-- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
 
 - Specific supported networks can be checked [here](/supported-networks/).
 
-> Добавление сигнала в субграф, который не имеет права на получение вознаграждения, не привлечет дополнительных Индексаторов.
+> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers.
>
-> Если Ваш субграф имеет право на получение вознаграждения, рекомендуется курировать собственный субграф, добавив как минимум 3,000 GRT, чтобы привлечь дополнительных Индексаторов для индексирования Вашего субграфа.
+> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph.

-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.

-При подаче сигнала Кураторы могут решить подать сигнал на определенную версию субграфа или использовать автомиграцию. Если они подают сигнал с помощью автомиграции, доли куратора всегда будут обновляться до последней версии, опубликованной разработчиком. Если же они решат подать сигнал на определенную версию, доли всегда будут оставаться на этой конкретной версии.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.

-Индексаторы могут находить субграфы для индексирования на основе сигналов курирования, которые они видят в Graph Explorer.
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio позволяет Вам добавлять сигнал в Ваш субграф, добавляя GRT в пул курирования Вашего субграфа в той же транзакции, в которой он публикуется. +Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Кроме того, Вы можете добавить сигнал GRT к опубликованному субграфу из Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ru/subgraphs/developing/subgraphs.mdx b/website/src/pages/ru/subgraphs/developing/subgraphs.mdx index 62134b0551ae..8945ce707d0e 100644 --- a/website/src/pages/ru/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ru/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Субграфы ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. 
To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). ## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Жизненный цикл подграфа -Ниже представлен общий обзор жизненного цикла субграфа: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. 
[Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). 
In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. +- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. 
To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. -- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. 
+- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. ### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ru/subgraphs/explorer.mdx b/website/src/pages/ru/subgraphs/explorer.mdx index 34a535683fca..b963e985dd99 100644 --- a/website/src/pages/ru/subgraphs/explorer.mdx +++ b/website/src/pages/ru/subgraphs/explorer.mdx @@ -2,70 +2,70 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Обзор -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. -## Inside Explorer +## Внутреннее устройство Explorer -The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide). +Ниже приведён обзор всех ключевых функций Graph Explorer. Для получения дополнительной помощи Вы можете посмотреть [видеоруководство по Graph Explorer](/subgraphs/explorer/#video-guide). -### Subgraphs Page +### Страница субграфов -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Ваши готовые субграфы +- Your own finished Subgraphs - Субграфы, опубликованные другими -- Конкретный скбграф, который Вам нужен (в зависимости от даты создания, количества сигналов или имени). +- The exact Subgraph you want (based on the date created, signal amount, or name). 
-![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Изображение Explorer 1](/img/Subgraphs-Explorer-Landing.png) -Нажав на субграф, Вы сможете сделать следующее: +When you click into a Subgraph, you will be able to do the following: - Протестировать запросы на тестовой площадке и использовать данные сети для принятия обоснованных решений. -- Подать сигнал GRT на свой собственный субграф или субграфы других, чтобы обратить внимание индексаторов на их значимость и качество. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. -![Explorer Image 2](/img/Subgraph-Details.png) +![Изображение Explorer 2](/img/Subgraph-Details.png) -На специальной странице каждого субграфа Вы можете выполнить следующие действия: +On each Subgraph’s dedicated page, you can do the following: -- Сигнал/снятие сигнала на субграфах +- Signal/Un-signal on Subgraphs - Просмотр дополнительных сведений, таких как диаграммы, текущий идентификатор развертывания и другие метаданные -- Переключение версии с целью изучения прошлых итераций субграфа -- Запрос субграфов через GraphQL -- Тестовые субграфы на тренировочной площадке -- Просмотр индексаторов, индексирующих определенный субграф +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Статистика субграфов (распределения, кураторы и т. д.) 
-- Просмотр объекта, опубликовавшего субграф +- View the entity who published the Subgraph -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![Изображение Explorer 3](/img/Explorer-Signal-Unsignal.png) -### Delegate Page +### Страница Делегатора -On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer. +На [странице Делегатора](https://thegraph.com/explorer/delegate?chain=arbitrum-one) Вы можете найти информацию о делегировании, приобретении GRT и выборе Индексатора. -On this page, you can see the following: +На этой странице Вы можете увидеть следующее: -- Indexers who collected the most query fees -- Indexers with the highest estimated APR +- Индексаторы, собравшие наибольшее количество комиссий за запросы +- Индексаторы с самой высокой расчетной годовой процентной ставкой -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. -### Participants Page +### Страница участников -This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. +На этой странице представлен общий обзор всех «участников», включая всех участников сети, таких как Индексаторы, Делегаторы и Кураторы. #### 1. Индексаторы -![Explorer Image 4](/img/Indexer-Pane.png) +![Изображение Explorer 4](/img/Indexer-Pane.png) -Индексаторы являются основой протокола. Они стейкают на субграфы, индексируют их и обслуживают запросы всех, кто использует субграфы. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. 
-В таблице Индексаторов Вы можете увидеть параметры делегирования Индексаторов, их стейк, сумму стейка, которую они поставили на каждый субграф, а также размер дохода, который они получили от комиссий за запросы и вознаграждений за индексирование. +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Особенности** @@ -74,7 +74,7 @@ This page provides a bird's-eye view of all "participants," which includes every - Оставшееся время восстановления — время, оставшееся до того, как Индексатор сможет изменить вышеуказанные параметры делегирования. Периоды восстановления устанавливаются Индексаторами при обновлении параметров делегирования. - Собственность — это депозитная доля Индексатора, которая может быть урезана за злонамеренное или некорректное поведение. - Делегированный стейк — доля Делегаторов, которая может быть распределена Индексатором, но не может быть сокращена. -- Распределенный стейк — доля, которую Индексаторы активно распределяют между индексируемыми субграфами. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Доступная емкость делегирования — объем делегированной доли, которую Индексаторы всё ещё могут получить, прежде чем они будут перераспределены. - Максимальная емкость делегирования — максимальная сумма делегированной доли, которую Индексатор может продуктивно принять. Избыточная делегированная ставка не может быть использована для распределения или расчета вознаграждений. - Плата за запросы — это общая сумма комиссий, которую конечные пользователи заплатили за запросы Индексатора за все время. @@ -84,16 +84,16 @@ This page provides a bird's-eye view of all "participants," which includes every - Параметры индексирования можно задать, щелкнув мышью в правой части таблицы или перейдя в профиль Индексатора и нажав кнопку «Делегировать». 
-To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
+Чтобы узнать больше о том, как стать Индексатором, Вы можете ознакомиться с [официальной документацией](/indexing/overview/) или [руководствами для Индексаторов Академии The Graph](https://thegraph.academy/delegators/choosing-indexers/).

-![Indexing details pane](/img/Indexing-Details-Pane.png)
+![Панель сведений об индексировании](/img/Indexing-Details-Pane.png)

#### 2. Кураторы

-Кураторы анализируют субграфы, чтобы определить, какие из них имеют наивысшее качество. Найдя потенциально привлекательный субграф, Куратор может курировать его, отправляя сигнал на его кривую связывания. Таким образом, Кураторы сообщают Индексаторам, какие субграфы имеют высокое качество и должны быть проиндексированы.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.

-- Кураторами могут быть члены сообщества, потребители данных или даже разработчики субграфов, которые сигнализируют о своих собственных субграфах, внося токены GRT в кривую связывания.
-  - Внося GRT, Кураторы чеканят кураторские акции субграфа. В результате они могут заработать часть комиссий за запросы, сгенерированных субграфом, на который они подали сигнал.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- Кривая связывания стимулирует Кураторов отбирать источники данных самого высокого качества.

В приведенной ниже таблице «Куратор» вы можете увидеть:

@@ -102,9 +102,9 @@ To learn more about how to become an Indexer, you can take a look at the [offici

- Количество GRT, которое было внесено
- Количество акций, которыми владеет Куратор

-![Explorer Image 6](/img/Curation-Overview.png)
+![Изображение Explorer 6](/img/Curation-Overview.png)

-If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
+Если Вы хотите узнать больше о роли Куратора, Вы можете сделать это, ознакомившись с [официальной документацией](/resources/roles/curating/) или посетив [Академию The Graph](https://thegraph.academy/curators/).

#### 3. Делегаторы

@@ -112,14 +112,14 @@ If you want to learn more about the Curator role, you can do so by visiting [off

- Без Делегаторов Индексаторы с меньшей долей вероятности получат значительные вознаграждения и сборы. Таким образом, Индексаторы привлекают Делегаторов, предлагая им часть вознаграждения за индексацию и комиссию за запросы.
- Делегаторы выбирают Индексаторов на основе ряда различных переменных, таких как прошлые результаты, ставки вознаграждения за индексирование и снижение платы за запросы.
-- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
+- Репутация в сообществе может также повлиять на выбор. Вы можете связаться с Индексаторами через [Дискорд The Graph](https://discord.gg/graphprotocol) или [Форум The Graph](https://forum.thegraph.com/).
-![Explorer Image 7](/img/Delegation-Overview.png) +![Изображение Explorer 7](/img/Delegation-Overview.png) В таблице «Делегаторы» Вы можете увидеть активных в сообществе Делегаторов и важные показатели: - Количество Индексаторов, к которым делегирует Делегатор -- A Delegator's original delegation +- Первоначальная делегация Делегатора - Накопленные ими вознаграждения, которые они не вывели из протокола - Реализованные вознаграждения, которые они сняли с протокола - Общее количество GRT, которое у них имеется в настоящее время в протоколе @@ -127,9 +127,9 @@ If you want to learn more about the Curator role, you can do so by visiting [off If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). -### Network Page +### Страница сети -On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +На этой странице Вы можете увидеть глобальные ключевые показатели эффективности и получить возможность переключения на поэпохальную основу и более детально проанализировать сетевые метрики. Эти данные дадут Вам представление о том, как работает сеть на протяжении определённого времени. #### Обзор @@ -144,10 +144,10 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep Несколько важных деталей, на которые следует обратить внимание: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. 
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). -![Explorer Image 8](/img/Network-Stats.png) +![Изображение Explorer 8](/img/Network-Stats.png) #### Эпохи @@ -161,7 +161,7 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep - Эпохи распределения - это эпохи, в которых состояния каналов для эпох регулируются, и Индексаторы могут требовать скидки на комиссию за запросы. - Завершенные эпохи — это эпохи, в которых Индексаторы больше не могут заявить возврат комиссии за запросы. -![Explorer Image 9](/img/Epoch-Stats.png) +![Изображение Explorer 9](/img/Epoch-Stats.png) ## Ваш профиль пользователя @@ -174,19 +174,19 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep - Любое из текущих действий, которые Вы совершили. - Данные своего профиля, описание и веб-сайт (если Вы его добавили). 
-![Explorer Image 10](/img/Profile-Overview.png) +![Изображение Explorer 10](/img/Profile-Overview.png) ### Вкладка "Субграфы" -На вкладке «Субграфы» Вы увидите опубликованные вами субграфы. +In the Subgraphs tab, you’ll see your published Subgraphs. -> Сюда не будут включены субграфы, развернутые с помощью CLI в целях тестирования. Субграфы будут отображаться только после публикации в децентрализованной сети. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. -![Explorer Image 11](/img/Subgraphs-Overview.png) +![Изображение Explorer 11](/img/Subgraphs-Overview.png) ### Вкладка "Индексирование" -На вкладке «Индексирование» Вы найдете таблицу со всеми активными и прежними распределениями по субграфам. Вы также найдете диаграммы, на которых сможете увидеть и проанализировать свои прошлые результаты в качестве Индексатора. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Этот раздел также будет содержать подробную информацию о Ваших чистых вознаграждениях Индексатора и чистой комиссии за запросы. 
Вы увидите следующие показатели: @@ -197,7 +197,7 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep - Сокращение вознаграждений — процент вознаграждений Индексатора, который Вы сохраните при разделении с Делегаторами - Собственность — Ваша внесенная ставка, которая может быть уменьшена за злонамеренное или неправильное поведение -![Explorer Image 12](/img/Indexer-Stats.png) +![Изображение Explorer 12](/img/Indexer-Stats.png) ### Вкладка "Делегирование" @@ -219,20 +219,20 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep Имейте в виду, что эта диаграмма прокручивается по горизонтали, поэтому, если Вы прокрутите ее до конца вправо, Вы также сможете увидеть статус своего делегирования (делегирование, отмена делегирования, возможность отзыва). -![Explorer Image 13](/img/Delegation-Stats.png) +![Изображение Explorer 13](/img/Delegation-Stats.png) ### Вкладка "Курирование" -На вкладке «Курирование» Вы найдете все субграфы, на которые Вы подаете сигналы (что позволит Вам получать комиссию за запросы). Сигнализация позволяет Кураторам указывать Индексаторам, какие субграфы являются ценными и заслуживающими доверия, тем самым сигнализируя о том, что их необходимо проиндексировать. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. 
На данной вкладке Вы найдете обзор: -- Всех субграфов, которые Вы курируете, с подробной информацией о сигнале -- Общего количества акций на субграф -- Вознаграждений за запрос за субграф +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Даты обновления данных -![Explorer Image 14](/img/Curation-Stats.png) +![Изображение Explorer 14](/img/Curation-Stats.png) ### Параметры Вашего профиля @@ -241,7 +241,7 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep - Операторы выполняют ограниченные действия в протоколе от имени Индексатора, такие как открытие и закрытие распределения. Операторами обычно являются другие адреса Ethereum, отдельные от их кошелька для ставок, с ограниченным доступом к сети, который Индексаторы могут настроить лично - Параметры делегирования позволяют Вам контролировать распределение GRT между Вами и Вашими Делегаторами. -![Explorer Image 15](/img/Profile-Settings.png) +![Изображение Explorer 15](/img/Profile-Settings.png) Являясь Вашим официальным порталом в мир децентрализованных данных, Graph Explorer позволяет Вам выполнять самые разные действия, независимо от Вашей роли в сети. Вы можете перейти к настройкам своего профиля, открыв выпадающее меню рядом со своим адресом и нажав кнопку «Настройки». 
diff --git a/website/src/pages/ru/subgraphs/guides/_meta.js b/website/src/pages/ru/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/ru/subgraphs/guides/_meta.js +++ b/website/src/pages/ru/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/ru/subgraphs/guides/arweave.mdx b/website/src/pages/ru/subgraphs/guides/arweave.mdx index 08e6c4257268..800f22842ffe 100644 --- a/website/src/pages/ru/subgraphs/guides/arweave.mdx +++ b/website/src/pages/ru/subgraphs/guides/arweave.mdx @@ -1,111 +1,111 @@ --- -title: Building Subgraphs on Arweave +title: Создание Субграфов на Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! +> Поддержка Arweave в Graph Node и Subgraph Studio находится на стадии бета-тестирования: пожалуйста, обращайтесь к нам в [Discord](https://discord.gg/graphprotocol) с любыми вопросами о создании субграфов Arweave! -In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +Из этого руководства Вы узнаете, как создавать и развертывать субграфы для индексации блокчейна Arweave. -## What is Arweave? +## Что такое Arweave? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +Протокол Arweave позволяет разработчикам хранить данные на постоянной основе, и в этом основное различие между Arweave и IPFS, поскольку в IPFS отсутствует функция постоянства, а файлы, хранящиеся в Arweave, не могут быть изменены или удалены. -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. 
For more information you can check: +Arweave уже создала множество библиотек для интеграции протокола на нескольких различных языках программирования. С дополнительной информацией Вы можете ознакомиться: - [Arwiki](https://arwiki.wiki/#/en/main) -- [Arweave Resources](https://www.arweave.org/build) +- [Ресурсы Arweave](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## Что такое субграфы Arweave? -The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). +The Graph позволяет создавать собственные открытые API, называемые "Субграфами". Субграфы используются для указания индексаторам (операторам серверов), какие данные индексировать на блокчейне и сохранять на их серверах, чтобы Вы могли запрашивать эти данные в любое время используя [GraphQL](https://graphql.org/). -[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. +[Graph Node](https://github.com/graphprotocol/graph-node) теперь может индексировать данные на протоколе Arweave. Текущая интеграция индексирует только Arweave как блокчейн (блоки и транзакции), она еще не индексирует сохраненные файлы. -## Building an Arweave Subgraph +## Построение Субграфа на Arweave -To be able to build and deploy Arweave Subgraphs, you need two packages: +Чтобы иметь возможность создавать и развертывать Субграфы на Arweave, Вам понадобятся два пакета: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. 
`@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` версии выше 0.30.2 — это инструмент командной строки для создания и развертывания субграфов. [Нажмите здесь](https://www.npmjs.com/package/@graphprotocol/graph-cli), чтобы скачать с помощью `npm`. +2. `@graphprotocol/graph-ts` версии выше 0.27.0 — это библиотека типов, специфичных для субграфов. [Нажмите здесь](https://www.npmjs.com/package/@graphprotocol/graph-ts), чтобы скачать с помощью `npm`. -## Subgraph's components +## Составляющие Субграфов -There are three components of a Subgraph: +Существует три компонента субграфа: -### 1. Manifest - `subgraph.yaml` +### 1. Манифест - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +Определяет источники данных, представляющие интерес, и то, как они должны обрабатываться. Arweave - это новый вид источника данных. -### 2. Schema - `schema.graphql` +### 2. Схема - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +Здесь Вы определяете, какие данные хотите иметь возможность запрашивать после индексации своего субграфа с помощью GraphQL. На самом деле это похоже на модель для API, где модель определяет структуру тела запроса. -The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +Требования для субграфов Arweave описаны в [существующей документации](/developing/creating-a-subgraph/#the-graphql-schema). -### 3. AssemblyScript Mappings - `mapping.ts` +### 3. 
Мэппинги на AssemblyScript - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +Это логика, которая определяет, как данные должны извлекаться и храниться, когда кто-то взаимодействует с источниками данных, которые Вы отслеживаете. Данные переводятся и сохраняются в соответствии с указанной Вами схемой. -During Subgraph development there are two key commands: +Во время разработки субграфа есть две ключевые команды: ``` -$ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +$ graph codegen # генерирует типы из файла схемы, указанного в манифесте +$ graph build # генерирует Web Assembly из файлов AssemblyScript и подготавливает все файлы субграфа в папке /build ``` -## Subgraph Manifest Definition +## Определение манифеста субграфа -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: +Манифест субграфа `subgraph.yaml` идентифицирует источники данных для субграфа, триггеры, представляющие интерес, и функции, которые должны быть выполнены в ответ на эти триггеры. 
Ниже приведен пример манифеста для субграфа Arweave: ```yaml specVersion: 1.3.0 description: Arweave Blocks Indexing schema: - file: ./schema.graphql # link to the schema file + file: ./schema.graphql # ссылка на файл схемы dataSources: - kind: arweave name: arweave-blocks - network: arweave-mainnet # The Graph only supports Arweave Mainnet + network: arweave-mainnet # The Graph поддерживает только Arweave Mainnet source: - owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet - startBlock: 0 # set this to 0 to start indexing from chain genesis + owner: 'ID-OF-AN-OWNER' # Открытый ключ кошелька Arweave + startBlock: 0 # установите это значение на 0, чтобы начать индексацию с генезиса чейна mapping: apiVersion: 0.0.9 language: wasm/assemblyscript - file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + file: ./src/blocks.ts # ссылка на файл с мэппингами Assemblyscript entities: - Block - Transaction blockHandlers: - - handler: handleBlock # the function name in the mapping file + - handler: handleBlock # имя функции в файле мэппинга transactionHandlers: - - handler: handleTx # the function name in the mapping file + - handler: handleTx # имя функции в файле мэппинга ``` -- Arweave Subgraphs introduce a new kind of data source (`arweave`) -- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Субграфы Arweave вводят новый тип источника данных (`arweave`) +- Сеть должна соответствовать сети на размещающей Graph Node.
В Subgraph Studio мейннет Arweave обозначается как `arweave-mainnet` +- Источники данных Arweave содержат необязательное поле source.owner, которое является открытым ключом кошелька Arweave -Arweave data sources support two types of handlers: +Источники данных Arweave поддерживают два типа обработчиков: -- `blockHandlers` - Run on every new Arweave block. No source.owner is required. -- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` +- `blockHandlers` — выполняется при каждом новом блоке Arweave. source.owner не требуется. +- `transactionHandlers` — выполняется при каждой транзакции, владельцем которой является `source.owner` источника данных. На данный момент для `transactionHandlers` требуется указать владельца. Если пользователи хотят обрабатывать все транзакции, они должны указать `""` в качестве `source.owner` -> The source.owner can be the owner's address, or their Public Key. +> Source.owner может быть адресом владельца или его Публичным ключом. +> +> Транзакции являются строительными блоками Arweave permaweb, и они представляют собой объекты, созданные конечными пользователями. +> +> Примечание: транзакции [Irys (ранее Bundlr)](https://irys.xyz/) пока не поддерживаются. -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. +## Определение схемы -> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. +Определение схемы описывает структуру итоговой базы данных субграфа и отношения между объектами. Это не зависит от исходного источника данных. Более подробную информацию об определении схемы субграфа можно найти [здесь](/developing/creating-a-subgraph/#the-graphql-schema).
-## Schema Definition +## Мэппинги AssemblyScript -Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Обработчики для обработки событий написаны на [AssemblyScript](https://www.assemblyscript.org/). -## AssemblyScript Mappings - -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). - -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +Индексирование Arweave вводит специфичные для Arweave типы данных в [API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/). ```tsx class Block { @@ -146,51 +146,51 @@ class Transaction { } ``` -Block handlers receive a `Block`, while transactions receive a `Transaction`. +Обработчики блоков получают `Block`, в то время как обработчики транзакций получают `Transaction`. -Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). +Написание мэппингов для субграфа Arweave очень похоже на написание мэппингов для субграфа Ethereum. Для получения дополнительной информации нажмите [сюда](/developing/creating-a-subgraph/#writing-mappings). -## Deploying an Arweave Subgraph in Subgraph Studio +## Развертывание субграфа Arweave в Subgraph Studio -Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +После того как Ваш субграф был создан на панели управления Subgraph Studio, Вы можете развернуть его с помощью команды CLI `graph deploy`.
```bash graph deploy --access-token ``` -## Querying an Arweave Subgraph +## Запрос субграфа Arweave -The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +Конечная точка GraphQL для субграфов Arweave задается определением схемы с использованием существующего интерфейса API. Пожалуйста, посетите [документацию по GraphQL API](/subgraphs/querying/graphql-api/) для получения дополнительной информации. -## Example Subgraphs +## Примеры субграфов -Here is an example Subgraph for reference: +Вот пример субграфа для справки: -- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Пример субграфа для Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) -## FAQ +## Часто задаваемые вопросы -### Can a Subgraph index Arweave and other chains? +### Может ли субграф индексировать данные с Arweave и других чейнов? -No, a Subgraph can only support data sources from one chain/network. +Нет, субграф может поддерживать источники данных только из одного чейна/сети. -### Can I index the stored files on Arweave? +### Могу ли я проиндексировать сохраненные файлы в Arweave? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +В настоящее время The Graph индексирует Arweave только как блокчейн (его блоки и транзакции). -### Can I identify Bundlr bundles in my Subgraph? +### Могу ли я идентифицировать Bundlr-бандлы в своём субграфе? -This is not currently supported. +В настоящее время это не поддерживается. -### How can I filter transactions to a specific account? +### Как я могу отфильтровать транзакции по определенному аккаунту? -The source.owner can be the user's public key or account address.
+Source.owner может быть открытым ключом пользователя или адресом учетной записи. -### What is the current encryption format? +### Каков текущий формат шифрования? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Данные обычно передаются в мэппингах в виде байтов, которые, если сохраняются напрямую, возвращаются в субграфе в формате `hex` (например, хэши блоков и транзакций). Возможно, вам захочется преобразовать их в формат `base64` или `base64 URL`-безопасный в ваших мэппингах, чтобы привести их в соответствие с тем, как они отображаются в эксплорерах блоков, таких как [Arweave Explorer](https://viewblock.io/arweave/). -The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: +Следующая вспомогательная функция `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` может быть использована и будет добавлена в `graph-ts`: ``` const base64Alphabet = [ diff --git a/website/src/pages/ru/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ru/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..ba2416901e38 100644 --- a/website/src/pages/ru/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/ru/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. 
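Упомянутую выше вспомогательную функцию `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` можно набросать на обычном TypeScript примерно так (это иллюстративный набросок, а не официальная реализация из `graph-ts`; конкретные детали — строковые константы алфавитов и удаление padding в URL-safe варианте — предположения автора наброска):

```typescript
// Стандартный и URL-safe алфавиты base64 (RFC 4648)
const base64Alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
const base64UrlAlphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_'

function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
  const alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet
  let result = ''
  // Обрабатываем входные байты группами по 3 (24 бита -> 4 символа base64)
  for (let i = 0; i < bytes.length; i += 3) {
    const b0 = bytes[i]
    const b1 = i + 1 < bytes.length ? bytes[i + 1] : 0
    const b2 = i + 2 < bytes.length ? bytes[i + 2] : 0
    result += alphabet[b0 >> 2]
    result += alphabet[((b0 & 0x03) << 4) | (b1 >> 4)]
    result += i + 1 < bytes.length ? alphabet[((b1 & 0x0f) << 2) | (b2 >> 6)] : '='
    result += i + 2 < bytes.length ? alphabet[b2 & 0x3f] : '='
  }
  // В URL-safe варианте padding обычно опускается
  return urlSafe ? result.replace(/=+$/, '') : result
}

console.log(bytesToBase64(new Uint8Array([77, 97, 110]), false)) // "TWFu"
```

Такой функцией можно, например, преобразовать хэш транзакции из мэппинга в вид, отображаемый в Arweave Explorer.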
+## Обзор -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Предварительные требования + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. 
-List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +или ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? 
Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Заключение -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/ru/subgraphs/guides/enums.mdx b/website/src/pages/ru/subgraphs/guides/enums.mdx index 9f55ae07c54b..1d696e352f9a 100644 --- a/website/src/pages/ru/subgraphs/guides/enums.mdx +++ b/website/src/pages/ru/subgraphs/guides/enums.mdx @@ -1,20 +1,20 @@ --- -title: Categorize NFT Marketplaces Using Enums +title: Категоризация маркетплейсов NFT с использованием Enums (перечислений) --- -Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. +Используйте Enums (перечисления), чтобы сделать Ваш код чище и уменьшить вероятность ошибок. Вот полный пример использования перечислений для маркетплейсов NFT. -## What are Enums? +## Что такое Enums (перечисления)? -Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. +Перечисления (или типы перечислений) — это особый тип данных, который позволяет определить набор конкретных допустимых значений. -### Example of Enums in Your Schema +### Пример использования Enums (перечислений) в Вашей схеме -If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. 
+Если Вы создаете субграф для отслеживания истории владения токенами на маркетплейсе, каждый токен может проходить через разные этапы владения, такие как `OriginalOwner`, `SecondOwner` и `ThirdOwner`. Используя перечисления (enums), Вы можете определить эти конкретные этапы владения, гарантируя, что будут присваиваться только заранее определенные значения. -You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. +Вы можете определить перечисления (enums) в своей схеме, и после их определения Вы можете использовать строковое представление значений перечислений для установки значения поля перечисления в объекты. -Here's what an enum definition might look like in your schema, based on the example above: +Вот как может выглядеть определение перечисления (enum) в Вашей схеме, исходя из приведенного выше примера: ```graphql enum TokenStatus { @@ -24,19 +24,19 @@ enum TokenStatus { } ``` -This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. +Это означает, что когда Вы используете тип `TokenStatus` в своей схеме, Вы ожидаете, что он будет иметь одно из заранее определенных значений: `OriginalOwner` (Первоначальный Владелец), `SecondOwner` (Второй Владелец) или `ThirdOwner` (Третий Владелец), что обеспечивает согласованность и корректность данных. -To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). +Чтобы узнать больше о перечислениях (Enums), ознакомьтесь с разделом [Создание субграфа](/developing/creating-a-subgraph/#enums) и с [документацией GraphQL](https://graphql.org/learn/schema/#enumeration-types).
-## Benefits of Using Enums +## Преимущества использования перечислений (Enums) -- **Clarity:** Enums provide meaningful names for values, making data easier to understand. -- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. -- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. +- **Ясность:** Перечисления предоставляют значимые имена для значений, что делает данные более понятными. +- **Валидация:** Перечисления обеспечивают строгие определения значений, предотвращая ввод недопустимых данных. +- **Поддерживаемость:** Когда Вам нужно изменить или добавить новые категории, перечисления позволяют сделать это целенаправленно и удобно. -### Without Enums +### Без перечислений (Enums) -If you choose to define the type as a string instead of using an Enum, your code might look like this: +Если Вы решите определить тип как строку вместо использования перечисления (Enum), Ваш код может выглядеть следующим образом: ```graphql type Token @entity { @@ -48,85 +48,85 @@ type Token @entity { } ``` -In this schema, `TokenStatus` is a simple string with no specific, allowed values. +В этой схеме `TokenStatus` является простой строкой без конкретных и допустимых значений. -#### Why is this a problem? +#### Почему это является проблемой? -- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. -- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. +- Нет никаких ограничений на значения `TokenStatus`, поэтому любое строковое значение может быть назначено случайно. 
Это усложняет обеспечение того, что устанавливаются только допустимые статусы, такие как `OriginalOwner` (Первоначальный Владелец), `SecondOwner` (Второй Владелец) или `ThirdOwner` (Третий Владелец). +- Легко допустить опечатку, например, `Orgnalowner` вместо `OriginalOwner`, что делает данные и потенциальные запросы ненадежными. -### With Enums +### С перечислениями (Enums) -Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used. +Вместо присвоения строк произвольной формы Вы можете определить перечисление (Enum) для `TokenStatus` с конкретными значениями: `OriginalOwner`, `SecondOwner` или `ThirdOwner`. Использование перечисления гарантирует, что используются только допустимые значения. -Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. +Перечисления обеспечивают безопасность типов, минимизируют риск опечаток и гарантируют согласованные и надежные результаты. -## Defining Enums for NFT Marketplaces +## Определение перечислений (Enums) для Маркетплейсов NFT -> Note: The following guide uses the CryptoCoven NFT smart contract. +> Примечание: Следующее руководство использует смарт-контракт NFT CryptoCoven. 
-To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: +Чтобы определить перечисления (enums) для различных маркетплейсов, где торгуются NFT, используйте следующее в Вашей схеме субграфа: ```gql -# Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) +# Перечисление для маркетплейсов, с которыми взаимодействовал смарт-контракт CryptoCoven (вероятно, торговля или минт) enum Marketplace { - OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the marketplace - OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace - SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace - LooksRare # Represents when a CryptoCoven NFT is traded on the LookRare marketplace - # ...and other marketplaces + OpenSeaV1 # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе OpenSeaV1 + OpenSeaV2 # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе OpenSeaV2 + SeaPort # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе SeaPort + LooksRare # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе LooksRare + # ...и другие маркетплейсы } ``` -## Using Enums for NFT Marketplaces +## Использование перечислений (Enums) для Маркетплейсов NFT -Once defined, enums can be used throughout your Subgraph to categorize transactions or events. +После того как перечисления (enums) определены, их можно использовать по всему Вашему субграфу для категоризации транзакций или событий. -For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. +Например, при регистрации продаж NFT можно указать маркетплейс, на котором произошла сделка, используя перечисление.
-### Implementing a Function for NFT Marketplaces +### Реализация функции для маркетплейсов NFT -Here's how you can implement a function to retrieve the marketplace name from the enum as a string: +Вот как можно реализовать функцию для получения названия маркетплейса из перечисления (enum) в виде строки: ```ts export function getMarketplaceName(marketplace: Marketplace): string { - // Using if-else statements to map the enum value to a string + // Используем операторы if-else для сопоставления значения перечисления со строкой if (marketplace === Marketplace.OpenSeaV1) { - return 'OpenSeaV1' // If the marketplace is OpenSea, return its string representation + return 'OpenSeaV1' // Если маркетплейс OpenSea, возвращаем его строковое представление } else if (marketplace === Marketplace.OpenSeaV2) { return 'OpenSeaV2' } else if (marketplace === Marketplace.SeaPort) { - return 'SeaPort' // If the marketplace is SeaPort, return its string representation + return 'SeaPort' // Если маркетплейс SeaPort, возвращаем его строковое представление } else if (marketplace === Marketplace.LooksRare) { - return 'LooksRare' // If the marketplace is LooksRare, return its string representation - // ... and other market places + return 'LooksRare' // Если маркетплейс LooksRare, возвращаем его строковое представление + // ... и другие маркетплейсы } } ``` -## Best Practices for Using Enums +## Лучшие практики использования перечислений (Enums) -- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability. -- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth. -- **Documentation:** Add comments to enum to clarify their purpose and usage. +- **Согласованность в наименованиях:** Используйте четкие, описательные названия для значений перечислений, чтобы улучшить читаемость кода. 
+- **Централизованное управление:** Храните перечисления в одном файле для обеспечения согласованности. Это облегчает обновление перечислений и гарантирует, что они являются единственным источником достоверной информации. +- **Документация:** Добавляйте комментарии к перечислениям, чтобы прояснить их назначение и использование. -## Using Enums in Queries +## Использование перечислений (Enums) в запросах -Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values. +Перечисления в запросах помогают улучшить качество данных и делают результаты более понятными. Они функционируют как фильтры и элементы ответа, обеспечивая согласованность и уменьшая ошибки в значениях маркетплейса. -**Specifics** +**Особенности** -- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces. -- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate. +- **Фильтрация с помощью перечислений:** Перечисления предоставляют четкие фильтры, позволяя уверенно включать или исключать конкретные маркетплейсы. +- **Перечисления в ответах:** Перечисления гарантируют, что возвращаются только признанные названия маркетплейсов, делая результаты стандартизированными и точными. -### Sample Queries +### Пример запросов -#### Query 1: Account With The Highest NFT Marketplace Interactions +#### Запрос 1: Аккаунт с наибольшим количеством взаимодействий на маркетплейсе NFT -This query does the following: +Этот запрос выполняет следующие действия: -- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity. -- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. 
+- Он находит аккаунт с наибольшим количеством уникальных взаимодействий с маркетплейсами NFT, что полезно для анализа активности на разных маркетплейсах. +- Поле маркетплейсов использует перечисление marketplace, что обеспечивает согласованность и валидацию значений маркетплейсов в ответе. ```gql { @@ -137,15 +137,15 @@ This query does the following: totalSpent uniqueMarketplacesCount marketplaces { - marketplace # This field returns the enum value representing the marketplace + marketplace # Это поле возвращает значение перечисления, представляющее маркетплейс } } } ``` -#### Returns +#### Результаты -This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: +Данный ответ включает информацию об аккаунте и перечень уникальных взаимодействий с маркетплейсом, где используются значения перечислений (enum) для обеспечения единообразной ясности: ```gql { @@ -186,12 +186,12 @@ This response provides account details and a list of unique marketplace interact } ``` -#### Query 2: Most Active Marketplace for CryptoCoven transactions +#### Запрос 2: Наиболее активный маркетплейс для транзакций CryptoCoven -This query does the following: +Этот запрос выполняет следующие действия: -- It identifies the marketplace with the highest volume of CryptoCoven transactions. -- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. +- Он определяет маркетплейс с наибольшим объемом транзакций CryptoCoven. +- Он использует перечисление marketplace, чтобы гарантировать, что в ответе будут только допустимые типы маркетплейсов, что повышает надежность и согласованность ваших данных. 
```gql { @@ -202,9 +202,9 @@ This query does the following: } } ``` -#### Result 2 +#### Результат 2 -The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: +Ожидаемый ответ включает маркетплейс и соответствующее количество транзакций, используя перечисление для указания типа маркетплейса: ```gql { @@ -219,12 +219,12 @@ The expected response includes the marketplace and the corresponding transaction } ``` -#### Query 3: Marketplace Interactions with High Transaction Counts +#### Запрос 3: Взаимодействия на маркетплейсе с высоким количеством транзакций -This query does the following: +Этот запрос выполняет следующие действия: -- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. -- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. +- Он извлекает четыре самых активных маркетплейса с более чем 100 транзакциями, исключая маркетплейсы с типом "Unknown". +- Он использует перечисления в качестве фильтров, чтобы гарантировать, что включены только допустимые типы маркетплейсов, что повышает точность выполнения запроса. ```gql { @@ -240,9 +240,9 @@ This query does the following: } } ``` -#### Result 3 +#### Результат 3 -Expected output includes the marketplaces that meet the criteria, each represented by an enum value: +Ожидаемый вывод включает маркетплейсы, которые соответствуют критериям, каждый из которых представлен значением перечисления: ```gql { @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## Дополнительные ресурсы -For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). +Дополнительную информацию можно найти в [репозитории этого руководства](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/ru/subgraphs/guides/grafting.mdx b/website/src/pages/ru/subgraphs/guides/grafting.mdx index d9abe0e70d2a..6d718b0fa64c 100644 --- a/website/src/pages/ru/subgraphs/guides/grafting.mdx +++ b/website/src/pages/ru/subgraphs/guides/grafting.mdx @@ -1,56 +1,56 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: Замените контракт и сохраните его историю с помощью Grafting --- -In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. +В этом руководстве вы научитесь создавать и развертывать новые субграфы с помощью графтинга существующих субграфов. -## What is Grafting? +## Что такое Grafting? -Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. +Графтинг позволяет повторно использовать данные из существующего субграфа и начать индексирование с более позднего блока. Это полезно в процессе разработки, чтобы быстро обходить простые ошибки в мэппингах или временно восстанавливать работу существующего субграфа после его сбоя. Также это может пригодиться при добавлении новой функции в субграф, которая требует долгого времени на индексирование с нуля. -The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: +Перенесённый субграф может использовать схему GraphQL, которая не идентична схеме базового субграфа, а просто совместима с ней.
Она должна сама по себе быть корректной схемой субграфа, но может отличаться от схемы базового субграфа следующим образом: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Она добавляет или удаляет типы объектов +- Она удаляет атрибуты из типов объектов +- Она добавляет обнуляемые атрибуты к типам объектов +- Она превращает необнуляемые атрибуты в обнуляемые +- Она добавляет значения в перечисления +- Она добавляет или удаляет интерфейсы +- Она изменяет то, для каких типов объектов реализован интерфейс -For more information, you can check: +Для получения дополнительной информации Вы можете обратиться к: -- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) +- [Графтинг](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. +В этом руководстве мы рассмотрим базовый случай. Мы заменим существующий контракт на идентичный контракт (с новым адресом, но с тем же кодом). Затем, с помощью графтинга, мы подключим существующий субграф к "базовому" субграфу, который отслеживает новый контракт. -## Important Note on Grafting When Upgrading to the Network +## Важное примечание о Grafting при обновлении до сети -> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network +> **Предупреждение**: рекомендуется не использовать графтинг для субграфов, опубликованных в сети The Graph -### Why Is This Important? +### Почему это важно?
-Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. +Графтинг — это мощная функция, которая позволяет «приращивать» один субграф к другому, эффективно передавая исторические данные из существующего субграфа в новую версию. Невозможно выполнить графтинг субграфа из сети The Graph обратно в Subgraph Studio. -### Best Practices +### Лучшие практики -**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. +**Первоначальная миграция**: при первом развертывании вашего субграфа в децентрализованной сети, делайте это без графтинга. Убедитесь, что субграф стабилен и работает так, как ожидается. -**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Последующие обновления**: после того, как ваш субграф станет активным и стабильным в децентрализованной сети, вы можете использовать графтинг для будущих версий, чтобы сделать переход более плавным и сохранить исторические данные. -By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +Соблюдая эти рекомендации, Вы минимизируете риски и обеспечите более плавный процесс миграции. -## Building an Existing Subgraph +## Создание существующего субграфа -Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: +Создание субграфов — важная часть работы с The Graph, и об этом рассказывается более подробно [здесь](/subgraphs/quick-start/). 
Чтобы иметь возможность создавать и развертывать существующий субграф, используемый в этом руководстве, был предоставлен следующий репозиторий: -- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) +- [Пример репозитория субграфа](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Примечание: контракт, использованный в субграфе, был взят из следующего [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). -## Subgraph Manifest Definition +## Определение манифеста субграфа -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: +Манифест субграфа `subgraph.yaml` определяет источники данных для субграфа, интересующие триггеры и функции, которые должны быть выполнены в ответ на эти триггеры. Ниже приведён пример манифеста субграфа, который вы будете использовать: ```yaml specVersion: 1.3.0 @@ -79,33 +79,33 @@ dataSources: file: ./src/lock.ts ``` -- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` -- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. +- Источник данных `Lock` — это ABI и адрес контракта, которые мы получим при компиляции и развертывании контракта +- Сеть должна соответствовать индексируемой сети, к которой выполняется запрос. Поскольку мы работаем в тестнете Sepolia, сеть будет `sepolia`. 
+- Раздел `mapping` определяет триггеры, которые представляют интерес, и функции, которые должны быть выполнены в ответ на эти триггеры. В данном случае мы слушаем событие `Withdrawal` и вызываем функцию `handleWithdrawal`, когда оно срабатывает. -## Grafting Manifest Definition +## Определение Манифеста Grafting -Grafting requires adding two new items to the original Subgraph manifest: +Для использования функции графтинга необходимо добавить два новых элемента в исходный манифест субграфа: ```yaml --- features: - - grafting # feature name + - grafting # название функции graft: - base: Qm... # Subgraph ID of base Subgraph - block: 5956000 # block number + base: Qm... # идентификатор базового субграфа + block: 5956000 # номер блока ``` -- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. +- `features:` — это список всех используемых [имен функций](/developing/creating-a-subgraph/#experimental-features). +- `graft:` — это карта, содержащая базовый субграф (`base`) и номер блока (`block`), на который будет выполняться графтинг. Значение `block` указывает, с какого блока начинать индексирование. The Graph скопирует данные базового субграфа вплоть до указанного блока (включительно), а затем продолжит индексировать новый субграф, начиная с этого блока. -The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting +Значения `base` и `block` можно получить, развернув два субграфа: один для базового индексирования, а другой с графтингом -## Deploying the Base Subgraph +## Развертывание базового субграфа -1. 
Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground +1. Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и создайте субграф в тестовой сети Sepolia с названием `graft-example` +2. Следуйте инструкциям в разделе `AUTH & DEPLOY` на странице вашего субграфа в папке `graft-example` из репозитория +3. После завершения убедитесь, что субграф правильно индексируется. Если Вы запустите следующую команду в The Graph Playground ```graphql { @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +Отклик будет подобным этому: ``` { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. +Как только вы убедитесь, что субграф индексируется корректно, вы можете быстро обновить его с помощью графтинга. -## Deploying the Grafting Subgraph +## Развертывание grafting субграфа -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +Замененный subgraph.yaml будет иметь новый адрес контракта. Это может произойти, когда Вы обновите свое децентрализованное приложение, повторно развернете контракт и т. д. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. 
These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground +1. Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и создайте субграф в тестовой сети Sepolia с названием `graft-replacement` +2. Создайте новый манифест. `subgraph.yaml` для `graph-replacement` содержит другой адрес контракта и новую информацию о том, как следует выполнить графтинг. Это `block` последнего [события, сгенерированного](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) старым контрактом, и `base` старого субграфа. Идентификатор субграфа `base` — это `Deployment ID` вашего оригинального `graph-example` субграфа. Вы можете найти его в Subgraph Studio. +3. Следуйте инструкциям в разделе `AUTH & DEPLOY` на странице вашего субграфа в папке `graft-replacement` из репозитория +4. После завершения убедитесь, что субграф правильно индексируется. Если Вы запустите следующую команду в The Graph Playground ```graphql { @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +Это должно привести к следующему результату: ``` { @@ -185,18 +185,18 @@ It should return the following: } ``` -You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address.
The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. +Вы можете увидеть, что субграф `graft-replacement` индексирует данные из старого субграфа `graph-example` и новые данные с нового адреса контракта. Оригинальный контракт сгенерировал два события `Withdrawal`, [Событие 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) и [Событие 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). Новый контракт сгенерировал одно событие `Withdrawal`, [Событие 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). Две ранее проиндексированные транзакции (События 1 и 2) и новая транзакция (Событие 3) были объединены в субграфе `graft-replacement`. -Congrats! You have successfully grafted a Subgraph onto another Subgraph. +Поздравляем! Вы успешно перенесли один субграф в другой.
-## Additional Resources +## Дополнительные ресурсы -If you want more experience with grafting, here are a few examples for popular contracts: +Если Вы хотите получить больше опыта в графтинге (переносе), вот несколько примеров популярных контрактов: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) - [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), -To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results +Чтобы стать еще большим экспертом в области Graph, рассмотрите возможность изучения других способов обработки изменений в исходных данных. Альтернативы, такие как [Шаблоны источников данных](/developing/creating-a-subgraph/#data-source-templates), могут привести к аналогичным результатам -> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) +> Примечание: Многие материалы из этой статьи были взяты из ранее опубликованной статьи об [Arweave](/subgraphs/cookbook/arweave/) diff --git a/website/src/pages/ru/subgraphs/guides/near.mdx b/website/src/pages/ru/subgraphs/guides/near.mdx index e78a69eb7fa2..a71aee9acdca 100644 --- a/website/src/pages/ru/subgraphs/guides/near.mdx +++ b/website/src/pages/ru/subgraphs/guides/near.mdx @@ -1,79 +1,79 @@ --- -title: Building Subgraphs on NEAR +title: Создание субграфов на NEAR --- -This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). 
+Это руководство является введением в создание субграфов для индексирования смарт-контрактов на блокчейне [NEAR](https://docs.near.org/). -## What is NEAR? +## Что такое NEAR? -[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. +[NEAR](https://near.org/) — это платформа для смарт-контрактов, предназначенная для создания децентрализованных приложений. Для получения дополнительной информации ознакомьтесь с [официальной документацией](https://docs.near.org/concepts/basics/protocol). -## What are NEAR Subgraphs? +## Что такое субграфы NEAR? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. +The Graph предоставляет разработчикам инструменты для обработки событий блокчейна и предоставления полученных данных через API GraphQL, который называется субграфом. [Graph Node](https://github.com/graphprotocol/graph-node) теперь может обрабатывать события NEAR, что означает, что разработчики на платформе NEAR могут создавать субграфы для индексирования своих смарт-контрактов. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: +Субграфы основаны на событиях, что означает, что они слушают и затем обрабатывают события с блокчейна. 
В настоящее время для субграфов NEAR поддерживаются два типа обработчиков: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- Обработчики блоков: они запускаются для каждого нового блока +- Обработчики поступлений: запускаются каждый раз, когда сообщение выполняется в указанной учетной записи -[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): +[Из документации NEAR](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> Поступление - это единственный объект, к которому можно применить действие в системе. Когда мы говорим об "обработке транзакции" на платформе NEAR, это в конечном итоге означает "применение поступлений" в какой-то момент. -## Building a NEAR Subgraph +## Создание NEAR субграфа -`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. +`@graphprotocol/graph-cli` — это инструмент командной строки для создания и развертывания субграфов. -`@graphprotocol/graph-ts` is a library of Subgraph-specific types. +`@graphprotocol/graph-ts` — это библиотека типов, специфичных для субграфов. -NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +Для разработки субграфа NEAR требуется версия `graph-cli` выше `0.23.0` и версия `graph-ts` выше `0.23.0`. -> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. +> Создание субграфа NEAR очень похоже на создание субграфа, индексирующего Ethereum. -There are three aspects of Subgraph definition: +Существует три аспекта определения субграфов: -**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. 
NEAR is a new `kind` of data source. +**subgraph.yaml**: манифест субграфа, который определяет интересующие источники данных и то, как они должны обрабатываться. NEAR является новым `kind` (типом) источника данных. -**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql**: файл схемы, который определяет, какие данные хранятся для вашего субграфа, и как их можно запрашивать через GraphQL. Требования для субграфов NEAR описаны в [существующей документации](/developing/creating-a-subgraph/#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**Мэппинги на AssemblyScript:** [код на AssemblyScript](/subgraphs/developing/creating/graph-ts/api/), который преобразует данные событий в элементы, определенные в Вашей схеме. Поддержка NEAR вводит специфичные для NEAR типы данных и новую функциональность для парсинга JSON. 
-During Subgraph development there are two key commands: +Во время разработки субграфа есть две ключевые команды: ```bash -$ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +$ graph codegen # генерирует типы из файла схемы, указанного в манифесте +$ graph build # генерирует Web Assembly из файлов AssemblyScript и подготавливает все файлы субграфа в папке /build ``` -### Subgraph Manifest Definition +### Определение манифеста субграфа -The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: +Манифест субграфа (`subgraph.yaml`) идентифицирует источники данных для субграфа, триггеры интересующих событий и функции, которые должны быть выполнены в ответ на эти триггеры. 
Ниже приведен пример манифеста субграфа для субграфа NEAR: ```yaml specVersion: 1.3.0 schema: - file: ./src/schema.graphql # link to the schema file + file: ./src/schema.graphql # ссылка на файл схемы dataSources: - kind: near network: near-mainnet source: - account: app.good-morning.near # This data source will monitor this account + account: app.good-morning.near # Этот источник данных будет контролировать эту учетную запись startBlock: 10662188 # Required for NEAR mapping: apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - - handler: handleNewBlock # the function name in the mapping file + - handler: handleNewBlock # имя функции в файле мэппинга receiptHandlers: - - handler: handleReceipt # the function name in the mapping file - file: ./src/mapping.ts # link to the file with the Assemblyscript mappings + - handler: handleReceipt # имя функции в файле мэппинга + file: ./src/mapping.ts # ссылка на файл с мэппингами Assemblyscript ``` -- NEAR Subgraphs introduce a new `kind` of data source (`near`) -- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` -- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. -- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. +- Субграфы NEAR вводят новый тип источника данных (`near`). +- `network` должен соответствовать сети на хостинговой Graph Node.
В Subgraph Studio мейннет NEAR называется `near-mainnet`, а тестнет NEAR — `near-testnet` +- Источники данных NEAR содержат необязательное поле `source.account`, которое представляет собой удобочитаемый идентификатор, соответствующий [учетной записи NEAR](https://docs.near.org/concepts/protocol/account-model). Это может быть как основной аккаунт, так и суб-аккаунт. +- Источники данных NEAR вводят альтернативное необязательное поле `source.accounts`, которое содержит необязательные префиксы и суффиксы. Необходимо указать хотя бы один префикс или суффикс, они будут соответствовать любому аккаунту, начинающемуся или заканчивающемуся на значения из списка соответственно. Приведенный ниже пример будет совпадать с: `[app|good].*[morning.near|morning.testnet]`. Если необходим только список префиксов или суффиксов, другое поле можно опустить. ```yaml accounts: @@ -85,20 +85,20 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +Источники данных NEAR поддерживают два типа обработчиков: -- `blockHandlers`: run on every new NEAR block. No `source.account` is required. -- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). +- `blockHandlers`: выполняется для каждого нового блока NEAR. `source.account` не требуется. +- `receiptHandlers`: выполняется для каждого поступления, где `source.account` источника данных является получателем. Обратите внимание, что обрабатываются только точные совпадения ([субаккаунты](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) должны быть добавлены как независимые источники данных).
-### Schema Definition +### Определение схемы -Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Определение схемы описывает структуру базы данных субграфа и отношения между объектами. Это не зависит от исходного источника данных. Подробнее об определении схемы субграфа можно прочитать [здесь](/developing/creating-a-subgraph/#the-graphql-schema). -### AssemblyScript Mappings +### Мэппинги AssemblyScript -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). +Обработчики для обработки событий написаны на [AssemblyScript](https://www.assemblyscript.org/). -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +Индексирование NEAR вводит специфичные для NEAR типы данных в [API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/). ```typescript @@ -125,7 +125,7 @@ class ActionReceipt { class BlockHeader { height: u64, - prevHeight: u64,// Always zero when version < V3 + prevHeight: u64,// Всегда 0 для версии < V3 epochId: Bytes, nextEpochId: Bytes, chunksIncluded: u64, @@ -160,36 +160,36 @@ class ReceiptWithOutcome { } ``` -These types are passed to block & receipt handlers: +Эти типы передаются в обработчики блоков и поступлений: -- Block handlers will receive a `Block` -- Receipt handlers will receive a `ReceiptWithOutcome` +- Обработчики блоков получат `Block` +- Обработчики поступлений получат `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. 
+В противном случае, остальная часть [API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/) доступна разработчикам субграфов NEAR во время выполнения мэппинга. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. +Это включает в себя новую функцию для парсинга JSON — логи в NEAR часто выводятся как строковые JSON. Новая функция `json.fromString(...)` доступна в рамках [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api), что позволяет разработчикам легко обрабатывать эти логи. -## Deploying a NEAR Subgraph +## Развертывание NEAR субграфа -Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +После того как вы построите субграф, пришло время развернуть его на Graph Node для индексирования. Субграфы NEAR можно развернуть на любой Graph Node версии `>=v0.26.x` (эта версия еще не была отмечена и выпущена). -Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: +Subgraph Studio и Индексатор обновлений в The Graph Network в настоящее время поддерживают индексирование основной и тестовой сети NEAR в бета-версии со следующими именами сетей: - `near-mainnet` - `near-testnet` -More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +Более подробную информацию о создании и развертывании субграфа в Subgraph Studio можно найти [здесь](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. 
On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". +Как быстрый вводный шаг — первым делом нужно "создать" ваш субграф — это нужно сделать только один раз. В Subgraph Studio это можно сделать через [вашу панель управления](https://thegraph.com/studio/): "Создать субграф". -Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: +Как только ваш субграф будет создан, вы можете развернуть его, используя команду CLI `graph deploy`: ```sh -$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # создает субграф на локальной Graph Node (в Subgraph Studio это делается через UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # загружает файлы сборки на указанную конечную точку IPFS и затем развертывает субграф на указанном Graph Node, основываясь на IPFS-хэше манифеста ``` -The node configuration will depend on where the Subgraph is being deployed. +Конфигурация ноды будет зависеть от того, где развертывается субграф. ### Subgraph Studio @@ -198,13 +198,13 @@ graph auth graph deploy ``` -### Local Graph Node (based on default configuration) +### Локальная Graph Node (на основе конфигурации по умолчанию) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: +После того как ваш субграф был развернут, он будет индексироваться Graph Node. Вы можете проверить его прогресс, сделав запрос к самому субграфу: ```graphql { @@ -216,45 +216,45 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can } ``` -### Indexing NEAR with a Local Graph Node +### Индексирование NEAR с помощью локальной Graph Node -Running a Graph Node that indexes NEAR has the following operational requirements: +Запуск Graph Node, который индексирует NEAR, имеет следующие эксплуатационные требования: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- Фреймворк NEAR Indexer с инструментарием Firehose +- Компонент(ы) NEAR Firehose +- Graph Node с настроенным эндпоинтом Firehose -We will provide more information on running the above components soon. +В ближайшее время мы предоставим более подробную информацию о запуске вышеуказанных компонентов. -## Querying a NEAR Subgraph +## Запрос NEAR субграфа -The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +Конечная точка GraphQL для субграфов NEAR определяется в соответствии с определением схемы и существующим интерфейсом API. Для получения дополнительной информации изучите [документацию по GraphQL API](/subgraphs/querying/graphql-api/). -## Example Subgraphs +## Примеры субграфов -Here are some example Subgraphs for reference: +Вот несколько примеров субграфов для справки: -[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) +[Блоки NEAR](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) -[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) +[Подтверждения NEAR](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) -## FAQ +## Часто задаваемые вопросы -### How does the beta work? +### Как работает бета-версия? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. 
Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! +Поддержка NEAR находится на стадии бета-тестирования, а это значит, что API может меняться по мере того, как мы улучшаем интеграцию. Пожалуйста, отправьте письмо на адрес near@thegraph.com, чтобы мы могли помочь вам в создании субграфов NEAR и держать вас в курсе последних обновлений! -### Can a Subgraph index both NEAR and EVM chains? +### Может ли субграф индексировать чейны NEAR и EVM? -No, a Subgraph can only support data sources from one chain/network. +Нет, субграф может поддерживать источники данных только из одного чейна/сети. -### Can Subgraphs react to more specific triggers? +### Могут ли субграфы реагировать на более конкретные триггеры? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +В настоящее время поддерживаются только триггеры Block и Receipt. Мы исследуем триггеры для вызовов функций к указанной учетной записи. Мы также заинтересованы в поддержке триггеров событий, как только у NEAR появится собственная поддержка событий. -### Will receipt handlers trigger for accounts and their sub-accounts? +### Будут ли срабатывать обработчики поступлений для учетных записей и их дочерних учетных записей? -If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: +Если указано `account`, это будет соответствовать только точному имени аккаунта. Для того чтобы соответствовать субаккаунтам, можно указать поле `accounts`, с `suffixes` и `prefixes`, которые будут соответствовать аккаунтам и субаккаунтам.
Например, следующее выражение будет соответствовать всем субаккаунтам `mintbase1.near`: ```yaml accounts: @@ -262,22 +262,22 @@ accounts: - mintbase1.near ``` -### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? +### Могут ли субграфы NEAR выполнять view-вызовы к аккаунтам NEAR во время мэппингов? -This is not supported. We are evaluating whether this functionality is required for indexing. +Это не поддерживается. Мы оцениваем, требуется ли этот функционал для индексирования. -### Can I use data source templates in my NEAR Subgraph? +### Могу ли я использовать шаблоны источников данных в своем субграфе NEAR? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +В настоящее время это не поддерживается. Мы оцениваем, требуется ли этот функционал для индексирования. -### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? +### Субграфы Ethereum поддерживают «ожидающие» и «текущие» версии. Как я могу развернуть «ожидающую» версию субграфа NEAR? -Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. +Функциональность «ожидающих» версий пока не поддерживается для субграфов NEAR. В промежуточный период вы можете развернуть новую версию на другом «именованном» субграфе, а затем, когда она будет синхронизирована с головой чейна, вы можете повторно развернуть ее на своем основном «именованном» субграфе, который будет использовать тот же самый идентификатор развертывания, так что основной субграф будет мгновенно синхронизирован. -### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+### На мой вопрос нет ответа, где я могу получить дополнительную помощь в создании субграфов NEAR? -If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +Если это общий вопрос о разработке субграфов, дополнительную информацию можно найти в остальной части [документации для разработчиков](/subgraphs/quick-start/). В других случаях присоединяйтесь к [Discord-каналу The Graph Protocol](https://discord.gg/graphprotocol) и задавайте вопросы в канале #near или отправьте email на адрес near@thegraph.com. -## References +## Ссылки -- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) +- [Документация для разработчиков NEAR](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/ru/subgraphs/guides/polymarket.mdx b/website/src/pages/ru/subgraphs/guides/polymarket.mdx index 74efe387b0d7..f10bf31617c1 100644 --- a/website/src/pages/ru/subgraphs/guides/polymarket.mdx +++ b/website/src/pages/ru/subgraphs/guides/polymarket.mdx @@ -1,23 +1,23 @@ --- -title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph -sidebarTitle: Query Polymarket Data +title: Запрос данных блокчейна из Polymarket с субграфами на The Graph +sidebarTitle: Запрос данных Polymarket --- -Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Запрашивайте ончейн-данные Polymarket с помощью GraphQL через субграфы в The Graph Network. Субграфы — это децентрализованные API, работающие на основе The Graph, протокола для индексирования и запросов данных из блокчейнов. 
-## Polymarket Subgraph on Graph Explorer +## Субграф Polymarket в Graph Explorer -You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +Вы можете увидеть интерактивную площадку для запросов на [странице субграфа Polymarket в The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), где можно протестировать любые запросы. -![Polymarket Playground](/img/Polymarket-playground.png) +![Площадка Polymarket](/img/Polymarket-playground.png) -## How to use the Visual Query Editor +## Как пользоваться визуальным редактором запросов -The visual query editor helps you test sample queries from your Subgraph. +Визуальный редактор запросов помогает тестировать примерные запросы из Вашего субграфа. -You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. +Вы можете использовать GraphiQL Explorer для составления запросов GraphQL, нажимая на нужные поля. -### Example Query: Get the top 5 highest payouts from Polymarket +### Пример запроса: получите 5 самых высоких выплат от Polymarket ``` { @@ -30,7 +30,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on } ``` -### Example output +### Пример вывода ``` { @@ -71,41 +71,41 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on } ``` -## Polymarket's GraphQL Schema +## Схема GraphQL Polymarket -The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +Схема для этого субграфа определена [здесь, в GitHub Polymarket](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
-### Polymarket Subgraph Endpoint +### Конечная точка субграфа Polymarket https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp -The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). +Конечная точка субграфа Polymarket доступна в [Graph Explorer](https://thegraph.com/explorer). -![Polymarket Endpoint](/img/Polymarket-endpoint.png) +![Конечная точка Polymarket](/img/Polymarket-endpoint.png) -## How to Get your own API Key +## Как получить свой собственный ключ API -1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet -2. Go to https://thegraph.com/studio/apikeys/ to create an API key +1. Перейдите на [https://thegraph.com/studio](http://thegraph.com/studio) и подключите свой кошелек +2. Перейдите по ссылке https://thegraph.com/studio/apikeys/, чтобы создать ключ API -You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +Вы можете использовать этот API-ключ в любом субграфе в [Graph Explorer](https://thegraph.com/explorer), и он не ограничивается только Polymarket. -100k queries per month are free which is perfect for your side project! +100 тыс. запросов в месяц бесплатны, что идеально подходит для Вашего стороннего проекта! 
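Подставить полученный ключ в конечную точку шлюза можно, например, так. Это набросок, а не фрагмент документации: используется глобальный `fetch` из Node.js 18+, а служебный запрос `_meta` доступен в любом субграфе; имена `buildRequest` и `querySubgraph` условные:

```javascript
// Набросок: подстановка API-ключа в URL шлюза и отправка произвольного
// GraphQL-запроса. Идентификатор субграфа — Polymarket Activity (см. выше).
const ENDPOINT =
  'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'

// Чистая функция: собирает URL и параметры POST-запроса.
function buildRequest(apiKey, query) {
  return {
    url: ENDPOINT.replace('{api-key}', apiKey),
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    },
  }
}

// Выполняет запрос (Node.js 18+ с глобальным fetch).
async function querySubgraph(apiKey, query) {
  const { url, options } = buildRequest(apiKey, query)
  const res = await fetch(url, options)
  if (!res.ok) throw new Error(`HTTP ${res.status}`)
  return (await res.json()).data
}
```

Например, `querySubgraph(apiKey, '{ _meta { block { number } } }')` должен вернуть номер последнего проиндексированного блока.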
-## Additional Polymarket Subgraphs +## Дополнительные субграфы Polymarket - [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one) -- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) -- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one) -- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one) +- [Активность Polymarket в Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) +- [Прибыль и убыток Polymarket](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one) +- [Открытый интерес Polymarket](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one) -## How to Query with the API +## Как делать запросы с помощью API -You can pass any GraphQL query to the Polymarket endpoint and receive data in json format. +Вы можете передать любой запрос GraphQL в конечную точку Polymarket и получить данные в формате JSON. -This following code example will return the exact same output as above. +Следующий пример кода вернет тот же результат, что и выше. -### Sample Code from node.js +### Пример кода на Node.js ``` const axios = require('axios'); @@ -141,8 +141,8 @@ axios(graphQLRequest) }); ``` -### Additional resources +### Дополнительные ресурсы -For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). +Для получения дополнительной информации о запросе данных из Вашего субграфа читайте [здесь](/subgraphs/querying/introduction/).
-To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +Чтобы изучить все способы оптимизации и настройки Вашего субграфа для повышения производительности, прочитайте больше о [создании субграфа здесь](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ru/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ru/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..3b61c71f3c74 100644 --- a/website/src/pages/ru/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/ru/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -1,48 +1,48 @@ --- -title: How to Secure API Keys Using Next.js Server Components +title: Как обезопасить API-ключи с использованием серверных компонентов Next.js --- -## Overview +## Обзор -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +Мы можем использовать [серверные компоненты Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components), чтобы надёжно защитить наш API-ключ от утечки на стороне фронтенда в нашем dApp. Для дополнительной безопасности API-ключа мы также можем [ограничить его использование определёнными субграфами или доменами в Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. +В этом руководстве мы рассмотрим, как создать серверный компонент Next.js, который выполняет запрос к субграфу, скрывая API-ключ от фронтенда. 
-### Caveats +### Предостережения -- Next.js server components do not protect API keys from being drained using denial of service attacks. -- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. -- Next.js server components introduce centralization risks as the server can go down. +- Серверные компоненты Next.js не защищают API-ключи от исчерпания лимита запросов при атаках типа "отказ в обслуживании". +- Шлюзы The Graph Network имеют стратегии обнаружения и смягчения атак типа "отказ в обслуживании", однако использование серверных компонентов может ослабить эти защиты. +- Серверные компоненты Next.js вносят риски централизации, так как сервер может выйти из строя. -### Why It's Needed +### Почему это необходимо -In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. +В стандартном React-приложении API-ключи, включённые в код внешнего интерфейса, могут быть раскрыты на стороне клиента, что создает угрозу безопасности. Хотя обычно используются файлы `.env`, они не обеспечивают полной защиты ключей, так как код React выполняется на стороне клиента, раскрывая API-ключ в заголовках. Серверные компоненты Next.js решают эту проблему, обрабатывая конфиденциальные операции на сервере.
-### Using client-side rendering to query a Subgraph +### Использование рендеринга на стороне клиента для выполнения запроса к субграфу -![Client-side rendering](/img/api-key-client-side-rendering.png) +![Рендеринг на клиентской стороне](/img/api-key-client-side-rendering.png) -### Prerequisites +### Предварительные требования -- An API key from [Subgraph Studio](https://thegraph.com/studio) -- Basic knowledge of Next.js and React. -- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). +- API-ключ от [Subgraph Studio](https://thegraph.com/studio) +- Базовые знания о Next.js и React. +- Существующий проект Next.js, который использует [App Router](https://nextjs.org/docs/app). -## Step-by-Step Cookbook +## Пошаговое руководство -### Step 1: Set Up Environment Variables +### Шаг 1: Настройка переменных среды -1. In our Next.js project root, create a `.env.local` file. -2. Add our API key: `API_KEY=`. +1. В корневой папке нашего проекта Next.js создайте файл `.env.local`. +2. Добавьте наш API-ключ: `API_KEY=`. -### Step 2: Create a Server Component +### Шаг 2: Создание серверного компонента -1. In our `components` directory, create a new file, `ServerComponent.js`. -2. Use the provided example code to set up the server component. +1. В директории `components` создайте новый файл `ServerComponent.js`. +2. Используйте приведённый пример кода для настройки серверного компонента. -### Step 3: Implement Server-Side API Request +### Шаг 3: Реализация API-запроса на стороне сервера -In `ServerComponent.js`, add the following code: +В файл `ServerComponent.js` добавьте следующий код: ```javascript const API_KEY = process.env.API_KEY @@ -95,10 +95,10 @@ export default async function ServerComponent() { } ``` -### Step 4: Use the Server Component +### Шаг 4: Использование серверного компонента -1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. -2. Render the component: +1. 
В файл страницы (например, `pages/index.js`) импортируйте `ServerComponent`. +2. Отрендерите компонент: ```javascript import ServerComponent from './components/ServerComponent' @@ -112,12 +112,12 @@ export default function Home() { } ``` -### Step 5: Run and Test Our Dapp +### Шаг 5: Запуск и тестирование нашего децентрализованного приложения (Dapp) -Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. +Запустите наше приложение Next.js с помощью команды `npm run dev`. Убедитесь, что серверный компонент запрашивает данные, не раскрывая API-ключ. -![Server-side rendering](/img/api-key-server-side-rendering.png) +![Рендеринг на стороне сервера](/img/api-key-server-side-rendering.png) -### Conclusion +### Заключение -By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. +Используя серверные компоненты Next.js, мы эффективно скрыли ключ API от клиентской стороны, улучшив безопасность нашего приложения. Этот метод гарантирует, что чувствительные операции обрабатываются на сервере, вдали от потенциальных уязвимостей на стороне клиента. В заключение, не забудьте ознакомиться с [другими мерами безопасности для ключей API](/subgraphs/querying/managing-api-keys/), чтобы повысить уровень безопасности своего ключа API. 
diff --git a/website/src/pages/ru/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ru/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..5c3b634a5620 --- /dev/null +++ b/website/src/pages/ru/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Агрегируйте данные с помощью композиции субграфов +sidebarTitle: Создайте композиционный субграф с несколькими субграфами +--- + +Используйте композицию субграфов для ускорения разработки. Создайте базовый субграф с основными данными, а затем разрабатывайте дополнительные субграфы на его основе. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Введение + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Преимущества композиции + +Композиция субграфов — это мощная функция для масштабирования, позволяющая вам: + +- Повторно использовать, смешивать и комбинировать существующие данные +- Оптимизировать разработку и запросы +- Использовать несколько источников данных (до пяти исходных субграфов) +- Ускорить синхронизацию вашего субграфа +- Обрабатывать ошибки и оптимизировать повторную синхронизацию + +## Обзор архитектуры + +Настройка для этого примера включает два субграфа: + +1. **Исходный субграф**: отслеживает данные событий как объекты. +2. **Зависимый субграф**: использует исходный субграф в качестве источника данных. + +Вы можете найти их в директориях `source` и `dependent`. + +- **Исходный субграф** — это базовый субграф для отслеживания событий, который записывает события, генерируемые соответствующими контрактами. 
+- **Зависимый субграф** ссылается на исходный субграф как на источник данных, используя объекты из источника в качестве триггеров. + +В то время как исходный субграф является стандартным субграфом, зависимый субграф использует функцию композиции субграфов. + +## Предварительные требования + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs. + +## Начнем + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
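Для ориентира — примерный вид источника данных типа `subgraph` в манифесте композиционного субграфа. Это лишь набросок: имена, идентификатор развертывания и номера версий здесь условные, точный синтаксис смотрите в репозитории [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph):

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # источником данных выступает другой субграф, а не контракт
    name: BlockTime # условное имя исходного субграфа
    network: mainnet
    source:
      address: 'Qm...' # идентификатор развертывания исходного субграфа
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - BlockStat
      handlers:
        - handler: handleBlock # условный обработчик
          entity: Block # объект исходного субграфа, служащий триггером
```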
+ +### Специфические особенности + +- Чтобы сделать этот пример простым, все исходные субграфы используют только блок-обработчики. Однако в реальной среде каждый исходный субграф будет использовать данные из разных смарт-контрактов. +- Приведенные ниже примеры показывают, как импортировать и расширять схему другого субграфа для улучшения его функциональности. +- Каждый исходный субграф оптимизирован для работы с конкретным объектом. +- Все перечисленные команды устанавливают необходимые зависимости, генерируют код на основе схемы GraphQL, строят субграф и деплоят его на ваш локальный экземпляр Graph Node. + +### Шаг 1. Развертывание исходного субграфа для времени блока + +Этот первый исходный субграф вычисляет время блока для каждого блока. + +- Он импортирует схемы из других субграфов и добавляет объект `block` с полем `timestamp`, представляющим время, когда был добыт каждый блок. +- Он слушает события блокчейна, связанные со временем (например, метки времени блоков), и обрабатывает эти данные для обновления объектов субграфа соответствующим образом. + +Чтобы развернуть этот субграф локально, выполните следующие команды: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Шаг 2. Развертывание исходного субграфа для стоимости блока + +Этот второй исходный субграф индексирует стоимость каждого блока. + +#### Ключевые функции + +- Он импортирует схемы из других субграфов и добавляет объект `block` с полями, связанными со стоимостью. +- Он отслеживает события блокчейна, связанные с затратами (например, плата за газ, стоимость транзакций), и обрабатывает эти данные для соответствующего обновления объектов субграфа. + +Чтобы развернуть этот субграф локально, выполните те же команды, что и выше. + +### Шаг 3. Определите размер блока в исходном субграфе + +Этот третий исходный субграф индексирует размер каждого блока. Чтобы развернуть этот субграф локально, выполните те же команды, что и выше.
+ +#### Ключевые функции + +- Он импортирует существующие схемы из других субграфов и добавляет объект `block` с полем `size`, представляющим размер каждого блока. +- Он слушает события блокчейна, связанные с размерами блоков (например, хранение данных или объем), и обрабатывает эти данные для обновления объектов субграфа соответствующим образом. + +### Шаг 4. Объединение в субграфе для статистики блоков + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Примечание: +> +> - Любое изменение в исходном субграфе, скорее всего, приведет к созданию нового идентификатора развертывания. +> - Обязательно обновите идентификатор развертывания в адресе источника данных в манифесте субграфа, чтобы воспользоваться последними изменениями. +> - Все исходные субграфы должны быть развернуты до того, как будет развернут композиционный субграф. + +#### Ключевые функции + +- Он предоставляет объединенную модель данных, которая охватывает все соответствующие метрики блоков. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Основные выводы + +- Этот мощный инструмент масштабирует разработку ваших субграфов и позволяет комбинировать несколько субграфов. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- Эта функция открывает возможности масштабируемости, упрощая разработку и повышая эффективность обслуживания. + +## Дополнительные ресурсы + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- Чтобы добавить продвинутые функции в ваш субграф, ознакомьтесь с [продвинутыми функциями субграфа](/developing/creating/advanced/).
+- Чтобы узнать больше об агрегациях, ознакомьтесь с разделом [Временные ряды и агрегации](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/ru/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ru/subgraphs/guides/subgraph-debug-forking.mdx
index 91aa7484d2ec..fcc064d4190f 100644
--- a/website/src/pages/ru/subgraphs/guides/subgraph-debug-forking.mdx
+++ b/website/src/pages/ru/subgraphs/guides/subgraph-debug-forking.mdx
@@ -1,26 +1,26 @@
---
-title: Quick and Easy Subgraph Debugging Using Forks
+title: Быстрая и простая отладка субграфа с использованием форков
---
-As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
+Как и многим системам, обрабатывающим большие объемы данных, Индексаторам The Graph (Graph Nodes) может потребоваться значительное время для синхронизации вашего субграфа с целевым блокчейном. Несоответствие между быстрыми изменениями, необходимыми для отладки, и длительным временем ожидания индексирования крайне контрпродуктивно, и мы хорошо осведомлены об этом. Именно поэтому мы представляем **форкинг субграфа**, разработанный [LimeChain](https://limechain.tech/), и в этой статье я покажу, как эта функция может существенно ускорить процесс отладки субграфов!
-## Ok, what is it?
+## Итак, что это?
-**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
+**Форкинг субграфа** — это процесс ленивой загрузки объектов из хранилища _другого_ субграфа (обычно удаленного).
-In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
+В контексте отладки **форкинг субграфа** позволяет вам отлаживать ваш неудавшийся субграф на блоке _X_ без необходимости ждать синхронизации до блока _X_.
-## What?! How?
+## Что?! Как?
-When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+Когда вы развертываете субграф на удалённой Graph Node для индексирования и он даёт сбой на блоке _X_, хорошая новость заключается в том, что Graph Node всё равно будет обслуживать GraphQL-запросы, используя своё хранилище, синхронизированное с блоком _X_. Это замечательно! Это означает, что мы можем воспользоваться этим "актуальным" хранилищем, чтобы исправить ошибки, возникающие при индексировании блока _X_.
-In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+Короче говоря, мы собираемся _форкать неудавшийся субграф_ с удалённой Graph Node, которая гарантированно имеет субграф, проиндексированный до блока _X_, чтобы предоставить локально развернутому субграфу, который отлаживается на блоке _X_, актуальное представление о состоянии индексирования.
-## Please, show me some code!
+## Пожалуйста, покажите мне какой-нибудь код!
-To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+Чтобы сосредоточиться на отладке субграфа, давайте сделаем всё проще и используем [пример субграфа](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar), индексирующий смарт-контракт Ethereum Gravity.
-Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+Вот обработчики, определённые для индексирования `Gravatar`, без каких-либо ошибок:
```tsx
export function handleNewGravatar(event: NewGravatar): void {
@@ -44,43 +44,43 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
}
```
-Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+Ой, как неудачно! Когда я деплою мой идеально выглядящий субграф в [Subgraph Studio](https://thegraph.com/studio/), он терпит неудачу с ошибкой _"Gravatar not found!"_.
-The usual way to attempt a fix is:
+Обычный способ попытаться исправить это:
-1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
-2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
-3. Wait for it to sync-up.
-4. If it breaks again go back to 1, otherwise: Hooray!
+1. Внести изменения в источник мэппингов, которые, по Вашему мнению, решат проблему (в то время как я знаю, что это не так).
+2. Повторно развернуть субграф в [Subgraph Studio](https://thegraph.com/studio/) (или на другой удалённой Graph Node).
+3. Подождать, пока он синхронизируется.
+4. Если он снова сломается, вернуться к пункту 1, в противном случае: Ура!
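Ошибки вида "not found" в обработчиках часто возникают из-за того, что один обработчик сохраняет объект под одним представлением `id`, а другой ищет его под другим. Набросок на TypeScript, иллюстрирующий эффект (значения и структура условны):

```typescript
// Имитация хранилища объектов субграфа: ключ — строковый id.
const idStore = new Map<string, { id: string }>();

const rawId = 42; // условное числовое значение id из события

// Обработчик A сохраняет объект под hex-представлением id.
const hexId = "0x" + rawId.toString(16); // "0x2a"
idStore.set(hexId, { id: hexId });

// Обработчик B ищет тот же объект под десятичным представлением.
const intId = rawId.toString(10); // "42"
console.log(idStore.get(intId)); // undefined: объект "не найден", хотя он существует
console.log(idStore.get(hexId)); // { id: "0x2a" }
```

Достаточно привести оба обработчика к одному представлению `id`, чтобы поиск снова находил объект.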
-It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
+Действительно, это похоже на обычный процесс отладки, но есть один шаг, который ужасно замедляет процесс: _3. Подождать, пока он синхронизируется._
-Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+Используя **форкинг субграфа**, мы можем фактически исключить этот шаг. Вот как это выглядит:
-0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
-1. Make a change in the mappings source, which you believe will solve the issue.
-2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
-3. If it breaks again, go back to 1, otherwise: Hooray!
+0. Запустите локальную Graph Node с установленным **_соответствующим fork-base_**.
+1. Внесите изменения в источник мэппингов, которые, по Вашему мнению, решат проблему.
+2. Разверните на локальной Graph Node, **_используя форкинг неудавшегося субграфа_** и **_начав с проблемного блока_**.
+3. Если он снова сломается, вернитесь к пункту 1, в противном случае: Ура!
-Now, you may have 2 questions:
+Сейчас у Вас может появиться 2 вопроса:
-1. fork-base what???
-2. Forking who?!
+1. fork-base - что это???
+2. Форкнуть кого?!
-And I answer:
+И я вам отвечаю:
-1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store.
-2. Forking is easy, no need to sweat:
+1. `fork-base` — это "базовый" URL, такой, что при добавлении к нему _ID субграфа_ получившийся URL становится действительной конечной точкой GraphQL для хранилища субграфа.
+2.
Форкнуть легко, не нужно напрягаться: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Кроме того, не забудьте установить поле `dataSources.source.startBlock` в манифесте субграфа на номер проблемного блока, чтобы пропустить индексирование ненужных блоков и воспользоваться форком! -So, here is what I do: +Итак, вот что я делаю: -1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. Я запускаю локальную Graph Node ([вот как это сделать](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) с опцией `fork-base`, установленной на: `https://api.thegraph.com/subgraphs/id/`, так как я собираюсь форкать субграф, тот самый, который я развернул ранее, с [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -90,12 +90,12 @@ $ cargo run -p graph-node --release -- \ --fork-base https://api.thegraph.com/subgraphs/id/ ``` -2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. 
After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+2. После тщательной проверки я замечаю, что существует несоответствие в представлениях `id`, используемых при индексировании `Gravatar` в двух моих обработчиках. В то время как `handleNewGravatar` конвертирует его в hex (`event.params.id.toHex()`), `handleUpdatedGravatar` использует int32 (`event.params.id.toI32()`), что приводит к тому, что `handleUpdatedGravatar` завершается ошибкой и появляется сообщение "Gravatar not found!". Я заставляю оба обработчика конвертировать `id` в hex.
+3. После внесения изменений я развертываю свой субграф на локальной Graph Node, **_форкаю неудавшийся субграф_** и устанавливаю значение `dataSources.source.startBlock` равным `6190343` в файле `subgraph.yaml`:
```bash
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```
-4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
-5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
+4. Я проверяю логи, созданные локальной Graph Node, и, ура!, кажется, все работает.
+5. Я развертываю теперь безошибочный субграф на удаленной Graph Node и живу счастливо до конца своих дней!
(только без картошки) diff --git a/website/src/pages/ru/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/ru/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..1469f39676a8 100644 --- a/website/src/pages/ru/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/ru/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,29 +1,29 @@ --- -title: Safe Subgraph Code Generator +title: Генератор кода безопасного субграфа --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) — это инструмент генерации кода, который создает набор вспомогательных функций из схемы GraphQL проекта. Он гарантирует, что все взаимодействия с объектами в Вашем субграфе будут полностью безопасными и последовательными. -## Why integrate with Subgraph Uncrashable? +## Зачем интегрироваться с Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. +- **Непрерывная работоспособность**: неправильная обработка объектов может привести к сбоям в работе субграфа, что может нарушить работу проектов, зависимых от The Graph. Настройте вспомогательные функции, чтобы сделать ваши субграфы "неподвластными сбоям" и обеспечить бесперебойную работу бизнеса. -- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. 
+- **Полностью безопасно**: распространенные проблемы при разработке субграфа включают ошибки загрузки неопределенных объектов, отсутствие установки или инициализации всех значений объектов, а также гонки данных при загрузке и сохранении объектов. Убедитесь, что все взаимодействия с объектами являются полностью атомарными.
-- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+- **Конфигурируемо пользователем**: установите значения по умолчанию и настройте уровень проверок безопасности в соответствии с потребностями вашего проекта. Предупреждающие логи записываются в случае нарушения логики субграфа, что помогает устранить проблему и обеспечить точность данных.
-**Key Features**
+**Ключевые особенности**
-- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- Инструмент генерации кода поддерживает **все** типы субграфов и конфигурируем для пользователей, чтобы они могли устанавливать разумные значения по умолчанию. Генерация кода будет использовать эту конфигурацию для создания вспомогательных функций, соответствующих спецификации пользователя.
-- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+- Фреймворк также включает в себя способ создания пользовательских, но безопасных функций установки для групп переменных объектов (через config-файл).
Таким образом, пользователь не сможет загрузить/использовать устаревший объект графа, а также не сможет забыть сохранить или установить переменную, которая требуется функции.
-- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+- Предупреждающие логи записываются, указывая на места нарушения логики субграфа, чтобы помочь устранить проблему и обеспечить точность данных.
-Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+Subgraph Uncrashable можно запустить как необязательный флаг с помощью команды Graph CLI codegen.
```sh
graph codegen -u [options] []
```
-Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
+Ознакомьтесь с [документацией по subgraph uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/docs/) или посмотрите это [видеоруководство](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial), чтобы узнать больше и начать разрабатывать более безопасные субграфы.
diff --git a/website/src/pages/ru/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ru/subgraphs/guides/transfer-to-the-graph.mdx
index a62072c48373..fa78162eb377 100644
--- a/website/src/pages/ru/subgraphs/guides/transfer-to-the-graph.mdx
+++ b/website/src/pages/ru/subgraphs/guides/transfer-to-the-graph.mdx
@@ -1,104 +1,104 @@
---
-title: Transfer to The Graph
+title: Перенос в The Graph
---
-Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+Быстро обновите свои субграфы с любой платформы до [децентрализованной сети The Graph](https://thegraph.com/networks/).
-## Benefits of Switching to The Graph +## Преимущества перехода на The Graph -- Use the same Subgraph that your apps already use with zero-downtime migration. -- Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. +- Используйте тот же субграф, который уже используется в Ваших приложениях, с миграцией без времени простоя. +- Повышайте надежность благодаря глобальной сети, поддерживаемой более чем 100 индексаторами. +- Получайте молниеносную поддержку для субграфов круглосуточно, с командой инженеров на связи. -## Upgrade Your Subgraph to The Graph in 3 Easy Steps +## Обновите свой субграф до The Graph за 3 простых шага -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Настройте свою среду Studio](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Разверните свой субграф в Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Опубликуйте в сети The Graph](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) -## 1. Set Up Your Studio Environment +## 1. Настройте свою среду в Studio -### Create a Subgraph in Subgraph Studio +### Создайте субграф в Subgraph Studio -- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". +- Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и подключите свой кошелек. +- Нажмите "Создать субграф". 
Рекомендуется называть субграф так, чтобы каждое слово начиналось с заглавной буквы (Title Case): "Subgraph Name Chain Name".
-> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly.
+> Примечание: после публикации название субграфа можно будет изменять, но для этого каждый раз потребуется действие в сети, поэтому сразу дайте ему правильное название.
-### Install the Graph CLI⁠
+### Установите Graph CLI
-You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.
+Для использования Graph CLI у Вас должны быть установлены [Node.js](https://nodejs.org/) и выбранный Вами менеджер пакетов (`npm` или `pnpm`). Проверьте [самую последнюю](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) версию CLI.
-On your local machine, run the following command:
+Выполните следующую команду на своем локальном компьютере:
-Using [npm](https://www.npmjs.com/):
+С помощью [npm](https://www.npmjs.com/):
```sh
npm install -g @graphprotocol/graph-cli@latest
```
-Use the following command to create a Subgraph in Studio using the CLI:
+Используйте следующую команду для создания субграфа в Studio с помощью CLI:
```sh
graph init --product subgraph-studio
```
-### Authenticate Your Subgraph
+### Аутентификация Вашего субграфа
-In The Graph CLI, use the auth command seen in Subgraph Studio:
+В Graph CLI используйте команду `auth`, как показано в Subgraph Studio:
```sh
graph auth
```
-## 2. Deploy Your Subgraph to Studio
+## 2. Разверните свой субграф в Studio
-If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph.
+Если у Вас есть исходный код, Вы можете легко развернуть его в Studio.
Если его нет, вот быстрый способ развернуть ваш субграф.
-In The Graph CLI, run the following command:
+В Graph CLI выполните следующую команду:
```sh
graph deploy --ipfs-hash
```
-> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+> **Примечание:** у каждого субграфа есть хэш IPFS (Deployment ID), который выглядит так: "Qmasdfad...". Для развертывания просто используйте этот **IPFS хэш**. Вам будет предложено ввести версию (например, v0.0.1).
-## 3. Publish Your Subgraph to The Graph Network
+## 3. Опубликуйте свой субграф в The Graph Network
-![publish button](/img/publish-sub-transfer.png)
+![кнопка публикации](/img/publish-sub-transfer.png)
-### Query Your Subgraph
+### Запросите Ваш субграф
-> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+> Чтобы привлечь около 3 индексаторов для обработки запросов к вашему субграфу, рекомендуется курировать как минимум 3 000 GRT. Чтобы узнать больше о курировании, ознакомьтесь с разделом [Курирование](/resources/roles/curating/) на The Graph.
-You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+Вы можете начать [выполнять запросы](/subgraphs/querying/introduction/) к любому субграфу, отправляя GraphQL-запрос на конечную точку субграфа, которая находится в верхней части его страницы в эксплорере в Subgraph Studio.
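Такой запрос можно собрать и отправить обычным `fetch`: это POST с JSON-телом вида `{ query }`. Набросок на TypeScript (поле `punks` в запросе — лишь предположение о схеме субграфа, а `your-api-key` — заглушка):

```typescript
// Конечная точка субграфа; подставьте свой API-ключ вместо заглушки.
const endpoint =
  "https://gateway-arbitrum.network.thegraph.com/api/your-api-key/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK";

// Поле `punks` — условное предположение о схеме этого субграфа.
const query = "{ punks(first: 5) { id } }";

interface GraphQLRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// GraphQL-запрос к конечной точке — это обычный POST с JSON-телом { query }.
function buildRequest(url: string, query: string): GraphQLRequest {
  return {
    url,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  };
}

const req = buildRequest(endpoint, query);
// Отправка (закомментирована, чтобы набросок не ходил в сеть):
// fetch(req.url, { method: req.method, headers: req.headers, body: req.body })
//   .then((r) => r.json())
//   .then((data) => console.log(data));
```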
-#### Example +#### Пример -[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[Субграф CryptoPunks Ethereum](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) от Messari: -![Query URL](/img/cryptopunks-screenshot-transfer.png) +![URL запроса](/img/cryptopunks-screenshot-transfer.png) -The query URL for this Subgraph is: +URL для запроса этого субграфа: ```sh -https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK +https://gateway-arbitrum.network.thegraph.com/api/`**Ваш-api-ключ**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK ``` -Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint. +Теперь Вам нужно просто вставить **Ваш собственный API-ключ**, чтобы начать отправлять GraphQL-запросы на эту конечную точку. -### Getting your own API Key +### Получение собственного API-ключа -You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page: +Вы можете создать API-ключи в Subgraph Studio в меню «API Keys» в верхней части страницы: -![API keys](/img/Api-keys-screenshot.png) +![API ключи](/img/Api-keys-screenshot.png) -### Monitor Subgraph Status +### Мониторинг статуса субграфа -Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +После обновления вы можете получить доступ к своим субграфам и управлять ими в [Subgraph Studio](https://thegraph.com/studio/) и исследовать все субграфы в [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### Дополнительные ресурсы -- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). 
-- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +- Чтобы быстро создать и опубликовать новый субграф, ознакомьтесь с [Руководством по быстрому старту](/subgraphs/quick-start/). +- Чтобы исследовать все способы оптимизации и настройки вашего субграфа для лучшей производительности, читайте больше о [создании субграфа здесь](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ru/subgraphs/querying/best-practices.mdx b/website/src/pages/ru/subgraphs/querying/best-practices.mdx index e7ecc1795d98..d0189ac234ee 100644 --- a/website/src/pages/ru/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/ru/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. 
--- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ru/subgraphs/querying/from-an-application.mdx b/website/src/pages/ru/subgraphs/querying/from-an-application.mdx index 75853752f129..817d034d2d9b 100644 --- a/website/src/pages/ru/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ru/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Querying from an Application +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Шаг 1 @@ -51,7 +52,7 @@ Install The Graph Client CLI in your project: ```sh yarn add -D @graphprotocol/client-cli -# or, with NPM: +# или, с NPM: npm install --save-dev @graphprotocol/client-cli ``` @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Шаг 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Шаг 1 diff --git a/website/src/pages/ru/subgraphs/querying/graph-client/README.md b/website/src/pages/ru/subgraphs/querying/graph-client/README.md index 416cadc13c6f..071bb3c883b7 100644 --- a/website/src/pages/ru/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ru/subgraphs/querying/graph-client/README.md @@ -1,54 +1,54 @@ -# The Graph Client Tools +# Инструменты клиента The Graph -This repo is the home for [The 
Graph](https://thegraph.com) consumer-side tools (for both browser and NodeJS environments). +Этот репозиторий является домом для потребительских инструментов [The Graph](https://thegraph.com) (как для браузерных, так и для NodeJS сред). -## Background +## Предисловие -The tools provided in this repo are intended to enrich and extend the DX, and add the additional layer required for dApps in order to implement distributed applications. +Инструменты, предоставленные в этом репозитории, предназначены для улучшения и расширения разработческого опыта (DX), а также для добавления дополнительного слоя, необходимого для децентрализованных приложений (dApps), чтобы реализовать распределенные приложения. -Developers who consume data from [The Graph](https://thegraph.com) GraphQL API often need peripherals for making data consumption easier, and also tools that allow using multiple indexers at the same time. +Разработчики, которые потребляют данные через GraphQL API от [The Graph](https://thegraph.com), часто нуждаются в периферийных инструментах для облегчения потребления данных, а также в инструментах, которые позволяют использовать несколько индексаторов одновременно. -## Features and Goals +## Функции и цели -This library is intended to simplify the network aspect of data consumption for dApps. The tools provided within this repository are intended to run at build time, in order to make execution faster and performant at runtime. +Эта библиотека предназначена для упрощения сетевого аспекта потребления данных для децентрализованных приложений (dApps). Инструменты, предоставленные в этом репозитории, предназначены для работы во время сборки, чтобы сделать выполнение более быстрым и производительным в момент выполнения. -> The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! 
+> Инструменты, предоставленные в этом репозитории, могут использоваться как самостоятельно, так и в сочетании с любым существующим GraphQL клиентом! -| Status | Feature | Notes | -| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| Статус | Функция | Примечания | +| :----: | -------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | +| ✅ | Несколько индексаторов | основано на стратегиях выборки | +| ✅ | Стратегия выборки | timeout, retry, fallback, race, highestValue | +| ✅ | Валидации и оптимизации во время сборки | | +| ✅ | Композиция на стороне клиента | с улучшенным планировщиком выполнения (на основе GraphQL-Mesh) | +| ✅ | 
Кросс-чейн обработка субграфов | Использование схожих субграфов как единого источника | +| ✅ | Выполнение сырых данных (автономный режим) | напрямую, без GraphQL-клиента | +| ✅ | Локальные (клиентские) мутации | | +| ✅ | [Автоматическое отслеживание блоков](../packages/block-tracking/README.md) | отслеживание номеров блоков [как описано здесь](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Автоматическая пагинация](../packages/auto-pagination/README.md) | выполнение нескольких запросов в одном вызове для получения большего числа записей, чем лимит индексатора | +| ✅ | Интеграция с `@apollo/client` | | +| ✅ | Интеграция с `urql` | | +| ✅ | Поддержка TypeScript | со встроенным GraphQL Codegen и `TypedDocumentNode` | +| ✅ | [`@live` запросы](./live.md) | На основе опроса | -> You can find an [extended architecture design here](./architecture.md) +> Вы можете найти [расширенный архитектурный дизайн здесь](./architecture.md) -## Getting Started +## Начало работы -You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: +Вы можете посмотреть [Эпизод 45 `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client), чтобы узнать больше о Graph Client: -[![GraphQL.wtf Episode 45](https://img.youtube.com/vi/ZsRAmyUtvwg/0.jpg)](https://graphql.wtf/episodes/45-the-graph-client) +[![GraphQL.wtf Эпизод 45](https://img.youtube.com/vi/ZsRAmyUtvwg/0.jpg)](https://graphql.wtf/episodes/45-the-graph-client) -To get started, make sure to install [The Graph Client CLI] in your project: +Чтобы начать, убедитесь, что установили [The Graph Client CLI] в свой проект: ```sh yarn add -D @graphprotocol/client-cli -# or, with NPM: +# или, с NPM: npm install --save-dev @graphprotocol/client-cli ``` -> The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app!
+> CLI устанавливается как зависимость для разработки, поскольку мы используем его для создания оптимизированных артефактов времени выполнения, которые могут быть загружены непосредственно из Вашего приложения! -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +Создайте конфигурационный файл (под названием `.graphclientrc.yml`) и укажите Ваши GraphQL конечные точки, предоставленные The Graph, например: ```yml # .graphclientrc.yml @@ -59,28 +59,28 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 ``` -Now, create a runtime artifact by running The Graph Client CLI: +Теперь создайте артефакт времени выполнения, запустив The Graph Client CLI: ```sh graphclient build ``` -> Note: you need to run this with `yarn` prefix, or add that as a script in your `package.json`. +> Примечание: Вам нужно выполнить это с префиксом `yarn`, или добавить это как скрипт в свой `package.json`. -This should produce a ready-to-use standalone `execute` function, that you can use for running your application GraphQL operations, you should have an output similar to the following: +Это должно создать готовую к использованию автономную функцию `execute`, которую Вы сможете использовать для выполнения операций GraphQL в своем приложении. Вы должны получить вывод, похожий на следующий: ```sh -GraphClient: Cleaning existing artifacts -GraphClient: Reading the configuration -🕸️: Generating the unified schema -🕸️: Generating artifacts -🕸️: Generating index file in TypeScript -🕸️: Writing index.ts for ESM to the disk. -🕸️: Cleanup -🕸️: Done! => .graphclient +GraphClient: Очистка существующих артефактов +GraphClient: Чтение конфигурации +🕸️: Генерация унифицированной схемы +🕸️: Генерация артефактов +🕸️: Генерация индекса в TypeScript +🕸️: Запись index.ts для ESM на диск +🕸️: Очистка +🕸️: Готово! 
=> .graphclient ``` -Now, the `.graphclient` artifact is generated for you, and you can import it directly from your code, and run your queries: +Теперь артефакт `.graphclient` для Вас сгенерирован, и Вы можете импортировать его напрямую в свой код и выполнять запросы: ```ts import { execute } from '../.graphclient' @@ -111,54 +111,54 @@ async function main() { main() ``` -### Using Vanilla JavaScript Instead of TypeScript +### Использование Vanilla JavaScript вместо TypeScript -GraphClient CLI generates the client artifacts as TypeScript files by default, but you can configure CLI to generate JavaScript and JSON files together with additional TypeScript definition files by using `--fileType js` or `--fileType json`. +По умолчанию, GraphClient CLI генерирует артефакты клиента в виде файлов TypeScript, но Вы можете настроить CLI для генерации файлов JavaScript и JSON вместе с дополнительными файлами определений TypeScript, используя `--fileType js` или `--fileType json`. -`js` flag generates all files as JavaScript files with ESM Syntax and `json` flag generates source artifacts as JSON files while entrypoint JavaScript file with old CommonJS syntax because only CommonJS supports JSON files as modules. +Флаг `js` генерирует все файлы как JavaScript файлы с синтаксисом ESM, а флаг `json` генерирует исходные артефакты как JSON файлы, при этом файл точки входа будет на старом синтаксисе CommonJS, поскольку только CommonJS поддерживает JSON файлы как модули. -Unless you use CommonJS(`require`) specifically, we'd recommend you to use `js` flag. +Если Вы специально не используете CommonJS (`require`), мы рекомендуем использовать флаг `js`. 
`graphclient --fileType js` -- [An example for JavaScript usage in CommonJS syntax with JSON files](../examples/javascript-cjs) -- [An example for JavaScript usage in ESM syntax](../examples/javascript-esm) +- [Пример использования JavaScript в синтаксисе CommonJS с JSON файлами](../examples/javascript-cjs) +- [Пример использования JavaScript в синтаксисе ESM](../examples/javascript-esm) -#### The Graph Client DevTools +#### Инструменты разработки The Graph Client -The Graph Client CLI comes with a built-in GraphiQL, so you can experiment with queries in real-time. +The Graph Client CLI включает встроенный GraphiQL, который позволяет Вам экспериментировать с запросами в реальном времени. -The GraphQL schema served in that environment, is the eventual schema based on all composed Subgraphs and transformations you applied. +GraphQL-схема, обслуживаемая в этой среде, представляет собой итоговую схему, основанную на всех составленных субграфах и примененных преобразованиях. -To start the DevTool GraphiQL, run the following command: +Чтобы запустить DevTool GraphiQL, выполните следующую команду: ```sh graphclient serve-dev ``` -And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 +А затем откройте [http://localhost:4000/](http://localhost:4000/), чтобы использовать GraphiQL. Теперь Вы можете экспериментировать со своей GraphQL-схемой на стороне клиента локально! 
🥳 -#### Examples +#### Примеры -You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: +Вы также можете обратиться к [каталогу с примерами в этом репозитории](../examples) для более продвинутых примеров и примеров интеграции: -- [TypeScript & React example with raw `execute` and built-in GraphQL-Codegen](../examples/execute) -- [TS/JS NodeJS standalone mode](../examples/node) -- [Client-Side GraphQL Composition](../examples/composition) -- [Integration with Urql and React](../examples/urql) -- [Integration with NextJS and TypeScript](../examples/nextjs) -- [Integration with Apollo-Client and React](../examples/apollo) -- [Integration with React-Query](../examples/react-query) -- _Cross-chain merging (same Subgraph, different chains)_ -- - [Parallel SDK calls](../examples/cross-chain-sdk) -- - [Parallel internal calls with schema extensions](../examples/cross-chain-extension) -- [Customize execution with Transforms (auto-pagination and auto-block-tracking)](../examples/transforms) +- [Пример TypeScript и React с использованием `execute` и встроенного GraphQL-Codegen](../examples/execute) +- [Автономный режим TS/JS NodeJS](../examples/node) +- [Клиентская композиция GraphQL](../examples/composition) +- [Интеграция с Urql и React](../examples/urql) +- [Интеграция с NextJS и TypeScript](../examples/nextjs) +- [Интеграция с Apollo-Client и React](../examples/apollo) +- [Интеграция с React-Query](../examples/react-query) +- _Кросс-чейн слияние (тот же субграф, разные чейны)_ +- - [Параллельные вызовы SDK](../examples/cross-chain-sdk) +- - [Параллельные внутренние вызовы с расширениями схемы](../examples/cross-chain-extension) +- [Настройка выполнения с помощью трансформаций (автоматическая пагинация и автоматическое отслеживание блоков)](../examples/transforms) -### Advanced Examples/Features +### Продвинутые примеры/функции -#### Customize Network Calls +#### Настройка сетевых вызовов -You can customize
the network execution (for example, to add authentication headers) by using `operationHeaders`: +Вы можете настроить выполнение сетевых запросов (например, для добавления заголовков аутентификации), используя `operationHeaders`: ```yaml sources: @@ -170,7 +170,7 @@ sources: Authorization: Bearer MY_TOKEN ``` -You can also use runtime variables if you wish, and specify it in a declarative way: +Вы также можете использовать переменные времени выполнения, если хотите, и указать их декларативным способом: ```yaml sources: @@ -182,7 +182,7 @@ sources: Authorization: Bearer {context.config.apiToken} ``` -Then, you can specify that when you execute operations: +Затем Вы можете указать следующее, когда выполняете операции: ```ts execute(myQuery, myVariables, { @@ -192,11 +192,11 @@ execute(myQuery, myVariables, { }) ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> Полную документацию по обработчику `graphql` можно найти [здесь](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). -#### Environment Variables Interpolation +#### Интерполяция переменных среды -If you wish to use environment variables in your Graph Client configuration file, you can use interpolation with `env` helper: +Если Вы хотите использовать переменные среды в конфигурационном файле своего Graph Client, Вы можете использовать интерполяцию с помощью помощника `env`: ```yaml sources: @@ -205,12 +205,12 @@ sources: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 operationHeaders: - Authorization: Bearer {env.MY_API_TOKEN} # runtime + Authorization: Bearer {env.MY_API_TOKEN} # время выполнения ``` -Then, make sure to have `MY_API_TOKEN` defined when you run `process.env` at runtime. +Затем убедитесь, что `MY_API_TOKEN` определён, когда Вы выполняете `process.env` во время выполнения программы. 
-You can also specify environment variables to be filled at build time (during `graphclient build` run) by using the env-var name directly: +Вы также можете указать переменные среды, которые будут заполняться во время сборки (при запуске `graphclient build`), используя непосредственно имя переменной среды: ```yaml sources: @@ -219,23 +219,23 @@ sources: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 operationHeaders: - Authorization: Bearer ${MY_API_TOKEN} # build time + Authorization: Bearer ${MY_API_TOKEN} # время сборки ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> Полную документацию по обработчику `graphql` можно найти [здесь](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). -#### Fetch Strategies and Multiple Graph Indexers +#### Стратегии выборки данных и работа с несколькими Graph-индексаторами -It's a common practice to use more than one indexer in dApps, so to achieve the ideal experience with The Graph, you can specify several `fetch` strategies in order to make it more smooth and simple. +Это обычная практика — использовать несколько индексаторов в децентрализованных приложениях (dApps), поэтому для достижения наилучшего опыта работы с The Graph Вы можете указать несколько стратегий `fetch`, чтобы сделать процесс более плавным и простым. -All `fetch` strategies can be combined to create the ultimate execution flow. +Все стратегии `fetch` можно комбинировать для создания идеального потока выполнения.
`retry` -The `retry` mechanism allow you to specify the retry attempts for a single GraphQL endpoint/source. +Механизм `retry` позволяет указать количество попыток повторного запроса для одной GraphQL конечной точки/источника. -The retry flow will execute in both conditions: a netword error, or due to a runtime error (indexing issue/inavailability of the indexer). +Механизм повторных попыток будет выполняться в обоих случаях: при ошибке сети или при ошибке выполнения (проблемы с индексированием/недоступность индексатора). ```yaml sources: @@ -243,7 +243,7 @@ sources: handler: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 - retry: 2 # specify here, if you have an unstable/error prone indexer + retry: 2 # укажите здесь, если у вас нестабильный/подверженный ошибкам индексатор ```
@@ -251,7 +251,7 @@ sources:
`timeout` -The `timeout` mechanism allow you to specify the `timeout` for a given GraphQL endpoint. +Механизм `timeout` позволяет задать `timeout` для указанной конечной точки GraphQL. ```yaml sources: @@ -259,7 +259,7 @@ sources: handler: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 - timeout: 5000 # 5 seconds + timeout: 5000 # 5 секунд ```
@@ -267,9 +267,9 @@ sources:
`fallback` -The `fallback` mechanism allow you to specify use more than one GraphQL endpoint, for the same source. +Механизм `fallback` позволяет указать несколько конечных точек GraphQL для одного и того же источника. -This is useful if you want to use more than one indexer for the same Subgraph, and fallback when an error/timeout happens. You can also use this strategy in order to use a custom indexer, but allow it to fallback to [The Graph Hosted Service](https://thegraph.com/hosted-service). +Это полезно, если Вы хотите использовать более одного индексатора для одного и того же субграфа и переключаться на другой в случае ошибки или тайм-аута. Вы также можете использовать эту стратегию для использования кастомного индексатора, но в случае необходимости переключаться на [The Graph Hosted Service](https://thegraph.com/hosted-service). ```yaml sources: @@ -289,9 +289,9 @@ sources:
`race` -The `race` mechanism allow you to specify use more than one GraphQL endpoint, for the same source, and race on every execution. +Механизм `race` позволяет указать несколько GraphQL-эндпоинтов для одного источника данных, выполняя их конкурентный опрос при каждом запросе. -This is useful if you want to use more than one indexer for the same Subgraph, and allow both sources to race and get the fastest response from all specified indexers. +Это полезно, если вы хотите использовать несколько индексаторов для одного субграфа и позволить им конкурировать за получение самого быстрого ответа от всех указанных индексаторов. ```yaml sources: @@ -308,10 +308,10 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. -This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. +Эта стратегия позволяет отправлять параллельные запросы к различным конечным точкам для одного и того же источника и выбирать наиболее актуальный ответ. + +Это полезно, если Вы хотите выбрать наиболее синхронизированные данные для одного субграфа среди нескольких индексаторов/источников. ```yaml sources: @@ -349,9 +349,9 @@ graph LR;
-#### Block Tracking +#### Отслеживание блоков -The Graph Client can track block numbers and do the following queries by following [this pattern](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) with `blockTracking` transform; +Graph Client может отслеживать номера блоков и выполнять следующие запросы, следуя [этой схеме](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) с использованием преобразования `blockTracking`; ```yaml sources: @@ -361,23 +361,23 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 transforms: - blockTracking: - # You might want to disable schema validation for faster startup + # Вы можете отключить проверку схемы для более быстрого старта validateSchema: true - # Ignore the fields that you don't want to be tracked + # Игнорируйте поля, которые вы не хотите отслеживать ignoreFieldNames: [users, prices] - # Exclude the operation with the following names + # Исключите операции с указанными именами ignoreOperationNames: [NotFollowed] ``` -[You can try a working example here](../examples/transforms) +[Здесь Вы можете попробовать рабочий пример](../examples/transforms) -#### Automatic Pagination +#### Автоматическая пагинация -With most subgraphs, the number of records you can fetch is limited. In this case, you have to send multiple requests with pagination. +Для большинства субграфов количество записей, которые Вы можете извлечь, ограничено. В этом случае Вам нужно отправить несколько запросов с пагинацией. 
```graphql query { - # Will throw an error if the limit is 1000 + # Выдаст ошибку, если лимит равен 1000 users(first: 2000) { id name @@ -385,11 +385,11 @@ query { } ``` -So you have to send the following operations one after the other: +Таким образом, Вам нужно отправить следующие операции одну за другой: ```graphql query { - # Will throw an error if the limit is 1000 + # Выдаст ошибку, если лимит равен 1000 users(first: 1000) { id name @@ -397,11 +397,11 @@ query { } ``` -Then after the first response: +Затем после первого ответа: ```graphql query { - # Will throw an error if the limit is 1000 + # Выдаст ошибку, если лимит равен 1000 users(first: 1000, skip: 1000) { id name @@ -409,9 +409,9 @@ query { } ``` -After the second response, you have to merge the results manually. But instead The Graph Client allows you to do the first one and automatically does those multiple requests for you under the hood. +После второго ответа Вам пришлось бы вручную объединять результаты. Однако Graph Client позволяет выполнить первый запрос, а затем в фоновом режиме обрабатывает все остальные. -All you have to do is: +Всё, что Вам нужно сделать, это: ```yaml sources: @@ -421,21 +421,21 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 transforms: - autoPagination: - # You might want to disable schema validation for faster startup + # Вы можете отключить проверку схемы для более быстрого старта validateSchema: true ``` -[You can try a working example here](../examples/transforms) +[Здесь Вы можете попробовать рабочий пример](../examples/transforms) -#### Client-side Composition +#### Композиция на стороне клиента -The Graph Client has built-in support for client-side GraphQL Composition (powered by [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). 
+Graph Client имеет встроенную поддержку композиции GraphQL на стороне клиента (реализованную с помощью [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). -You can leverage this feature in order to create a single GraphQL layer from multiple Subgraphs, deployed on multiple indexers. +Вы можете использовать эту функцию для создания единого слоя GraphQL из нескольких субграфов, развернутых на нескольких индексаторах. -> 💡 Tip: You can compose any GraphQL sources, and not only Subgraphs! +> 💡 Совет: Вы можете комбинировать любые источники GraphQL, а не только субграфы! -Trivial composition can be done by adding more than one GraphQL source to your `.graphclientrc.yml` file, here's an example: +Тривиальную композицию можно выполнить, добавив более одного источника GraphQL в Ваш файл `.graphclientrc.yml`, вот пример: ```yaml sources: @@ -449,15 +449,15 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/graphprotocol/compound-v2 ``` -As long as there a no conflicts across the composed schemas, you can compose it, and then run a single query to both Subgraphs: +Пока нет конфликтов между объединёнными схемами, Вы можете их составлять, а затем выполнить один запрос ко всем субграфам: ```graphql query myQuery { - # this one is coming from compound-v2 + # этот запрос поступает от compound-v2 markets(first: 7) { borrowRate } - # this one is coming from uniswap-v2 + # этот запрос поступает от uniswap-v2 pair(id: "0x00004ee988665cdda9a1080d5792cecd16dc1220") { id token0 { @@ -470,33 +470,33 @@ query myQuery { } ``` -You can also resolve conflicts, rename parts of the schema, add custom GraphQL fields, and modify the entire execution phase. +Вы также можете разрешать конфликты, переименовывать части схемы, добавлять пользовательские поля GraphQL и изменять всю фазу выполнения. 
-For advanced use-cases with composition, please refer to the following resources: +Для сложных сценариев использования композиций обратитесь к следующим ресурсам: -- [Advanced Composition Example](../examples/composition) -- [GraphQL-Mesh Schema transformations](https://graphql-mesh.com/docs/transforms/transforms-introduction) -- [GraphQL-Tools Schema-Stitching documentation](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas) +- [Пример сложной композиции](../examples/composition) +- [Преобразования схемы GraphQL-Mesh](https://graphql-mesh.com/docs/transforms/transforms-introduction) +- [Документация по объединению схем с помощью GraphQL-Tools](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas) -#### TypeScript Support +#### Поддержка TypeScript -If your project is written in TypeScript, you can leverage the power of [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) and have a fully-typed GraphQL client experience. +Если Ваш проект написан на TypeScript, Вы можете использовать возможности [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) и получить полностью типизированный опыт работы с GraphQL-клиентом. -The standalone mode of The GraphQL, and popular GraphQL client libraries like Apollo-Client and urql has built-in support for `TypedDocumentNode`! +Автономный режим Graph Client, а также популярные библиотеки GraphQL-клиентов, такие как Apollo-Client и urql, имеют встроенную поддержку `TypedDocumentNode`! -The Graph Client CLI comes with a ready-to-use configuration for [GraphQL Code Generator](https://graphql-code-generator.com), and it can generate `TypedDocumentNode` based on your GraphQL operations. +CLI Graph Client поставляется с готовой конфигурацией для [GraphQL Code Generator](https://graphql-code-generator.com) и может генерировать `TypedDocumentNode` на основе Ваших GraphQL-операций.
-To get started, define your GraphQL operations in your application code, and point to those files using the `documents` section of `.graphclientrc.yml`: +Чтобы начать, определите Ваши GraphQL-операции в коде приложения и укажите пути к этим файлам в разделе `documents` файла `.graphclientrc.yml`: ```yaml sources: - - # ... your Subgraphs/GQL sources here + - # ... Ваши субграфы/источники GQL здесь documents: - ./src/example-query.graphql ``` -You can also use Glob expressions, or even point to code files, and the CLI will find your GraphQL queries automatically: +Вы также можете использовать выражения Glob или даже указывать файлы кода, и CLI автоматически найдет Ваши GraphQL-запросы: ```yaml documents: @@ -504,37 +504,37 @@ documents: - './src/**/*.{ts,tsx,js,jsx}' ``` -Now, run the GraphQL CLI `build` command again, the CLI will generate a `TypedDocumentNode` object under `.graphclient` for every operation found. +Теперь снова выполните команду `build` в GraphQL CLI, и CLI сгенерирует объект `TypedDocumentNode` в `.graphclient` для каждой найденной операции. -> Make sure to name your GraphQL operations, otherwise it will be ignored! +> Обязательно давайте имена Вашим GraphQL-операциям, иначе они будут проигнорированы! -For example, a query called `query ExampleQuery` will have the corresponding `ExampleQueryDocument` generated in `.graphclient`. You can now import it and use that for your GraphQL calls, and you'll have a fully typed experience without writing or specifying any TypeScript manually: +Например, для запроса с именем `query ExampleQuery` будет сгенерирован соответствующий `ExampleQueryDocument` в `.graphclient`.
Теперь вы можете импортировать его и использовать для GraphQL-запросов, получая полностью типизированный опыт без необходимости вручную писать или указывать TypeScript: ```ts import { ExampleQueryDocument, execute } from '../.graphclient' async function main() { - // "result" variable is fully typed, and represents the exact structure of the fields you selected in your query. + // переменная "result" полностью типизирована и представляет точную структуру полей, которые вы выбрали в вашем запросе. const result = await execute(ExampleQueryDocument, {}) console.log(result) } ``` -> You can find a [TypeScript project example here](../examples/urql). +> Вы можете найти [пример проекта на TypeScript здесь](../examples/urql). -#### Client-Side Mutations +#### Мутации на стороне клиента -Due to the nature of Graph-Client setup, it is possible to add client-side schema, that you can later bridge to run any arbitrary code. +Из-за особенностей настройки Graph-Client, возможно добавление схемы на стороне клиента, которую затем можно использовать для выполнения произвольного кода. -This is helpful since you can implement custom code as part of your GraphQL schema, and have it as unified application schema that is easier to track and develop. +Это полезно, потому что Вы можете внедрить пользовательский код в часть своей схемы GraphQL и использовать его как единую схему приложения, что облегчает отслеживание и разработку. -> This document explains how to add custom mutations, but in fact you can add any GraphQL operation (query/mutation/subscriptions). See [Extending the unified schema article](https://graphql-mesh.com/docs/guides/extending-unified-schema) for more information about this feature. +> Этот документ объясняет, как добавить пользовательские мутации, но на самом деле Вы можете добавить любую операцию GraphQL (запросы/мутации/подписки). Для получения дополнительной информации о данной функции, см. 
статью [Расширение единой схемы](https://graphql-mesh.com/docs/guides/extending-unified-schema). -To get started, define a `additionalTypeDefs` section in your config file: +Чтобы начать, определите раздел `additionalTypeDefs` в Вашем конфигурационном файле: ```yaml additionalTypeDefs: | - # We should define the missing `Mutation` type + # Мы должны определить отсутствующий тип `Mutation` extend schema { mutation: Mutation } @@ -548,21 +548,21 @@ additionalTypeDefs: | } ``` -Then, add a pointer to a custom GraphQL resolvers file: +Затем добавьте указатель на файл с пользовательскими GraphQL-ресолверами: ```yaml additionalResolvers: - './resolvers' ``` -Now, create `resolver.js` (or, `resolvers.ts`) in your project, and implement your custom mutation: +Теперь создайте файл `resolver.js` (или `resolvers.ts`) в своем проекте и внедрите свою пользовательскую мутацию: ```js module.exports = { Mutation: { async doSomething(root, args, context, info) { - // Here, you can run anything you wish. - // For example, use `web3` lib, connect a wallet and so on. + // Здесь Вы можете выполнить все, что хотите. + // Например, использовать библиотеку `web3`, подключить кошелек и так далее. return true }, @@ -570,17 +570,17 @@ module.exports = { } ``` -If you are using TypeScript, you can also get fully type-safe signature by doing: +Если Вы используете TypeScript, Вы также можете получить полностью типобезопасную сигнатуру, сделав следующее: ```ts import { Resolvers } from './.graphclient' -// Now it's fully typed! +// Теперь всё полностью типизировано! const resolvers: Resolvers = { Mutation: { async doSomething(root, args, context, info) { - // Here, you can run anything you wish. - // For example, use `web3` lib, connect a wallet and so on. + // Здесь Вы можете выполнить любые операции, которые хотите. + // Например, использовать библиотеку `web3`, подключить кошелек и так далее. 
return true }, @@ -590,22 +590,22 @@ const resolvers: Resolvers = { export default resolvers ``` -If you need to inject runtime variables into your GraphQL execution `context`, you can use the following snippet: +Если Вам нужно внедрить переменные времени выполнения в Ваш `context` выполнения GraphQL, вы можете использовать следующий сниппет: ```ts execute( MY_QUERY, {}, { - myHelper: {}, // this will be available in your Mutation resolver as `context.myHelper` + myHelper: {}, // это будет доступно в Вашем ресолвере мутации как `context.myHelper` }, ) ``` -> [You can read more about client-side schema extensions here](https://graphql-mesh.com/docs/guides/extending-unified-schema) +> [Вы можете прочитать больше о расширениях схемы на стороне клиента здесь](https://graphql-mesh.com/docs/guides/extending-unified-schema) -> [You can also delegate and call Query fields as part of your mutation](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources) +> [Вы также можете делегировать и вызывать поля Query в рамках Вашей мутации](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources) -## License +## Лицензия -Released under the [MIT license](../LICENSE). +Выпущена под [лицензией MIT](../LICENSE). 
diff --git a/website/src/pages/ru/subgraphs/querying/graph-client/_meta-titles.json b/website/src/pages/ru/subgraphs/querying/graph-client/_meta-titles.json index ee554b4ac36f..a71a02842b68 100644 --- a/website/src/pages/ru/subgraphs/querying/graph-client/_meta-titles.json +++ b/website/src/pages/ru/subgraphs/querying/graph-client/_meta-titles.json @@ -1,3 +1,3 @@ { - "README": "Introduction" + "README": "Введение" } diff --git a/website/src/pages/ru/subgraphs/querying/graph-client/live.md b/website/src/pages/ru/subgraphs/querying/graph-client/live.md index e6f726cb4352..da0133b7a768 100644 --- a/website/src/pages/ru/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/ru/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## Начало работы Start by adding the following configuration to your `.graphclientrc.yml` file: @@ -12,7 +12,7 @@ plugins: defaultInterval: 1000 ``` -## Usage +## Применение Set the default update interval you wish to use, and then you can apply the following GraphQL `@directive` over your GraphQL queries: diff --git a/website/src/pages/ru/subgraphs/querying/graphql-api.mdx b/website/src/pages/ru/subgraphs/querying/graphql-api.mdx index cf058623eacf..899c3e3caa7e 100644 --- a/website/src/pages/ru/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ru/subgraphs/querying/graphql-api.mdx @@ -2,23 +2,23 @@ title: API GraphQL --- -Learn about the GraphQL Query API used in The Graph. +Узнайте о GraphQL API запросах, используемых в The Graph. ## Что такое GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. 
The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). -## Queries with GraphQL +## Запросы с GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. -> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. +> Примечание: `query` не нужно указывать в начале `graphql` запроса при использовании The Graph. ### Примеры -Query for a single `Token` entity defined in your schema: +Запрос для одного объекта `Token`, определенного в Вашей схеме: ```graphql { @@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> Note: When querying for a single entity, the `id` field is required, and it must be written as a string. +> Примечание: При запросе одного объекта поле `id` является обязательным и должно быть записано как строка. -Query all `Token` entities: +Запрос всех объектов `Token`: ```graphql { @@ -44,10 +44,10 @@ Query all `Token` entities: ### Сортировка -When querying a collection, you may: +При запросе коллекции Вы можете: -- Use the `orderBy` parameter to sort by a specific attribute. -- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. +- Использовать параметр `orderBy` для сортировки по определенному атрибуту. +- Использовать параметр `orderDirection`, чтобы указать направление сортировки `asc` для возрастания или `desc` для убывания. 
#### Пример @@ -62,9 +62,9 @@ When querying a collection, you may: #### Пример сортировки вложенных объектов -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +Начиная с Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), объекты можно сортировать на основе вложенных объектов. -The following example shows tokens sorted by the name of their owner: +В следующем примере мы сортируем токены по имени их владельца: ```graphql { @@ -77,18 +77,18 @@ The following example shows tokens sorted by the name of their owner: } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> В настоящее время сортировка возможна по одноуровневым полям типа `String` или `ID`, в полях `@entity` и `@derivedFrom`. К сожалению, [сортировка по интерфейсам в одноуровневых объектах](https://github.com/graphprotocol/graph-node/pull/4058), сортировка по полям-массивам и вложенным объектам пока не поддерживается. ### Пагинация -When querying a collection, it's best to: +При запросе коллекции лучше всего: -- Use the `first` parameter to paginate from the beginning of the collection. - - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. -- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. -- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. 
+- Использовать параметр `first` для пагинации данных с начала коллекции. + - Стандартная сортировка выполняется по `ID` в возрастающем алфавитно-числовом порядке, **не** по времени создания. +- Использовать параметр `skip`, чтобы пропускать объекты и осуществлять пагинацию. Например, `first:100` покажет первые 100 объектов, а `first:100, skip:100` покажет следующие 100 объектов. +- Избегайте использования `skip` в запросах, так как это обычно приводит к низкой производительности. Для получения большого количества элементов лучше выполнять постраничную загрузку объектов на основе атрибута, как показано в предыдущем примере. -#### Example using `first` +#### Пример использования `first` Запрос первых 10 токенов: @@ -101,11 +101,11 @@ When querying a collection, it's best to: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +Чтобы запросить группы объектов в середине коллекции, параметр `skip` можно использовать в сочетании с параметром `first`, чтобы пропустить указанное количество объектов, начиная с начала коллекции. -#### Example using `first` and `skip` +#### Пример использования `first` и `skip` -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +Запрос 10 объектов `Token`, смещенных на 10 позиций от начала коллекции: ```graphql { @@ -116,9 +116,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example using `first` and `id_ge` +#### Пример использования `first` и `id_ge` -If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. 
For example, a client could retrieve a large number of tokens using this query: +Если клиенту нужно получить большое количество объектов, эффективнее выполнять запросы на основе атрибута и фильтровать по этому атрибуту. Например, клиент может получить большое количество токенов с помощью следующего запроса: ```graphql query manyTokens($lastID: String) { @@ -129,16 +129,16 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +В первый раз запрос отправляется с `lastID = ""`, а в последующих запросах `lastID` устанавливается в значение атрибута `id` последнего объекта из предыдущего запроса. Этот подход значительно эффективнее, чем использование увеличивающихся значений `skip`. ### Фильтрация -- You can use the `where` parameter in your queries to filter for different properties. -- You can filter on multiple values within the `where` parameter. +- Вы можете использовать параметр `where` в запросах для фильтрации по различным свойствам. +- Вы можете фильтровать по нескольким значениям внутри параметра `where`. -#### Example using `where` +#### Пример использования `where` -Query challenges with `failed` outcome: +Запрос задач с результатом `failed`: ```graphql { @@ -152,7 +152,7 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +Вы можете использовать такие суффиксы, как `_gt`, `_lte` для сравнения значений: #### Пример фильтрации диапазона @@ -168,9 +168,9 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Пример фильтрации блока -You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. 
+Вы также можете фильтровать объекты, которые были обновлены на указанном блоке или позже, с помощью `_change_block(number_gte: Int)`. -Это может быть полезно, если Вы хотите получить только объекты, которые изменились, например, с момента последнего опроса. Или, в качестве альтернативы, может быть полезно исследовать или отладить изменнения объектов в Вашем субграфе (в сочетании с фильтрацией блоков Вы можете изолировать только объекты, которые изменились в определенном блоке). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -184,7 +184,7 @@ You can also filter entities that were updated in or after a specified block wit #### Пример фильтрации вложенных объектов -Filtering on the basis of nested entities is possible in the fields with the `_` suffix. +Фильтрация на основе вложенных объектов возможна в полях с суффиксом `_`. Это может быть полезно, если Вы хотите получать только объекты, у которых объекты дочернего уровня удовлетворяют заданным условиям. @@ -202,11 +202,11 @@ Filtering on the basis of nested entities is possible in the fields with the `_` #### Логические операторы -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria. +Начиная с Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), Вы можете группировать несколько параметров в одном аргументе `where`, используя операторы `and` или `or` для фильтрации результатов по нескольким критериям. 
-##### `AND` Operator +##### Оператор `AND` -The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +Следующий пример фильтрует задачи с `outcome` `succeeded` и `number` больше или равно `100`. ```graphql { @@ -220,7 +220,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num } ``` -> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. +> **Синтаксический сахар:** Вы можете упростить приведенный выше запрос, убрав оператор `and` и передав подвыражение, разделенное запятыми. > > ```graphql > { @@ -234,9 +234,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num > } > ``` -##### `OR` Operator +##### Оператор `OR` -The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +Следующий пример фильтрует задачи с `outcome` `succeeded` или `number` больше или равно `100`. ```graphql { @@ -250,7 +250,7 @@ The following example filters for challenges with `outcome` `succeeded` or `numb } ``` -> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries. +> **Примечание**: При составлении запросов важно учитывать влияние оператора `or` на производительность. Хотя `or` может быть полезным инструментом для расширения результатов поиска, он также может значительно замедлить запросы. 
Основная проблема в том, что `or` заставляет базу данных сканировать несколько индексов, что может быть ресурсоемким процессом. Чтобы избежать этих проблем, рекомендуется по возможности использовать оператор `and` вместо `or`. Это позволяет выполнять более точную фильтрацию и делает запросы быстрее и эффективнее. #### Все фильтры @@ -279,9 +279,9 @@ _not_ends_with _not_ends_with_nocase ``` -> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types. +> Обратите внимание, что некоторые суффиксы поддерживаются только для определенных типов. Например, `Boolean` поддерживает только `_not`, `_in` и `_not_in`, тогда как `_` доступен только для объектных и интерфейсных типов. -In addition, the following global filters are available as part of `where` argument: +Кроме того, в качестве части аргумента `where` доступны следующие глобальные фильтры: ```graphql _change_block(number_gte: Int) @@ -289,11 +289,11 @@ _change_block(number_gte: Int) ### Запросы на Time-travel -You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +Вы можете запрашивать состояние своих объектов не только для последнего блока, который используется по умолчанию, но и для произвольного блока в прошлом. Блок, в котором должен выполняться запрос, можно указать либо по номеру блока, либо по его хэшу, включив аргумент `block` в поля верхнего уровня запросов. 
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +Результат такого запроса не изменится со временем, то есть запрос на определенном прошедшем блоке вернет тот же результат, независимо от времени выполнения, за исключением случая, когда запрос выполняется на блоке, который находится очень близко к голове чейна. В этом случае результат может измениться, если этот блок окажется **не** на основном чейне, и чейн будет реорганизован. Как только блок можно будет считать окончательным, результат запроса больше не изменится. -> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Примечание: Текущая реализация все еще подвержена определенным ограничениям, которые могут нарушить эти гарантии. Реализация не всегда может точно определить, что данный хэш блока вообще не находится на основном чейне, или что результат запроса по хэшу блока для блока, который еще не считается окончательным, может быть изменен из-за реорганизации блоков, происходящей одновременно с запросом. 
Эти ограничения не влияют на результаты запросов по хэшу блока, если блок окончателен и подтвержден на основном чейне. В [этом issue](https://github.com/graphprotocol/graph-node/issues/1405) подробно объясняется, в чем состоят эти ограничения. #### Пример @@ -309,7 +309,7 @@ The result of such a query will not change over time, i.e., querying at a certai } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +Этот запрос вернет объекты `Challenge` и связанные с ними объекты `Application` в том виде, в каком они существовали сразу после обработки блока номер 8,000,000. #### Пример @@ -325,13 +325,13 @@ This query will return `Challenge` entities, and their associated `Application` } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. +Этот запрос вернет объекты `Challenge` и связанные с ними объекты `Application` в том виде, в каком они существовали сразу после обработки блока с заданным хэшем. ### Полнотекстовые поисковые запросы -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +Запросы полнотекстового поиска имеют одно обязательное поле, `text`, для указания условий поиска. 
В этом поле поиска `text` можно использовать несколько специальных операторов полнотекстового поиска. Полнотекстовые поисковые операторы: @@ -344,7 +344,7 @@ Fulltext search queries have one required field, `text`, for supplying search te #### Примеры -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. +Используя оператор `or`, этот запрос отфильтрует объекты блога, содержащие варианты слов "anarchism" или "crumpet" в их полнотекстовых полях. ```graphql { @@ -357,7 +357,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +Оператор `follow by` определяет слова, находящиеся на определенном расстоянии друг от друга в полнотекстовых документах. Следующий запрос вернет все блоги, содержащие варианты слова "decentralize", за которым следует "philosophy". ```graphql { @@ -385,25 +385,25 @@ The `follow by` operator specifies a words a specific distance apart in the full ### Валидация -Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more. 
+Graph Node реализует валидацию [на основе спецификации](https://spec.graphql.org/October2021/#sec-Validation) для получаемых GraphQL-запросов с использованием [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), которая основана на [референсной реализации graphql-js](https://github.com/graphql/graphql-js/tree/main/src/validation). Запросы, не прошедшие проверку валидации, завершаются стандартной ошибкой. Ознакомьтесь со [спецификацией GraphQL](https://spec.graphql.org/October2021/#sec-Validation), чтобы узнать больше. ## Схема -The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +Схема Ваших источников данных, то есть типы объектов, значения и связи, доступные для запросов, определяется с помощью [Языка определения интерфейсов GraphQL (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. 
+> Примечание: Наш API не предоставляет мутации, поскольку ожидается, что разработчики будут отправлять транзакции напрямую в базовый блокчейн из своих приложений. ### Объекты -All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. +Все типы GraphQL с директивами `@entity` в Вашей схеме будут рассматриваться как объекты и должны содержать поле `ID`. -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. +> **Примечание:** В настоящее время все типы в Вашей схеме должны иметь директиву `@entity`. В будущем мы будем рассматривать типы без директивы `@entity` как объекты значений, но на данный момент это не поддерживается. ### Метаданные субграфа -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,14 +419,14 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -Если предоставлен блок, метаданные относятся к этому блоку, в противном случае используется последний проиндексированный блок. Если предоставляется блок, он должен быть после начального блока субграфа и меньше или равен последнему проиндексированному блоку. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. -`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. +`deployment` — это уникальный идентификатор, соответствующий IPFS CID файла `subgraph.yaml`. 
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`): +`block` предоставляет информацию о последнем блоке (с учетом любых ограничений блоков, переданных в `_meta`): - hash: хэш блока - number: номер блока -- timestamp: временная метка блока, если она доступна (в настоящее время доступна только для субграфов, индексирующих сети EVM) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ru/subgraphs/querying/introduction.mdx b/website/src/pages/ru/subgraphs/querying/introduction.mdx index d28d11fa28e6..d7cc8fa082c3 100644 --- a/website/src/pages/ru/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ru/subgraphs/querying/introduction.mdx @@ -1,32 +1,32 @@ --- title: Запрос The Graph -sidebarTitle: Introduction +sidebarTitle: Введение --- -To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer). +Чтобы сразу приступить к запросу, посетите [The Graph Explorer](https://thegraph.com/explorer). ## Обзор -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Специфические особенности -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. 
+Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. -![Query Subgraph Button](/img/query-button-screenshot.png) +![Кнопка запроса субграфа](/img/query-button-screenshot.png) -![Query Subgraph URL](/img/query-url-screenshot.png) +![Запрос URL субграфа](/img/query-url-screenshot.png) -You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/). +Как Вы можете заметить, этот URL-адрес запроса должен использовать уникальный API-ключ. Вы можете создавать свои API-ключи и управлять ими в [Subgraph Studio](https://thegraph.com/studio) в разделе "API-ключи". Узнайте больше о том, как использовать Subgraph Studio [здесь](/deploying/subgraph-studio/). -Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). +Пользователи Subgraph Studio начинают с Бесплатного плана, который позволяет делать 100 000 запросов в месяц. Дополнительные запросы доступны в рамках Плана роста, который предлагает оплату дополнительных запросов по мере использования, кредитной картой или GRT в сети Arbitrum. Подробнее о тарифах можно узнать [здесь](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. 
> -> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. +> Примечание: Если Вы столкнулись с ошибками 405 при выполнении GET-запроса к URL Graph Explorer, попробуйте использовать POST-запрос. ### Дополнительные ресурсы -- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/). -- To query from an application, click [here](/subgraphs/querying/from-an-application/). -- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main). +- Используйте [лучшие практики выполнения запросов GraphQL](/subgraphs/querying/best-practices/). +- Чтобы выполнить запрос из приложения, нажмите [здесь](/subgraphs/querying/from-an-application/). +- Посмотреть [примеры запросов](https://github.com/graphprotocol/query-examples/tree/main). diff --git a/website/src/pages/ru/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ru/subgraphs/querying/managing-api-keys.mdx index 002aa22be689..b9a52472b66b 100644 --- a/website/src/pages/ru/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ru/subgraphs/querying/managing-api-keys.mdx @@ -1,34 +1,34 @@ --- -title: Управление вашими ключами API +title: Managing API keys --- ## Обзор -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. -### Create and Manage API Keys +### Создание и управление API ключами -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. 
+Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. -The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. +В таблице "API keys" перечислены существующие ключи API; Вы можете управлять ими или удалять их. Для каждого ключа Вы можете увидеть его статус, стоимость за текущий период, лимит расходов за текущий период и общее количество запросов. -You can click the "three dots" menu to the right of a given API key to: +Вы можете нажать на меню "три точки" справа от заданного ключа API, чтобы: -- Rename API key -- Regenerate API key -- Delete API key -- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +- Переименовать API ключ +- Повторно сгенерировать API ключ +- Удалить API ключ +- Управлять лимитом расходов: это необязательный лимит ежемесячных расходов для данного API ключа в USD. Этот лимит действует на каждый расчетный период (календарный месяц). -### API Key Details +### Детали API ключа -You can click on an individual API key to view the Details page: +Вы можете нажать на отдельный ключ API, чтобы перейти на страницу с подробной информацией: -1. Under the **Overview** section, you can: +1. В разделе **Обзор** можно: - Отредактируйте свое ключевое имя - Регенерировать ключи API - Просмотр текущего использования ключа API со статистикой: - Количество запросов - Количество потраченных GRT -2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: +2. В разделе **Безопасность** Вы можете выбрать параметры безопасности в зависимости от необходимого Вам уровня контроля. 
А именно: - Просматривайте доменные имена, авторизованные для использования вашего API-ключа, и управляйте ими - - Назначьте субграфы, которые могут быть запрошены с помощью вашего API-ключа + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ru/subgraphs/querying/python.mdx b/website/src/pages/ru/subgraphs/querying/python.mdx index b450ba9276de..f2e0b317b482 100644 --- a/website/src/pages/ru/subgraphs/querying/python.mdx +++ b/website/src/pages/ru/subgraphs/querying/python.mdx @@ -1,11 +1,11 @@ --- -title: Query The Graph with Python and Subgrounds +title: Запросы к The Graph с использованием Python и Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! -Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. +Subgrounds предлагает простой Pythonic API для создания GraphQL-запросов, автоматизирует утомительные рабочие процессы, такие как пагинация, и предоставляет расширенные возможности для опытных пользователей через управляемые преобразования схем. 
 ## Getting Started

@@ -13,18 +13,18 @@

 Subgrounds requires Python 3.10 or higher and is available on [pypi](https://pyp

 ```bash
 pip install --upgrade subgrounds
-# or
+# or
 python -m pip install --upgrade subgrounds
 ```

-Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).

 ```python
 from subgrounds import Subgrounds

 sg = Subgrounds()

-# Load the subgraph
+# Load the Subgraph
 aave_v2 = sg.load_subgraph(
     "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")

@@ -54,4 +54,4 @@ Since subgrounds has a large feature set to explore, here are some helpful start

 - [Concurrent Queries](https://docs.playgrounds.network/subgrounds/getting_started/async/)
   - Learn how to level up your queries by parallelizing them.
 - [Exporting Data to CSVs](https://docs.playgrounds.network/subgrounds/faq/exporting/)
-  - A quick article on how to seamlessly save your data as CSVs for further analysis.
+  - A quick article on how to seamlessly save your data as CSVs for further analysis.
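The Subgrounds call above compiles down to an ordinary GraphQL request. A minimal sketch of the equivalent raw payload, using only the standard library (the `markets` field names mirror the Messari schema used in the example; treat them as an assumption outside that schema):

```python
import json

# GraphQL query equivalent to the Subgrounds example above: the top 5
# Aave v2 markets ordered by TVL, selecting each market's name and its
# TVL in USD. Field names follow the example's Messari schema.
query = """
{
  markets(first: 5, orderBy: totalValueLockedUSD, orderDirection: desc) {
    name
    totalValueLockedUSD
  }
}
"""

# A GraphQL endpoint expects a JSON body with a "query" key; Subgrounds
# builds and sends this payload (plus pagination handling) for you.
payload = json.dumps({"query": query})
print(payload[:30])
```

Subgrounds hides this plumbing, along with pagination and flattening the response into a DataFrame, which is the main reason to prefer it over hand-rolled requests.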
diff --git a/website/src/pages/ru/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ru/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 103e470e14da..b697d9cfd5e6 100644
--- a/website/src/pages/ru/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/ru/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -1,27 +1,27 @@
 ---
-title: Subgraph ID vs Deployment ID
+title: Subgraph ID vs Deployment ID
 ---

-A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID.
+A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID.

-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph.
+When querying a subgraph, either ID can be used, though the Deployment ID is generally recommended because it lets you specify an exact version of the subgraph.

-Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png)
+Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png)

-## Deployment ID
+## Deployment ID

-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, simply update the manifest file, for example by modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).

-When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published.
+When queries are made using a subgraph's Deployment ID, you are specifying an exact version of that subgraph to query. Querying a specific version by its Deployment ID gives a more controlled and robust setup, since you have full control over which subgraph version is queried. However, it also means the query code must be updated manually every time a new version of the subgraph is published.

-Example endpoint that uses Deployment ID:
+Example endpoint that uses Deployment ID:

 `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`

-## Subgraph ID
+## Subgraph ID

-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of the subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.

-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that queries made using the Subgraph ID may be answered by an older version of the subgraph, because a newly published version needs time to sync. New versions can also introduce breaking schema changes.

-Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`

diff --git a/website/src/pages/ru/subgraphs/quick-start.mdx b/website/src/pages/ru/subgraphs/quick-start.mdx
index a8113aa22586..c676f1cf698d 100644
--- a/website/src/pages/ru/subgraphs/quick-start.mdx
+++ b/website/src/pages/ru/subgraphs/quick-start.mdx
@@ -2,22 +2,22 @@
 title: Quick Start
 ---

-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish, and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
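The two gateway endpoint shapes shown above differ only in one path segment (`deployments/id` vs `subgraphs/id`). A small helper makes the trade-off concrete; the host follows the examples above, while the function itself is ours, not part of any official SDK:

```python
GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api"

def endpoint(api_key: str, ident: str, by_deployment: bool) -> str:
    """Build a gateway query URL from a Deployment ID or a Subgraph ID.

    A Deployment ID pins one immutable subgraph version; a Subgraph ID
    follows the latest published version (with the syncing caveats noted
    above).
    """
    kind = "deployments/id" if by_deployment else "subgraphs/id"
    return f"{GATEWAY}/{api_key}/{kind}/{ident}"

print(endpoint("[api-key]", "QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB", by_deployment=True))
```

Switching `by_deployment` is the only change needed to move between pinned-version and latest-version querying.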
-## Prerequisites
+## Prerequisites

 - A crypto wallet
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- A smart contract address on a [supported network](/supported-networks/)
+- [Node.js](https://nodejs.org/) installed
+- A package manager of your choice (`npm`, `yarn`, or `pnpm`)

-## How to Build a Subgraph
+## How to Build a Subgraph

-### 1. Create a subgraph in Subgraph Studio
+### 1. Create a Subgraph in Subgraph Studio

 Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.

-Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys.
+Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.

 Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".

@@ -37,56 +37,56 @@
 npm install -g @graphprotocol/graph-cli@latest
 yarn global add @graphprotocol/graph-cli
 ```

-### 3. Initialize your subgraph
+### 3. Initialize your subgraph

-> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/).

-The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events.
+The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events.
-The following command initializes your subgraph from an existing contract:
+The following command initializes your subgraph from an existing contract:

 ```sh
 graph init
 ```

-If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
+If your contract is verified on the respective block explorer where it is deployed (such as [Etherscan](https://etherscan.io/)), the ABI will be created automatically in the CLI.

-When you initialize your subgraph, the CLI will ask you for the following information:
+When you initialize your subgraph, the CLI will ask you for the following information:

-- **Protocol**: Choose the protocol your subgraph will be indexing data from.
-- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph.
-- **Directory**: Choose a directory to create your subgraph in.
-- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from.
-- **Contract address**: Locate the smart contract address you’d like to query data from.
-- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
-- **Contract Name**: Input the name of your contract.
-- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event.
-- **Add another contract** (optional): You can add another contract.
+- **Protocol**: Choose the protocol your subgraph will be indexing data from.
+- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph.
+- **Directory**: Choose a directory to create your subgraph in.
+- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from.
+- **Contract address**: Locate the smart contract address you’d like to query data from.
+- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
+- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Contract Name**: Input the name of your contract.
+- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event.
+- **Add another contract** (optional): You can add another contract.

-Here is a screenshot of what to expect when initializing your subgraph:
+Here is a screenshot of what to expect when initializing your subgraph:

-![Subgraph command](/img/CLI-Example.png)
+![Subgraph command](/img/CLI-Example.png)

-### 4. Edit your subgraph
+### 4. Edit your subgraph

-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.

-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the subgraph, you will mainly work with three files:

-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - the GraphQL schema defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
 - AssemblyScript Mappings (`mapping.ts`) - the code that translates data from your data sources into the entities defined in the schema.

-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).

 ### 5. Deploy your subgraph

-> Remember, deploying is not the same as publishing.
+> Remember, deploying is not the same as publishing.

-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage, and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
-Once your subgraph is written, run the following commands:
+Once your subgraph is written, run the following commands:

 ````
 ```sh
@@ -94,7 +94,7 @@
 graph codegen && graph build
 ```
 ````

-Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio.
+Authenticate and deploy your subgraph. The deploy key can be found on your subgraph's page in Subgraph Studio.

 ![Deploy key](/img/subgraph-studio-deploy-key.jpg)

@@ -107,39 +107,39 @@

 ```sh
 graph deploy
 ```
 ````

-The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
+The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.

 ### 6. Review your subgraph

-If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
+If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:

 - Run a sample query.
-- Analyze your subgraph in the dashboard to check information.
-- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:
+- Analyze your subgraph in the dashboard to check information.
+- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:

 ![Subgraph logs](/img/subgraph-logs-image.png)

-### Publish your subgraph to The Graph Network
+### 7. Publish your subgraph to The Graph Network

-When your subgraph is ready for a production environment, you can publish it to the decentralized network.
Publishing is an onchain action that does the following:
+When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:

-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- It makes your subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.

-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The more GRT you and others curate on your subgraph, the more Indexers will be incentivized to index it, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.

 #### Publishing with Subgraph Studio

-To publish your subgraph, click the "Publish" button in the dashboard.
+To publish your subgraph, click the "Publish" button in the dashboard.

-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png)
+![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png)

-Select the network to which you would like to publish your subgraph.
+Select the network to which you would like to publish your subgraph.
 #### Publishing from the CLI

-As of version 0.73.0, you can also publish your subgraph with the Graph CLI.
+As of version 0.73.0, you can also publish your subgraph with the Graph CLI.

 Open the `graph-cli`.

@@ -150,7 +150,7 @@ When your subgraph is ready for a production environment, you can publish it to

 graph codegen && graph build
 ```

-Then,
+Then,

 ```sh
 graph publish
 ```

@@ -161,28 +161,28 @@ graph publish

 ![cli-ui](/img/cli-ui.png)

-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).

 #### Adding signal to your subgraph

-1. To attract Indexers to query your subgraph, you should add GRT curation signal to it.
+1. To attract Indexers to query your subgraph, you should add GRT curation signal to it.

-   - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph.
+   - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph.

2. If eligible for indexing rewards, Indexers receive GRT rewards based on the amount of signal added.

-   - It is recommended to add at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks.
+   - It is recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks.

-To learn more about curation, read [Curating](/resources/roles/curating/).
+To learn more about curation, read [Curating](/resources/roles/curating/).
-To save on gas costs, you can curate your subgraph in the same transaction you publish it in by selecting this option:
+To save on gas costs, you can curate your subgraph in the same transaction you publish it in by selecting this option:

-![Subgraph publish](/img/studio-publish-modal.png)
+![Subgraph publish](/img/studio-publish-modal.png)

 ### 8. Query your subgraph

-You now have access to 100,000 free queries per month with your subgraph on The Graph Network!
+You now have access to 100,000 free queries per month with your subgraph on The Graph Network!

-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
+You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.

-For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
+For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
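A first query against the Query URL needs nothing beyond the standard library. The sketch below only builds the request; the URL is a placeholder for the one shown under the Query button, and the `_meta` query asks for the latest indexed block:

```python
import json
import urllib.request

# Placeholder: replace with the Query URL from your subgraph's page.
QUERY_URL = "https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/[subgraph-id]"

# Smallest useful query: ask which block the subgraph has indexed up to.
query = "{ _meta { block { number } } }"

req = urllib.request.Request(
    QUERY_URL,
    data=json.dumps({"query": query}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would send it and return the JSON response;
# it is deliberately not called here, since the URL above is a placeholder.
print(req.get_method(), req.full_url)
```

Any GraphQL client works the same way: POST a JSON body with a `query` key to the Query URL.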
diff --git a/website/src/pages/ru/substreams/_meta-titles.json b/website/src/pages/ru/substreams/_meta-titles.json
index 6262ad528c3a..b4353cede681 100644
--- a/website/src/pages/ru/substreams/_meta-titles.json
+++ b/website/src/pages/ru/substreams/_meta-titles.json
@@ -1,3 +1,3 @@
 {
-  "developing": "Developing"
+  "developing": "Developing"
 }
diff --git a/website/src/pages/ru/substreams/developing/dev-container.mdx b/website/src/pages/ru/substreams/developing/dev-container.mdx
index bd4acf16eec7..71d84bce5eb8 100644
--- a/website/src/pages/ru/substreams/developing/dev-container.mdx
+++ b/website/src/pages/ru/substreams/developing/dev-container.mdx
@@ -1,48 +1,48 @@
 ---
-title: Substreams Dev Container
-sidebarTitle: Dev Container
+title: Substreams Dev Container
+sidebarTitle: Dev Container
 ---

-Develop your first project with Substreams Dev Container.
+Develop your first project with the Substreams Dev Container.

-## What is a Dev Container?
+## What is a Dev Container?

-It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file).
+It's a tool to help you build your first project. You can either run it remotely through GitHub Codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file).

-Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling.
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling.

-## Prerequisites
+## Prerequisites

-- Ensure Docker and VS Code are up-to-date.
+- Ensure Docker and VS Code are up to date.

-## Navigating the Dev Container
+## Navigating the Dev Container

-In the Dev Container, you can either build or import your own `substreams.yaml` and associate modules within the minimal path or opt for the automatically generated Substreams paths. Then, when you run the `Substreams Build` it will generate the Protobuf files.
+In the Dev Container, you can either build or import your own `substreams.yaml` and associated modules within the minimal path, or opt for the automatically generated Substreams paths. Then, when you run `Substreams Build`, it will generate the Protobuf files.

-### Options
+### Options

-- **Minimal**: Starts you with the raw block `.proto` and requires development. This path is intended for experienced users.
-- **Non-Minimal**: Extracts filtered data using network-specific caches and Protobufs taken from corresponding foundational modules (maintained by the StreamingFast team). This path generates a working Substreams out of the box.
+- **Minimal**: Starts you with the raw block `.proto` and requires development. This path is intended for experienced users.
+- **Non-Minimal**: Extracts filtered data using network-specific caches and Protobufs taken from the corresponding foundational modules (maintained by the StreamingFast team). This path generates a working Substreams out of the box.

-To share your work with the broader community, publish your `.spkg` to [Substreams registry](https://substreams.dev/) using:
+To share your work with the broader community, publish your `.spkg` to the [Substreams registry](https://substreams.dev/) using:

 - `substreams registry login`
 - `substreams registry publish`

-> Note: If you run into any problems within the Dev Container, use the `help` command to access trouble shooting tools.
+> Note: If you run into any problems within the Dev Container, use the `help` command to access troubleshooting tools.

-## Building a Sink for Your Project
+## Building a Sink for Your Project

-You can configure your project to query data either through a Subgraph or directly from an SQL database:
+You can configure your project to query data either through a subgraph or directly from an SQL database:

-- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
-- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).
+- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with basic `schema.graphql` and `mappings.ts` files. You can customize these to define entities based on the data extracted by Substreams. For more configuration, see the [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
+- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).

-## Deployment Options
+## Deployment Options

-To deploy a Subgraph, you can either run the `graph-node` locally using the `deploy-local` command or deploy to Subgraph Studio by using the `deploy` command found in the `package.json` file.
+To deploy a subgraph, you can either run `graph-node` locally using the `deploy-local` command, or deploy to Subgraph Studio using the `deploy` command found in the `package.json` file.

-## Common Errors
+## Common Errors

-- When running locally, make sure to verify that all Docker containers are healthy by running the `dev-status` command.
-- If you put the wrong start-block while generating your project, navigate to the `substreams.yaml` to change the block number, then re-run `substreams build`.
+- When running locally, verify that all Docker containers are healthy by running the `dev-status` command.
+- If you set the wrong start block while generating your project, navigate to `substreams.yaml` to change the block number, then re-run `substreams build`.
diff --git a/website/src/pages/ru/substreams/developing/sinks.mdx b/website/src/pages/ru/substreams/developing/sinks.mdx
index f1c5360f39a9..c0981c39ae75 100644
--- a/website/src/pages/ru/substreams/developing/sinks.mdx
+++ b/website/src/pages/ru/substreams/developing/sinks.mdx
@@ -1,32 +1,32 @@
 ---
-title: Official Sinks
+title: Connect Your Substreams
 ---

-Choose a sink that meets your project's needs.
+Choose a sink that meets your project's needs.

 ## Overview

-Once you find a package that fits your needs, you can choose how you want to consume the data.
+Once you find a package that fits your needs, you can choose how you want to consume the data.

-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph.
+Sinks are integrations that let you send the extracted data to different destinations, such as a SQL database, a file, or a subgraph.

 ## Sinks

-> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e.
active support is provided), but other sinks are community-driven and support can't be guaranteed.
+> Note: Some sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), while other sinks are community-driven and support cannot be guaranteed.

-- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
-- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network.
-- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
-- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
-- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
+- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database.
+- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network.
+- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application.
+- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic.
+- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community-maintained sinks.

-> Important: If you’d like your sink (e.g., SQL or PubSub) hosted for you, reach out to the StreamingFast team [here](mailto:sales@streamingfast.io).
+> Important: If you’d like your sink (e.g., SQL or PubSub) hosted for you, reach out to the StreamingFast team [here](mailto:sales@streamingfast.io).
-## Navigating Sink Repos
+## Navigating Sink Repos

-### Official
+### Official

-| Name | Support | Maintainer | Source Code |
+| Name | Support | Maintainer | Source Code |
 | --- | --- | --- | --- |
 | SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
 | Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |

@@ -38,14 +38,14 @@

 | CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
 | PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |

-### Community
+### Community

-| Name | Support | Maintainer | Source Code |
+| Name | Support | Maintainer | Source Code |
 | --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |

-- O = Official Support (by one of the main Substreams providers)
-- C = Community Support
+- O = Official Support (by one of the main Substreams providers)
+- C = Community Support
diff --git a/website/src/pages/ru/substreams/developing/solana/account-changes.mdx b/website/src/pages/ru/substreams/developing/solana/account-changes.mdx index 7f089022b1f0..6aea139b7ae7 100644 --- a/website/src/pages/ru/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/ru/substreams/developing/solana/account-changes.mdx @@ -1,57 +1,57 @@ --- -title: Solana Account Changes +title: Изменения в учетной записи Solana sidebarTitle: Account Changes --- -Learn how to consume Solana account change data using Substreams. +Узнайте, как использовать данные изменений учетных записей Solana с помощью субпотоков. -## Introduction +## Введение -This guide walks you through the process of setting up your environment, configuring your first Substreams stream, and consuming account changes efficiently. By the end of this guide, you will have a working Substreams feed that allows you to track real-time account changes on the Solana blockchain, as well as historical account change data. +Это руководство проведет Вас через процесс настройки вашей среды, конфигурирования Ваших первых субпотоков и эффективного потребления изменений учетных записей. К концу этого руководства у Вас будут рабочие субпотоки, которые позволят отслеживать изменения учетных записей в реальном времени на блокчейне Solana, а также получать исторические данные об изменениях учетных записей. -> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. +> Примечание: история изменений учетных записей Solana начинается с 2025 года, блок 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. 
Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +Для каждого блока учетных записей Solana в субпотоках фиксируется только последнее обновление каждой учетной записи. См. [справочник Protobuf](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). Если учетная запись была удалена, в payload будет указано `deleted == True`. Кроме того, события с низким приоритетом, такие как изменения с участием специального владельца "Vote11111111…" или изменения, не влияющие на данные учетной записи (например, изменения лампортов), были опущены. -> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. +> ПРИМЕЧАНИЕ: чтобы проверить задержку субпотоков для аккаунтов Solana, измеряемую как отклонение от головного блока, установите [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) и выполните команду `substreams run solana-common blocks_without_votes -s -1 -o clock`. ## Начало работы -### Prerequisites +### Предварительные требования -Before you begin, ensure that you have the following: +Прежде чем начать, убедитесь, что у Вас есть следующее: -1. [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installed. -2. A [Substreams key](https://docs.substreams.dev/reference-material/substreams-cli/authentication) for access to the Solana Account Change data. -3. Basic knowledge of [how to use](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) the command line interface (CLI). +1. Установленный [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). +2.
[Ключ субпотоков](https://docs.substreams.dev/reference-material/substreams-cli/authentication) для доступа к данным об изменениях учетных записей Solana. +3. Базовые знания о том, [как использовать](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) интерфейс командной строки (CLI). -### Step 1: Set Up a Connection to Solana Account Change Substreams +### Шаг 1: Настройка подключения к субпотокам изменений аккаунтов Solana -Now that you have Substreams CLI installed, you can set up a connection to the Solana Account Change Substreams feed. +Теперь, когда у Вас установлен CLI субпотоков, Вы можете настроить подключение к потоку изменений аккаунтов Solana в субпотоках. -- Using the [Solana Accounts Foundational Module](https://substreams.dev/packages/solana-accounts-foundational/latest), you can choose to stream data directly or use the GUI for a more visual experience. The following `gui` example filters for Honey Token account data. +- Используя [основной модуль аккаунтов Solana](https://substreams.dev/packages/solana-accounts-foundational/latest), Вы можете либо транслировать данные напрямую, либо использовать графический интерфейс (GUI) для более наглядного взаимодействия. В следующем примере `gui` выполняется фильтрация данных аккаунта токена Honey. ```bash substreams gui solana-accounts-foundational filtered_accounts -t +10 -p filtered_accounts="owner:TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA || account:4vMsoUT2BWatFweudnQM1xedRLfJgJ7hswhcpz4xgBTy" ``` -- This command will stream account changes directly to your terminal. +- Эта команда будет транслировать изменения аккаунта непосредственно в Ваш терминал. ```bash substreams run solana-accounts-foundational filtered_accounts -s -1 -o clock ``` -The Foundational Module has support for filtering on specific accounts and/or owners. You can adjust the query based on your needs. +Основной модуль поддерживает фильтрацию по конкретным аккаунтам и/или владельцам.
Вы можете настроить запрос в соответствии с Вашими потребностями. -### Step 2: Sink the Substreams +### Шаг 2: Подключение субпотоков -Consume the account stream [directly in your application](https://docs.substreams.dev/how-to-guides/sinks/stream) using a callback or make it queryable by using the [SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +Используйте поток данных аккаунтов [напрямую в вашем приложении](https://docs.substreams.dev/how-to-guides/sinks/stream), используя callback-функцию, или сделайте его доступным для запросов, используя [SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). -### Step 3: Setting up a Reconnection Policy +### Шаг 3: Настройка политики переподключения -[Cursor Management](https://docs.substreams.dev/reference-material/reliability-guarantees) ensures seamless continuity and retraceability by allowing you to resume from the last consumed block if the connection is interrupted. This functionality prevents data loss and maintains a persistent stream. +[Управление курсором](https://docs.substreams.dev/reference-material/reliability-guarantees) обеспечивает бесперебойную непрерывность и возможность возврата, позволяя возобновить обработку с последнего потребленного блока в случае разрыва соединения. Эта функция предотвращает потерю данных и поддерживает стабильный поток. 
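The cursor-management paragraph above can be sketched in code. A minimal, hypothetical example (these function names are illustrative, not part of the official Substreams SDK): persist the last cursor received from the stream, then read it back on restart and pass it in the next stream request so no blocks are lost or duplicated.

```rust
use std::fs;
use std::path::Path;

/// Persist the last cursor received from the Substreams stream.
/// Written via a temp file + rename so a crash mid-write never
/// leaves a truncated cursor behind.
pub fn save_cursor(path: &Path, cursor: &str) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    fs::write(&tmp, cursor)?;
    fs::rename(&tmp, path)
}

/// On reconnect, load the stored cursor (if any) to resume the
/// stream from the last consumed block instead of from scratch.
pub fn load_cursor(path: &Path) -> Option<String> {
    fs::read_to_string(path).ok().filter(|s| !s.is_empty())
}
```

In a real sink you would call `save_cursor` from the block handler after each successfully processed block, and feed `load_cursor`'s result into the stream request on startup.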
-When creating or using a sink, the user's primary responsibility is to provide implementations of BlockScopedDataHandler and a BlockUndoSignalHandler implementation(s) which has the following interface: +При создании или использовании sink основной задачей пользователя является предоставление реализаций BlockScopedDataHandler и BlockUndoSignalHandler, которые должны иметь следующий интерфейс: ```go import ( diff --git a/website/src/pages/ru/substreams/developing/solana/transactions.mdx b/website/src/pages/ru/substreams/developing/solana/transactions.mdx index dbd16d487158..242cccdfb006 100644 --- a/website/src/pages/ru/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ru/substreams/developing/solana/transactions.mdx @@ -1,61 +1,61 @@ --- -title: Solana Transactions -sidebarTitle: Transactions +title: Транзакции Solana +sidebarTitle: Транзакции --- -Learn how to initialize a Solana-based Substreams project within the Dev Container. +Узнайте, как инициализировать проект Substreams на основе Solana в рамках Dev Container. -> Note: This guide excludes [Account Changes](/substreams/developing/solana/account-changes/). +> Примечание: Это руководство не охватывает [Изменения аккаунтов](/substreams/developing/solana/account-changes/). -## Options +## Варианты -If you prefer to begin locally within your terminal rather than through the Dev Container (VS Code required), refer to the [Substreams CLI installation guide](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). +Если Вы предпочитаете начать работу локально в Вашем терминале, а не через Dev Container (требуется VS Code), обратитесь к [руководству по установке Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). -## Step 1: Initialize Your Solana Substreams Project +## Шаг 1: Инициализация Вашего проекта субпотоков Solana -1.
Open the [Dev Container](https://github.com/streamingfast/substreams-starter) and follow the on-screen steps to initialize your project. +1. Откройте [Dev Container](https://github.com/streamingfast/substreams-starter) и следуйте шагам на экране, чтобы инициализировать Ваш проект. -2. Running `substreams init` will give you the option to choose between two Solana project options. Select the best option for your project: - - **sol-minimal**: This creates a simple Substreams that extracts raw Solana block data and generates corresponding Rust code. This path will start you with the full raw block, and you can navigate to the `substreams.yaml` (the manifest) to modify the input. - - **sol-transactions**: This creates a Substreams that filters Solana transactions based on one or more Program IDs and/or Account IDs, using the cached [Solana Foundational Module](https://substreams.dev/streamingfast/solana-common/v0.3.0). - - **sol-anchor-beta**: This creates a Substreams that decodes instructions and events with an Anchor IDL. If an IDL isn’t available (reference [Anchor CLI](https://www.anchor-lang.com/docs/cli)), then you’ll need to provide it yourself. +2. После выполнения команды `substreams init` Вам будет предложено выбрать один из вариантов проекта для Solana. Выберите наиболее подходящий вариант для Вашего проекта: + - **sol-minimal**: Этот вариант создаёт простые субпотоки, которые извлекают сырые данные блоков Solana и генерируют соответствующий код на Rust. Этот путь начнёт с полного сырого блока, и Вы сможете перейти к файлу `substreams.yaml` (манифест), чтобы изменить входные данные. + - **sol-transactions**: Этот вариант создаёт субпотоки, которые фильтруют транзакции Solana на основе одного или нескольких Program ID и/или Account ID, используя кешированный [Solana Foundational Module](https://substreams.dev/streamingfast/solana-common/v0.3.0).
+ - **sol-anchor-beta**: Этот вариант создаёт субпотоки, которые декодируют инструкции и события с использованием Anchor IDL. Если IDL недоступен (смотрите [Anchor CLI](https://www.anchor-lang.com/docs/cli)), Вам нужно будет предоставить его самостоятельно. -The modules within Solana Common do not include voting transactions. To gain a 75% reduction in data processing size and costs, delay your stream by over 1000 blocks from the head. This can be done using the [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) function in Rust. +Модули в Solana Common не включают транзакции голосования. Чтобы уменьшить размер и затраты на обработку данных на 75%, задержите Ваш поток более чем на 1000 блоков от головного блока сети. Это можно сделать с помощью функции [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) в Rust. -To access voting transactions, use the full Solana block, `sf.solana.type.v1.Block`, as input. +Чтобы получить доступ к транзакциям голосования, используйте полный блок Solana, `sf.solana.type.v1.Block`, в качестве входных данных. -## Step 2: Visualize the Data +## Шаг 2: Визуализация данных -1. Run `substreams auth` to create your [account](https://thegraph.market/) and generate an authentication token (JWT), then pass this token back as input. +1. Выполните команду `substreams auth`, чтобы создать Ваш [аккаунт](https://thegraph.market/) и сгенерировать токен аутентификации (JWT), затем передайте этот токен обратно в качестве входных данных. -2. Now you can freely use the `substreams gui` to visualize and iterate on your extracted data. +2. Теперь Вы можете свободно использовать команду `substreams gui`, чтобы визуализировать и итеративно работать с Вашими извлечёнными данными. -## Step 2.5: (Optionally) Transform the Data +## Шаг 2.5: (По желанию) Преобразование данных -Within the generated directories, modify your Substreams modules to include additional filters, aggregations, and transformations, then update the manifest accordingly.
+В сгенерированных директориях отредактируйте Ваши модули субпотоков, чтобы добавить дополнительные фильтры, агрегации и преобразования, а затем обновите манифест соответственно. -## Step 3: Load the Data +## Шаг 3: Загрузка данных -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +Чтобы сделать Ваши субпотоки доступными для запросов (в отличие от [прямой трансляции](https://docs.substreams.dev/how-to-guides/sinks/stream)), Вы можете автоматически сгенерировать [субграф на базе субпотоков](/sps/introduction/) или sink в виде SQL-базы данных. -### Subgraph +### Субграф -1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. -3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. +1. Выполните команду `substreams codegen subgraph`, чтобы инициализировать sink, создавая необходимые файлы и определения функций. +2. Создайте Ваши [мэппинги субграфа](/sps/triggers/) в файле `mappings.ts` и связанные объекты в файле `schema.graphql`. +3. Создайте и разверните локально или в [Subgraph Studio](https://thegraph.com/studio-pricing/), выполнив команду `deploy-studio`. ### SQL -1. Run `substreams codegen sql` and choose from either ClickHouse or Postgres to initialize the sink, producing the necessary files. -2. Run `substreams build` build the [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) sink. -3. Run `substreams-sink-sql` to sink the data into your selected SQL DB. +1. Выполните команду `substreams codegen sql` и выберите либо ClickHouse, либо Postgres, чтобы инициализировать sink и создать необходимые файлы. +2.
Выполните команду `substreams build`, чтобы собрать sink [SQL субпотока](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +3. Выполните команду `substreams-sink-sql`, чтобы записать данные в выбранную Вами базу данных SQL. -> Note: Run `help` to better navigate the development environment and check the health of containers. +> Примечание: Выполните команду `help`, чтобы лучше ориентироваться в среде разработки и проверить состояние контейнеров. ## Дополнительные ресурсы -You may find these additional resources helpful for developing your first Solana application. +Вам могут быть полезны следующие дополнительные ресурсы для разработки Вашего первого приложения на Solana. -- The [Dev Container Reference](/substreams/developing/dev-container/) helps you navigate the container and its common errors. -- The [CLI reference](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) lets you explore all the tools available in the Substreams CLI. -- The [Components Reference](https://docs.substreams.dev/reference-material/substreams-components/packages) dives deeper into navigating the `substreams.yaml`. +- [Справочник по Dev Container](/substreams/developing/dev-container/) поможет Вам ориентироваться в контейнере и решать распространённые ошибки. +- [Справочник по CLI](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) позволяет Вам изучить все инструменты, доступные в CLI субпотоков. +- [Справочник по компонентам](https://docs.substreams.dev/reference-material/substreams-components/packages) более подробно объясняет, как работать с файлом `substreams.yaml`. 
diff --git a/website/src/pages/ru/substreams/introduction.mdx b/website/src/pages/ru/substreams/introduction.mdx index 320c8c262175..2b3d0a89d87b 100644 --- a/website/src/pages/ru/substreams/introduction.mdx +++ b/website/src/pages/ru/substreams/introduction.mdx @@ -1,26 +1,26 @@ --- -title: Introduction to Substreams -sidebarTitle: Introduction +title: Введение в Субпотоки +sidebarTitle: Введение --- -![Substreams Logo](/img/substreams-logo.png) +![Логотип Субпотоков](/img/substreams-logo.png) To start coding right away, check out the [Substreams Quick Start](/substreams/quick-start/). ## Обзор -Substreams is a powerful parallel blockchain indexing technology designed to enhance performance and scalability within The Graph Network. +Субпотоки — это мощная технология параллельного индексирования блокчейна, разработанная для повышения производительности и масштабируемости в сети The Graph. -## Substreams Benefits +## Преимущества Субпотоков -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. -- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. -- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. -- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. +- **Ускоренное индексирование**: Повышает скорость индексирования субграфов с помощью параллельного движка для более быстрого извлечения и обработки данных. +- **Мультичейн-поддержка**: Расширяет возможности индексирования за пределы сетей на основе EVM, поддерживая такие экосистемы, как Solana, Injective, Starknet и Vara. 
+- **Усовершенствованная модель данных**: Обеспечивает доступ к детализированным данным, таким как данные уровня `trace` в EVM или изменения аккаунтов в Solana, с эффективным управлением форками и разрывами соединения. +- **Поддержка нескольких хранилищ**: Для Субграфа, базы данных Postgres, Clickhouse и Mongo. -## How Substreams Works in 4 Steps +## Как работают Субпотоки: 4 этапа -1. You write a Rust program, which defines the transformations that you want to apply to the blockchain data. For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash). +1. Вы пишете программу на Rust, которая определяет преобразования, применяемые к данным блокчейна. Например, следующая функция на Rust извлекает соответствующую информацию из блока Ethereum (номер, хеш и хеш родительского блока). ```rust fn get_my_block(blk: Block) -> Result { @@ -34,12 +34,12 @@ fn get_my_block(blk: Block) -> Result { } ``` -2. You wrap up your Rust program into a WASM module just by running a single CLI command. +2. Вы упаковываете свою программу на Rust в WASM-модуль с помощью одной команды в CLI. -3. The WASM container is sent to a Substreams endpoint for execution. The Substreams provider feeds the WASM container with the blockchain data and the transformations are applied. +3. WASM-контейнер отправляется на конечную точку Субпотоков для выполнения. Провайдер Субпотоков передает в WASM-контейнер данные блокчейна, и к ним применяются преобразования. -4. You select a [sink](https://docs.substreams.dev/how-to-guides/sinks), a place where you want to send the transformed data (such as a SQL database or a Subgraph). +4. Вы выбираете [хранилище](https://docs.substreams.dev/how-to-guides/sinks), куда хотите отправить преобразованные данные (например, SQL-базу данных или Субграф). 
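The four-step workflow above starts from a Rust transformation like the truncated `get_my_block` shown in the diff context. A self-contained sketch of the same idea, using simplified stand-in structs (the real `Block` type comes from the generated Substreams protobuf bindings, so all types here are illustrative):

```rust
/// Simplified stand-in for the generated protobuf block type.
pub struct Block {
    pub number: u64,
    pub hash: Vec<u8>,
    pub parent_hash: Vec<u8>,
}

/// The reduced output the module emits downstream.
pub struct MyBlock {
    pub number: u64,
    pub hash: String,
    pub parent_hash: String,
}

/// Keep only the fields we care about (number, hash, parent hash),
/// hex-encoding the raw hash bytes for readability.
pub fn get_my_block(blk: Block) -> MyBlock {
    let hex = |b: &[u8]| b.iter().map(|x| format!("{:02x}", x)).collect::<String>();
    MyBlock {
        number: blk.number,
        hash: hex(&blk.hash),
        parent_hash: hex(&blk.parent_hash),
    }
}
```

In an actual module this function would be annotated as a Substreams map handler and return a protobuf message rather than a plain struct.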
## Дополнительные ресурсы -All Substreams developer documentation is maintained by the StreamingFast core development team on the [Substreams registry](https://docs.substreams.dev). +Вся документация для разработчиков Субпотоков поддерживается командой разработчиков ядра StreamingFast в [реестре Субпотоков](https://docs.substreams.dev). diff --git a/website/src/pages/ru/substreams/publishing.mdx b/website/src/pages/ru/substreams/publishing.mdx index 42808170179f..d19904d26e9e 100644 --- a/website/src/pages/ru/substreams/publishing.mdx +++ b/website/src/pages/ru/substreams/publishing.mdx @@ -1,53 +1,53 @@ --- -title: Publishing a Substreams Package -sidebarTitle: Publishing +title: Публикация пакета Субпотоков +sidebarTitle: Публикация --- -Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). +Узнайте, как опубликовать пакет Субпотоков в [реестре Субпотоков](https://substreams.dev). ## Обзор -### What is a package? +### Что такое пакет? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +Пакет Субпотоков — это предварительно скомпилированный бинарный файл, который определяет конкретные данные, извлекаемые из блокчейна, аналогично файлу `mapping.ts` в традиционных субграфах. -## Publish a Package +## Публикация пакета -### Prerequisites +### Предварительные требования -- You must have the Substreams CLI installed. -- You must have a Substreams package (`.spkg`) that you want to publish. +- У Вас должен быть установлен CLI Субпотоков. +- У Вас должен быть пакет Субпотоков (`.spkg`), который Вы хотите опубликовать. -### Step 1: Run the `substreams publish` Command +### Шаг 1: Запустите команду `substreams publish` -1. In a command-line terminal, run `substreams publish .spkg`. +1. В терминале командной строки выполните `substreams publish .spkg`. -2. 
If you do not have a token set in your computer, navigate to `https://substreams.dev/me`. +2. Если у Вас не установлен токен на компьютере, перейдите на `https://substreams.dev/me`. -![get token](/img/1_get-token.png) +![получить токен](/img/1_get-token.png) -### Step 2: Get a Token in the Substreams Registry +### Шаг 2: Получите токен в реестре Субпотоков -1. In the Substreams Registry, log in with your GitHub account. +1. Войдите в реестр Субпотоков с использованием своей учетной записи GitHub. -2. Create a new token and copy it in a safe location. +2. Создайте новый токен и сохраните его в надежном месте. -![new token](/img/2_new_token.png) +![новый токен](/img/2_new_token.png) -### Step 3: Authenticate in the Substreams CLI +### Шаг 3: Аутентифицируйтесь в CLI Субпотоков -1. Back in the Substreams CLI, paste the previously generated token. +1. Вернитесь в CLI Субпотоков и вставьте ранее сгенерированный токен. -![paste token](/img/3_paste_token.png) +![вставить токен](/img/3_paste_token.png) -2. Lastly, confirm that you want to publish the package. +2. В заключение, подтвердите, что хотите опубликовать пакет. -![confirm](/img/4_confirm.png) +![подтвердить](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +Вот и все! Вы успешно опубликовали пакет в реестре Субпотоков. -![success](/img/5_success.png) +![успех](/img/5_success.png) ## Дополнительные ресурсы -Visit [Substreams](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. +Посетите сайт [Субпотоков](https://substreams.dev/), чтобы изучить растущую коллекцию готовых к использованию пакетов Субпотоков, поддерживающих различные блокчейн-сети.
diff --git a/website/src/pages/ru/substreams/quick-start.mdx b/website/src/pages/ru/substreams/quick-start.mdx index c74623e3c753..922ee9c9e2db 100644 --- a/website/src/pages/ru/substreams/quick-start.mdx +++ b/website/src/pages/ru/substreams/quick-start.mdx @@ -1,30 +1,30 @@ --- -title: Substreams Quick Start +title: Быстрый старт с Субпотоками sidebarTitle: Быстрый старт --- -Discover how to utilize ready-to-use substream packages or develop your own. +Узнайте, как использовать готовые пакеты Субпотоков или разработать собственные. ## Обзор -Integrating Substreams can be quick and easy. They are permissionless, and you can [obtain a key here](https://thegraph.market/) without providing personal information to start streaming on-chain data. +Интеграция Субпотоков может быть быстрой и простой. Они не требуют разрешений, и Вы можете без предоставления личной информации [получить здесь ключ](https://thegraph.market/) для того, чтобы начать потоковую передачу он-чейн данных. -## Start Building +## Начало создания -### Use Substreams Packages +### Использование пакетов Субпотоков -There are many ready-to-use Substreams packages available. You can explore these packages by visiting the [Substreams Registry](https://substreams.dev) and [sinking them](/substreams/developing/sinks/). The registry lets you search for and find any package that meets your needs. +Доступно множество готовых пакетов Субпотоков. Вы можете изучить эти пакеты, посетив [реестр Субпотоков](https://substreams.dev) и [используя их](/substreams/developing/sinks/). Реестр позволяет Вам искать и находить любые пакеты, которые соответствуют Вашим требованиям. -Once you find a package that fits your needs, you can choose how you want to consume the data: +Найдя пакет, который соответствует Вашим потребностям, Вы можете выбрать способ потребления данных: -- **[Subgraph](/sps/introduction/)**: Configure an API to meet your data needs and host it on The Graph Network. 
-- **[SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Send the data to a database. -- **[Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Stream data directly to your application. -- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Send data to a PubSub topic. +- **[Субграф](/sps/introduction/)**: Настройте API для удовлетворения своих потребностей в данных и разместите его в сети The Graph. +- **[База данных SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Отправьте данные в базу данных. +- **[Прямая потоковая передача](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Потоковая передача данных непосредственно в Ваше приложение. +- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Отправьте данные в тему PubSub. -### Develop Your Own +### Разработка своего собственного -If you can't find a Substreams package that meets your specific needs, you can develop your own. Substreams are built with Rust, so you'll write functions that extract and filter the data you need from the blockchain. To get started, check out the following tutorials: +Если Вы не можете найти пакет Субпотоков, который соответствует Вашим конкретным потребностям, Вы можете разработать свой собственный. Субпотоки создаются с использованием Rust, поэтому Вы будете писать функции, которые извлекают и фильтруют необходимые Вам данные из блокчейна. 
Чтобы начать, ознакомьтесь со следующими руководствами: - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Solana](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-solana) @@ -32,11 +32,11 @@ If you can't find a Substreams package that meets your specific needs, you can d - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) -To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/). +Чтобы создать и оптимизировать свои Субпотоки с нуля, используйте минимальный путь внутри [контейнера для разработки](/substreams/developing/dev-container/). -> Note: Substreams guarantees that you'll [never miss data](https://docs.substreams.dev/reference-material/reliability-guarantees) with a simple reconnection policy. +> Примечание: Субпотоки гарантируют, что Вы [никогда не пропустите данные](https://docs.substreams.dev/reference-material/reliability-guarantees) благодаря простой политике повторного подключения. ## Дополнительные ресурсы -- For additional guidance, reference the [Tutorials](https://docs.substreams.dev/tutorials/intro-to-tutorials) and follow the [How-To Guides](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) on Streaming Fast docs. -- For a deeper understanding of how Substreams works, explore the [architectural overview](https://docs.substreams.dev/reference-material/architecture) of the data service. +- Для получения дополнительной помощи обратитесь к [урокам](https://docs.substreams.dev/tutorials/intro-to-tutorials) и следуйте [пошаговым инструкциям](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) в документации Streaming Fast. 
+- Для более глубокого понимания того, как работают Субпотоки, ознакомьтесь с [обзором архитектуры](https://docs.substreams.dev/reference-material/architecture) обслуживания данных. diff --git a/website/src/pages/ru/supported-networks.mdx b/website/src/pages/ru/supported-networks.mdx index 37ad35891750..6399dfa3844c 100644 --- a/website/src/pages/ru/supported-networks.mdx +++ b/website/src/pages/ru/supported-networks.mdx @@ -17,12 +17,12 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio полагается на стабильность и надежность базовых технологий, например, таких, как JSON-RPC, Firehose и конечных точек Substreams. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- Если субграф был опубликован через CLI и выбран индексатором, технически его можно было бы запросить даже без поддержки, и в настоящее время предпринимаются усилия для упрощения интеграции новых сетей. -- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- Субграфы, индексирующие Gnosis Chain, теперь можно развертывать с идентификатором сети `gnosis`. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- Полный список поддерживаемых функций в децентрализованной сети можно найти [на этой странице](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Локальный запуск Graph Node -If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. 
Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. +Если предпочитаемая Вами сеть не поддерживается в децентрализованной сети The Graph, Вы можете запустить собственную [Graph Node](https://github.com/graphprotocol/graph-node) для индексирования любой совместимой с EVM сети. Убедитесь, что [версия](https://github.com/graphprotocol/graph-node/releases), которую вы используете, поддерживает эту сеть и у Вас есть необходимая конфигурация. -Graph Node также может индексировать другие протоколы через интеграцию с Firehose. Интеграции Firehose созданы для сетей на базе NEAR, Arweave и Cosmos. Кроме того, Graph Node может поддерживать субграфы на основе Substreams для любой сети с поддержкой Substreams. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. 
diff --git a/website/src/pages/ru/token-api/_meta-titles.json b/website/src/pages/ru/token-api/_meta-titles.json index 692cec84bd58..e3d12c4a864f 100644 --- a/website/src/pages/ru/token-api/_meta-titles.json +++ b/website/src/pages/ru/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "Часто задаваемые вопросы" } diff --git a/website/src/pages/ru/token-api/_meta.js b/website/src/pages/ru/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/ru/token-api/_meta.js +++ b/website/src/pages/ru/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/ru/token-api/faq.mdx b/website/src/pages/ru/token-api/faq.mdx new file mode 100644 index 000000000000..78b478d6d7ef --- /dev/null +++ b/website/src/pages/ru/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Общая информация + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. 
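As a sketch of the authentication flow described above, the helper below builds a correctly authorized request. The helper name is illustrative, the token string is a placeholder for the JWT generated on The Graph Market (not the raw API key), and the `/balances/evm/{address}` path follows the endpoint pattern mentioned elsewhere in this FAQ:

```javascript
// Build an authorized Token API request (sketch; ACCESS_TOKEN is a placeholder
// for the JWT generated on The Graph Market, not the raw API key).
function buildTokenApiRequest(path, accessToken) {
  return {
    url: `https://token-api.thegraph.com${path}`,
    headers: {
      Authorization: `Bearer ${accessToken}`, // the JWT, with the "Bearer " prefix
      Accept: 'application/json',
    },
  };
}

const req = buildTokenApiRequest('/balances/evm/0xabc123', 'MY_JWT');
// With Node 18+ you could then call:
// const res = await fetch(req.url, { headers: req.headers });
```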
+ +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
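A small sanity check for the common 401/403 mistakes listed above. This is a hypothetical helper, not part of any official SDK; the "three dot-separated segments" heuristic relies on standard JWT structure:

```javascript
// Validate an Authorization header value before sending (hypothetical helper).
// Catches the common mistakes: a missing "Bearer " prefix, or pasting the raw
// API key instead of a JWT (JWTs have three dot-separated segments).
function checkAuthHeader(value) {
  if (!value.startsWith('Bearer ')) return 'missing "Bearer " prefix';
  const token = value.slice('Bearer '.length);
  if (token.split('.').length !== 3) return 'not a JWT (did you paste the API key?)';
  return 'ok';
}
```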
+ +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/ru/token-api/mcp/claude.mdx b/website/src/pages/ru/token-api/mcp/claude.mdx index 0da8f2be031d..25a29164f8cb 100644 --- a/website/src/pages/ru/token-api/mcp/claude.mdx +++ b/website/src/pages/ru/token-api/mcp/claude.mdx @@ -3,7 +3,7 @@ title: Using Claude Desktop to Access the Token API via MCP sidebarTitle: Claude Desktop --- -## Prerequisites +## Предварительные требования - [Claude Desktop](https://claude.ai/download) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## Конфигурация Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file.
```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/ru/token-api/mcp/cline.mdx b/website/src/pages/ru/token-api/mcp/cline.mdx index ab54c0c8f6f0..374877608d17 100644 --- a/website/src/pages/ru/token-api/mcp/cline.mdx +++ b/website/src/pages/ru/token-api/mcp/cline.mdx @@ -3,16 +3,16 @@ title: Using Cline to Access the Token API via MCP sidebarTitle: Cline --- -## Prerequisites +## Предварительные требования - [Cline](https://cline.bot/) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. - The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## Конфигурация Create or edit your `cline_mcp_settings.json` file. 
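A sketch of what `cline_mcp_settings.json` might contain, assumed to mirror the Claude Desktop configuration shown earlier; the `token-api` server name is illustrative, and the `ACCESS_TOKEN` value is a placeholder for your JWT:

```json
{
  "mcpServers": {
    "token-api": {
      "command": "npx",
      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
      "env": {
        "ACCESS_TOKEN": "<your JWT from The Graph Market>"
      }
    }
  }
}
```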
diff --git a/website/src/pages/ru/token-api/mcp/cursor.mdx b/website/src/pages/ru/token-api/mcp/cursor.mdx index 658108d1337b..5dc411608825 100644 --- a/website/src/pages/ru/token-api/mcp/cursor.mdx +++ b/website/src/pages/ru/token-api/mcp/cursor.mdx @@ -3,7 +3,7 @@ title: Using Cursor to Access the Token API via MCP sidebarTitle: Cursor --- -## Prerequisites +## Предварительные требования - [Cursor](https://www.cursor.com/) installed. - A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## Конфигурация Create or edit your `~/.cursor/mcp.json` file. diff --git a/website/src/pages/ru/token-api/quick-start.mdx b/website/src/pages/ru/token-api/quick-start.mdx index 4653c3d41ac6..a878bea36a20 100644 --- a/website/src/pages/ru/token-api/quick-start.mdx +++ b/website/src/pages/ru/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Быстрый старт --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) @@ -11,7 +11,7 @@ The Graph's Token API lets you access blockchain token information via a GET req The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. -## Prerequisites +## Предварительные требования Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. 
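Once you have a JWT, handling a response follows the conventions described in the FAQ: index into the top-level `data` array and convert string amounts with `BigInt` to avoid precision loss. The helper below is a sketch; the field names (`amount`, `decimals`) are illustrative:

```javascript
// Sketch: convert a string token amount to a human-readable value using BigInt,
// since Token API responses return amounts as strings to avoid precision loss.
// Field names below are illustrative, not a guaranteed response shape.
function formatAmount(amount, decimals) {
  const value = BigInt(amount);
  const base = 10n ** BigInt(decimals);
  const whole = value / base;
  const frac = (value % base).toString().padStart(decimals, '0').replace(/0+$/, '');
  return frac ? `${whole}.${frac}` : `${whole}`;
}

// e.g. a 6-decimal balance taken from response.data[0]:
formatAmount('1234500000', 6); // "1234.5"
```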
diff --git a/website/src/pages/sv/about.mdx b/website/src/pages/sv/about.mdx index 90c63c0f036d..8f3ae9f1a8e7 100644 --- a/website/src/pages/sv/about.mdx +++ b/website/src/pages/sv/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph.
-- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![En grafik som förklarar hur The Graf använder Graf Node för att servera frågor till datakonsumenter](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Följande steg följs: 1. En dapp lägger till data i Ethereum genom en transaktion på ett smart kontrakt. 2. Det smarta kontraktet sänder ut en eller flera händelser under bearbetningen av transaktionen. -3. Graf Node skannar kontinuerligt Ethereum efter nya block och den data för din subgraf de kan innehålla. -4. Graf Node hittar Ethereum-händelser för din subgraf i dessa block och kör de kartläggande hanterarna du tillhandahållit. Kartläggningen är en WASM-modul som skapar eller uppdaterar de dataenheter som Graph Node lagrar som svar på Ethereum-händelser. +3. 
Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Dappen frågar Graph Node om data som indexerats från blockkedjan med hjälp av nodens [GraphQL-slutpunkt](https://graphql.org/learn/). Graph Node översätter i sin tur GraphQL-frågorna till frågor för sin underliggande datalagring för att hämta dessa data, och använder lagrets indexeringsegenskaper. Dappen visar dessa data i ett användarvänligt gränssnitt för slutanvändare, som de använder för att utfärda nya transaktioner på Ethereum. Cykeln upprepas. ## Nästa steg -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. 
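For instance, a query run in a Subgraph's GraphQL playground might look like the following. The `tokens` entity and its fields are purely illustrative and depend on each Subgraph's schema:

```graphql
{
  tokens(first: 5, orderBy: id) {
    id
    symbol
    decimals
  }
}
```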
diff --git a/website/src/pages/sv/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/sv/archived/arbitrum/arbitrum-faq.mdx index a3162cf19888..aba7e13387a4 100644 --- a/website/src/pages/sv/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/sv/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Säkerhet ärvt från Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Graph gemenskapen beslutade att gå vidare med Arbitrum förra året efter resultatet av diskussionen [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ För att dra fördel av att använda The Graph på L2, använd den här rullgard ![Dropdown-väljare för att växla Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Som subgrafutvecklare, datakonsument, indexerare, curator eller delegator, vad behöver jag göra nu? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? 
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Allt har testats noggrant och en beredskapsplan finns på plats för att säkerställa en säker och sömlös övergång. Detaljer finns [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-faq.mdx index b158efaed6ff..272fa705dfe5 100644 --- a/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con L2 Överföringsverktygen använder Arbitrums nativa mekanism för att skicka meddelanden från L1 till L2. Denna mekanism kallas en "retryable ticket" och används av alla nativa token-broar, inklusive Arbitrum GRT-broen. Du kan läsa mer om retryable tickets i [Arbitrums dokumentation](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). 
-När du överför dina tillgångar (subgraf, insats, delegation eller kurering) till L2 skickas ett meddelande genom Arbitrum GRT-broen, vilket skapar en retryable ticket i L2. Överföringsverktyget inkluderar ett visst ETH-värde i transaktionen, som används för att 1) betala för att skapa biljetten och 2) betala för gasen för att utföra biljetten i L2. Men eftersom gaspriserna kan variera fram till att biljetten är redo att utföras i L2 kan det hända att detta automatiska utförsel försöket misslyckas. När det händer kommer Arbitrum-broen att behålla retryable ticket i livet i upp till 7 dagar, och vem som helst kan försöka "inlösa" biljetten (vilket kräver en plånbok med en viss mängd ETH broad till Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Detta är vad vi kallar "Bekräfta"-steget i alla överföringsverktygen - det kommer att köras automatiskt i de flesta fall, eftersom den automatiska utförandet oftast är framgångsrikt, men det är viktigt att du kontrollerar att det gick igenom. Om det inte lyckas och det inte finns några framgångsrika försök på 7 dagar kommer Arbitrum-broen att kasta biljetten, och dina tillgångar (subgraf, insats, delegation eller kurering) kommer att gå förlorade och kan inte återvinnas. 
The Graphs kärnutvecklare har ett övervakningssystem på plats för att upptäcka dessa situationer och försöka lösa biljetterna innan det är för sent, men det är i slutändan ditt ansvar att se till att din överföring är klar i tid. Om du har svårt att bekräfta din transaktion, kontakta oss via [detta formulär](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms), och kärnutvecklarna kommer att vara där för att hjälpa dig. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### Jag startade min överföring av delegation/insats/kurering, och jag är osäker på om den lyckades komma till L2, hur kan jag bekräfta att den överfördes korrekt? @@ -36,43 +36,43 @@ Om du har L1-transaktionshashen (som du kan hitta genom att titta på de senaste ## Subgraf Överföring -### Hur överför jag min subgraf? +### How do I transfer my Subgraph? -För att överföra din subgraf måste du slutföra följande steg: +To transfer your Subgraph, you will need to complete the following steps: 1. Initiera överföringen på Ethereum huvudnätet 2. Vänta 20 minuter på bekräftelse -3.
Bekräfta subgraföverföringen på Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Slutför publiceringen av subgraf på Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Uppdatera fråge-URL (rekommenderas) -\*Observera att du måste bekräfta överföringen inom 7 dagar, annars kan din subgraf gå förlorad. I de flesta fall kommer detta steg att köras automatiskt, men en manuell bekräftelse kan behövas om det finns en gasprisspike på Arbitrum. Om det uppstår några problem under denna process finns det resurser för att hjälpa: kontakta support på support@thegraph.com eller på [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Var ska jag initiera min överföring från? -Du kan initiera din överföring från [Subgraph Studio](https://thegraph.com/studio/), [Utforskaren,](https://thegraph.com/explorer) eller från vilken som helst subgrafsdetaljsida. Klicka på knappen "Överför subgraf" på subgrafsdetaljsidan för att starta överföringen. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Hur länge måste jag vänta tills min subgraf överförs? +### How long do I need to wait until my Subgraph is transferred? Överföringstiden tar ungefär 20 minuter. Arbitrum-broen arbetar i bakgrunden för att slutföra broöverföringen automatiskt. I vissa fall kan gasavgifterna öka, och du måste bekräfta transaktionen igen.
-### Kommer min subgraf fortfarande vara sökbar efter att jag har överfört den till L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Din subgraf kommer endast vara sökbar på det nätverk där den är publicerad. Till exempel, om din subgraf är på Arbitrum One, kan du endast hitta den i Utforskaren på Arbitrum One och kommer inte att kunna hitta den på Ethereum. Se till att du har valt Arbitrum One i nätverksväxlaren högst upp på sidan för att säkerställa att du är på rätt nätverk.  Efter överföringen kommer L1-subgrafen att visas som föråldrad. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Måste min subgraf vara publicerad för att kunna överföra den? +### Does my Subgraph need to be published to transfer it? -För att dra nytta av subgraföverföringsverktyget måste din subgraf redan vara publicerad på Ethereum huvudnät och måste ha något kureringssignal ägt av plånboken som äger subgrafen. Om din subgraf inte är publicerad rekommenderas det att du helt enkelt publicerar direkt på Arbitrum One - de associerade gasavgifterna kommer att vara betydligt lägre. Om du vill överföra en publicerad subgraf men ägarplånboken inte har kuraterat något signal på den kan du signalera en liten mängd (t.ex. 1 GRT) från den plånboken; se till att välja "automigrering" signal. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. 
If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Vad händer med Ethereum huvudnätversionen av min subgraf efter att jag har överfört till Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Efter att ha överfört din subgraf till Arbitrum kommer Ethereum huvudnätversionen att föråldras. Vi rekommenderar att du uppdaterar din fråge-URL inom 48 timmar. Det finns dock en nådperiod som gör att din huvudnät-URL fungerar så att stöd från tredjeparts-dappar kan uppdateras. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Behöver jag också publicera om på Arbitrum efter överföringen? @@ -80,21 +80,21 @@ Efter de 20 minuters överföringsfönstret måste du bekräfta överföringen m ### Kommer min endpunkt att ha nertid under ompubliceringen? -Det är osannolikt, men det är möjligt att uppleva en kort nertid beroende på vilka indexeringar som stöder subgrafen på L1 och om de fortsätter att indexera den tills subgrafen är fullt stödd på L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Är publicering och versionering densamma på L2 som på Ethereum huvudnätet? -Ja. Välj Arbitrum One som ditt publicerade nätverk när du publicerar i Subgraph Studio. 
I studion kommer den senaste ändpunkt att vara tillgänglig, som pekar till den senaste uppdaterade versionen av subgrafen. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Kommer min subgrafs kurering att flyttas med min subgraf? +### Will my Subgraph's curation move with my Subgraph? -Om du har valt automatisk migreringssignal kommer 100% av din egen kurering att flyttas med din subgraf till Arbitrum One. All subgrafens kureringssignal kommer att konverteras till GRT vid överföringstillfället, och GRT som motsvarar din kureringssignal kommer att användas för att prägla signal på L2-subgrafen. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Andra kuratorer kan välja att ta tillbaka sin del av GRT eller också överföra den till L2 för att prägla signal på samma subgraf. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Kan jag flytta min subgraf tillbaka till Ethereum huvudnätet efter överföringen? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -När den är överförd kommer din Ethereum huvudnätversion av denna subgraf att vara föråldrad. Om du vill flytta tillbaka till huvudnätet måste du omimplementera och publicera på huvudnätet igen. Dock avråds starkt från att flytta tillbaka till Ethereum huvudnätet eftersom indexbelöningar till sist kommer att fördelas helt på Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. 
If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Varför behöver jag bridged ETH för att slutföra min överföring? @@ -206,19 +206,19 @@ För att överföra din kurering måste du följa följande steg: \*Om det behövs - dvs. du använder en kontraktadress. -### Hur vet jag om den subgraph jag har kuraterat har flyttats till L2? +### How will I know if the Subgraph I curated has moved to L2? -När du tittar på sidan med detaljer om subgraphen kommer en banner att meddela dig att denna subgraph har flyttats. Du kan följa uppmaningen för att överföra din kurering. Du kan också hitta denna information på sidan med detaljer om subgraphen som har flyttat. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Vad händer om jag inte vill flytta min kurering till L2? -När en subgraph avvecklas har du möjlighet att ta tillbaka din signal. På samma sätt, om en subgraph har flyttats till L2, kan du välja att ta tillbaka din signal på Ethereum huvudnät eller skicka signalen till L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Hur vet jag att min kurering har överförts framgångsrikt? Signaldetaljer kommer att vara tillgängliga via Explorer ungefär 20 minuter efter att L2-överföringsverktyget har initierats. -### Kan jag överföra min kurering på fler än en subgraph samtidigt? +### Can I transfer my curation on more than one Subgraph at a time? Det finns för närvarande ingen möjlighet till bulköverföring. 
@@ -266,7 +266,7 @@ Det tar ungefär 20 minuter för L2-överföringsverktyget att slutföra överf ### Måste jag indexer på Arbitrum innan jag överför min insats? -Du kan effektivt överföra din insats först innan du sätter upp indexering, men du kommer inte att kunna hämta några belöningar på L2 förrän du allokerar till subgrapher på L2, indexerar dem och presenterar POI. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Kan Delegators flytta sin delegation innan jag flyttar min indexinsats? diff --git a/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-guide.mdx index 4dde699e5079..9cdb196e9c09 100644 --- a/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph har gjort det enkelt att flytta till L2 på Arbitrum One. För varje p Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Så här överför du din subgraf till Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Fördelar med att överföra dina subgrafer +## Benefits of transferring your Subgraphs The Graphs community och kärnutvecklare har [förberett sig](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) för att flytta till Arbitrum under det senaste året. Arbitrum, en blockkedja av lager 2 eller "L2", ärver säkerheten från Ethereum men ger drastiskt lägre gasavgifter. -När du publicerar eller uppgraderar din subgraf till The Graph Network, interagerar du med smarta kontrakt på protokollet och detta kräver att du betalar för gas med ETH. 
Genom att flytta dina subgrafer till Arbitrum kommer alla framtida uppdateringar av din subgraf att kräva mycket lägre gasavgifter. De lägre avgifterna, och det faktum att curation bonding-kurvorna på L2 är platta, gör det också lättare för andra curatorer att kurera på din subgraf, vilket ökar belöningarna för Indexers på din subgraf. Denna miljö med lägre kostnader gör det också billigare för indexerare att indexera och betjäna din subgraf. Indexeringsbelöningar kommer att öka på Arbitrum och minska på Ethereums mainnet under de kommande månaderna, så fler och fler indexerare kommer att överföra sin andel och sätta upp sin verksamhet på L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Förstå vad som händer med signal, din L1 subgraf och frågewebbadresser +## Understanding what happens with signal, your L1 Subgraph and query URLs -Att överföra en subgraf till Arbitrum använder Arbitrum GRT-bryggan, som i sin tur använder den inhemska Arbitrum-bryggan, för att skicka subgrafen till L2. "Överföringen" kommer att fasa ut subgrafen på mainnet och skicka informationen för att återskapa subgrafen på L2 med hjälp av bryggan. Den kommer också att inkludera subgrafägarens signalerade GRT, som måste vara mer än noll för att bryggan ska acceptera överföringen. 
+Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -När du väljer att överföra subgrafen kommer detta att konvertera hela subgrafens kurationssignal till GRT. Detta motsvarar att "avskriva" subgrafen på mainnet. GRT som motsvarar din kuration kommer att skickas till L2 tillsammans med subgrafen, där de kommer att användas för att skapa signaler å dina vägnar. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Andra kuratorer kan välja om de vill ta tillbaka sin del av GRT eller också överföra den till L2 för att få en signal på samma subgraf. Om en subgrafägare inte överför sin subgraf till L2 och manuellt fasar ut den via ett kontraktsanrop, kommer Curatorer att meddelas och kommer att kunna dra tillbaka sin curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Så snart subgrafen har överförts, eftersom all kuration konverteras till GRT, kommer indexerare inte längre att få belöningar för att indexera subgrafen. Det kommer dock att finnas indexerare som kommer 1) att fortsätta visa överförda subgrafer i 24 timmar och 2) omedelbart börja indexera subgrafen på L2. 
Eftersom dessa indexerare redan har subgrafen indexerad, borde det inte finnas något behov av att vänta på att subgrafen ska synkroniseras, och det kommer att vara möjligt att fråga L2-subgrafen nästan omedelbart. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Förfrågningar till L2-subgrafen kommer att behöva göras till en annan URL (på `arbitrum-gateway.thegraph.com`), men L1-URL:n fortsätter att fungera i minst 48 timmar. Efter det kommer L1-gatewayen att vidarebefordra frågor till L2-gatewayen (under en tid), men detta kommer att lägga till latens så det rekommenderas att byta alla dina frågor till den nya URL:en så snart som möjligt. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Välja din L2 plånbok -När du publicerade din subgraf på mainnet använde du en ansluten plånbok för att skapa subgrafen, och denna plånbok äger NFT som representerar denna subgraf och låter dig publicera uppdateringar. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -När du överför subgrafen till Arbitrum kan du välja en annan plånbok som kommer att äga denna subgraf NFT på L2. 
+When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Om du använder en "vanlig" plånbok som MetaMask (ett externt ägt konto eller EOA, d.v.s. en plånbok som inte är ett smart kontrakt), så är detta valfritt och det rekommenderas att behålla samma ägaradress som i L1. -Om du använder en smart kontraktsplånbok, som en multisig (t.ex. ett kassaskåp), är det obligatoriskt att välja en annan L2-plånboksadress, eftersom det är mest troligt att det här kontot bara finns på mainnet och att du inte kommer att kunna göra transaktioner på Arbitrum med denna plånbok. Om du vill fortsätta använda en smart kontraktsplånbok eller multisig, skapa en ny plånbok på Arbitrum och använd dess adress som L2-ägare till din subgraf. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Det är mycket viktigt att använda en plånboksadress som du kontrollerar, och som kan göra transaktioner på Arbitrum. Annars kommer subgrafen att gå förlorad och kan inte återställas.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Förbereder för överföringen: överbrygga lite ETH -Att överföra subgrafen innebär att man skickar en transaktion genom bryggan och sedan utför en annan transaktion på Arbitrum. Den första transaktionen använder ETH på huvudnätet och inkluderar en del ETH för att betala för gas när meddelandet tas emot på L2. 
Men om denna gas är otillräcklig måste du göra om transaktionen och betala för gasen direkt på L2 (detta är "Steg 3: Bekräfta överföringen" nedan). Detta steg **måste utföras inom 7 dagar efter att överföringen påbörjats**. Dessutom kommer den andra transaktionen ("Steg 4: Avsluta överföringen på L2") att göras direkt på Arbitrum. Av dessa skäl behöver du lite ETH på en Arbitrum-plånbok. Om du använder ett multisig- eller smart kontraktskonto måste ETH: en finnas i den vanliga (EOA) plånboken som du använder för att utföra transaktionerna, inte på själva multisig plånboken. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. Du kan köpa ETH på vissa börser och ta ut den direkt till Arbitrum, eller så kan du använda Arbitrum-bryggan för att skicka ETH från en mainnet-plånbok till L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Eftersom gasavgifterna på Arbitrum är lägre bör du bara behöva en liten summa. Det rekommenderas att du börjar vid en låg tröskel (0.t.ex. 01 ETH) för att din transaktion ska godkännas. 
-## Hitta subgrafen Överföringsverktyg
+## Finding the Subgraph Transfer Tool

-Du kan hitta L2 Överföringsverktyg när du tittar på din subgrafs sida på Subgraf Studio:
+You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio:

![Överföringsverktyg](/img/L2-transfer-tool1.png)

-Den är också tillgänglig på Explorer om du är ansluten till plånboken som äger en subgraf och på den subgrafens sida på Explorer:
+It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer:

![Överför till L2](/img/transferToL2.png)

@@ -60,19 +60,19 @@ Genom att klicka på knappen Överför till L2 öppnas överföringsverktyget d

## Steg 1: Starta överföringen

-Innan du påbörjar överföringen måste du bestämma vilken adress som ska äga subgrafen på L2 (se "Välja din L2 plånbok" ovan), och det rekommenderas starkt att ha lite ETH för gas som redan är överbryggad på Arbitrum (se "Förbereda för överföringen: brygga" lite ETH" ovan).
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).

-Observera också att överföring av subgrafen kräver att en signal som inte är noll på subgrafen med samma konto som äger subgrafen; om du inte har signalerat på subgrafen måste du lägga till lite curation (att lägga till en liten mängd som 1 GRT skulle räcka).
+Also please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Efter att ha öppnat överföringsverktyget kommer du att kunna ange L2-plånboksadressen i fältet "Mottagande plånboksadress" - **se till att du har angett rätt adress här**. Om du klickar på Transfer Subgraph kommer du att uppmana dig att utföra transaktionen på din plånbok (observera att ett ETH-värde ingår för att betala för L2-gas); detta kommer att initiera överföringen och fasa ut din L1-subgraf (se "Förstå vad som händer med signal, din L1-subgraf och sökadresser" ovan för mer information om vad som händer bakom kulisserna).
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note that some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).

-Om du utför det här steget, **se till att du fortsätter tills du har slutfört steg 3 om mindre än 7 dagar, annars försvinner subgrafen och din signal-GRT.** Detta beror på hur L1-L2-meddelanden fungerar på Arbitrum: meddelanden som skickas genom bryggan är "omförsökbara biljetter" som måste utföras inom 7 dagar, och det första utförandet kan behöva ett nytt försök om det finns toppar i gaspriset på Arbitrum.
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.
![Start the transfer to L2](/img/startTransferL2.png) -## Steg 2: Väntar på att subgrafen ska komma till L2 +## Step 2: Waiting for the Subgraph to get to L2 -När du har startat överföringen måste meddelandet som skickar din L1 subgraf till L2 spridas genom Arbitrum bryggan. Detta tar cirka 20 minuter (bryggan väntar på att huvudnäts blocket som innehåller transaktionen är "säkert" från potentiella kedjereorganisationer). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). När denna väntetid är över kommer Arbitrum att försöka utföra överföringen automatiskt på L2 kontrakten. @@ -80,7 +80,7 @@ När denna väntetid är över kommer Arbitrum att försöka utföra överförin ## Steg 3: Bekräfta överföringen -I de flesta fall kommer detta steg att utföras automatiskt eftersom L2-gasen som ingår i steg 1 borde vara tillräcklig för att utföra transaktionen som tar emot subgrafen på Arbitrum-kontrakten. I vissa fall är det dock möjligt att en topp i gaspriserna på Arbitrum gör att denna autoexekvering misslyckas. I det här fallet kommer "biljetten" som skickar din subgraf till L2 att vara vilande och kräver ett nytt försök inom 7 dagar. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. Om så är fallet måste du ansluta med en L2 plånbok som har lite ETH på Arbitrum, byta ditt plånboksnätverk till Arbitrum och klicka på "Bekräfta överföring" för att försöka genomföra transaktionen igen. 
@@ -88,33 +88,33 @@ Om så är fallet måste du ansluta med en L2 plånbok som har lite ETH på Arbi

## Steg 4: Avsluta överföringen på L2

-Vid det här laget har din subgraf och GRT tagits emot på Arbitrum, men subgrafen är inte publicerad ännu. Du måste ansluta med L2 plånboken som du valde som mottagande plånbok, byta ditt plånboksnätverk till Arbitrum och klicka på "Publicera subgraf"
+At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph."

-![Publicera subgrafen](/img/publishSubgraphL2TransferTools.png)
+![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png)

-![Vänta på att subgrafen ska publiceras](/img/waitForSubgraphToPublishL2TransferTools.png)
+![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png)

-Detta kommer att publicera subgrafen så att indexerare som är verksamma på Arbitrum kan börja servera den. Det kommer också att skapa kurations signaler med hjälp av GRT som överfördes från L1.
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that was transferred from L1.

## Steg 5: Uppdatera sökfrågans URL

-Din subgraf har överförts till Arbitrum! För att fråga subgrafen kommer den nya webbadressen att vara:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:

`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`

-Observera att subgraf-ID: t på Arbitrum kommer att vara ett annat än det du hade på mainnet, men du kan alltid hitta det på Explorer eller Studio. Som nämnts ovan (se "Förstå vad som händer med signal, dina L1-subgraf- och sökwebbadresser") kommer den gamla L1-URL: n att stödjas under en kort stund, men du bör byta dina frågor till den nya adressen så snart subgrafen har synkroniserats på L2.
+Note that the Subgraph ID on Arbitrum will be different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.

## Så här överför du din kuration till Arbitrum (L2)

-## Förstå vad som händer med curation vid subgraf överföringar till L2
+## Understanding what happens to curation on Subgraph transfers to L2

-När ägaren av en subgraf överför en subgraf till Arbitrum, omvandlas all subgrafs signal till GRT samtidigt. Detta gäller för "auto-migrerad" signal, det vill säga signal som inte är specifik för en subgraf version eller utbyggnad men som följer den senaste versionen av en subgraf.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.

-Denna omvandling från signal till GRT är densamma som vad som skulle hända om subgrafägaren avskaffade subgrafen i L1. När subgrafen föråldras eller överförs, "bränns" all curation-signal samtidigt (med hjälp av curation bonding-kurvan) och den resulterande GRT hålls av GNS smarta kontraktet (det är kontraktet som hanterar subgrafuppgraderingar och automatisk migrerad signal). Varje kurator i det stycket har därför ett anspråk på den GRT som är proportionell mot antalet aktier de hade för stycket.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.

-En bråkdel av dessa BRT som motsvarar subgrafägaren skickas till L2 tillsammans med subgrafen.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.

-Vid denna tidpunkt kommer den kurerade BRT inte att samla på sig några fler frågeavgifter, så kuratorer kan välja att dra tillbaka sin BRT eller överföra den till samma subgraf på L2, där den kan användas för att skapa en ny kurationssignal. Det är ingen brådska att göra detta eftersom BRT kan hjälpa till på obestämd tid och alla får ett belopp som är proportionellt mot sina aktier, oavsett när de gör det.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.

## Välja din L2 plånbok

@@ -130,9 +130,9 @@ Om du använder en smart kontraktsplånbok, som en multisig (t.ex. ett kassaskå

Innan du påbörjar överföringen måste du bestämma vilken adress som ska äga kurationen på L2 (se "Välja din L2-plånbok" ovan), och det rekommenderas att ha en del ETH för gas som redan är överbryggad på Arbitrum ifall du behöver försöka utföra exekveringen av meddelande på L2.
Du kan köpa ETH på vissa börser och ta ut den direkt till Arbitrum, eller så kan du använda Arbitrum-bryggan för att skicka ETH från en mainnet-plånbok till L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - eftersom gasavgifterna på Arbitrum är så låga ska du bara behöva en liten summa, t.ex. 0,01 ETH kommer förmodligen att vara mer än tillräckligt. -Om en subgraf som du kurerar till har överförts till L2 kommer du att se ett meddelande i Explorer som talar om att du kurerar till en överförd subgraf. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -När du tittar på subgraf sidan kan du välja att dra tillbaka eller överföra kurationen. Genom att klicka på "Överför signal till Arbitrum" öppnas överföringsverktyget. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Överföringssignal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Om så är fallet måste du ansluta med en L2 plånbok som har lite ETH på Arbi ## Dra tillbaka din kuration på L1 -Om du föredrar att inte skicka din GRT till L2, eller om du hellre vill överbrygga GRT manuellt, kan du ta tillbaka din kurerade BRT på L1. På bannern på subgraf sidan väljer du "Ta tillbaka signal" och bekräftar transaktionen; GRT kommer att skickas till din kurator adress. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
diff --git a/website/src/pages/sv/archived/sunrise.mdx b/website/src/pages/sv/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/sv/archived/sunrise.mdx +++ b/website/src/pages/sv/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? 
+### Why were Subgraphs published to Arbitrum? Did it start indexing a different network?

-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/).

## About the Upgrade Indexer

> The upgrade Indexer is currently active.

-The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.

### What does the upgrade Indexer do?

-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published.

- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. 
As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
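The support-withdrawal rules in this answer reduce to two checks: at least three other Indexers consistently serving the Subgraph, or thirty days without a query. A minimal illustrative sketch follows; the type and function names are invented for this example and this is not the actual upgrade Indexer implementation:

```python
# Illustrative sketch of the fallback rule described above, NOT the actual
# upgrade Indexer code. All names here are invented for the example.
from dataclasses import dataclass

MIN_OTHER_INDEXERS = 3   # support ends once 3 other Indexers serve queries
MAX_IDLE_DAYS = 30       # support ends after 30 days without queries

@dataclass
class SubgraphStatus:
    healthy_other_indexers: int  # Indexers consistently serving this Subgraph
    days_since_last_query: int

def upgrade_indexer_supports(status: SubgraphStatus) -> bool:
    """Return True while the upgrade Indexer should keep serving as a fallback."""
    if status.days_since_last_query > MAX_IDLE_DAYS:
        return False  # no demand in the last 30 days
    return status.healthy_other_indexers < MIN_OTHER_INDEXERS
```

With one healthy Indexer and recent queries the sketch returns `True`; once three other Indexers serve the Subgraph reliably, or after 30 idle days, it returns `False`.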
diff --git a/website/src/pages/sv/global.json b/website/src/pages/sv/global.json index 3793fbf29d78..20aef5782977 100644 --- a/website/src/pages/sv/global.json +++ b/website/src/pages/sv/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgrafer", "substreams": "Underströmmar", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Beskrivning", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Beskrivning", + "liveResponse": "Live Response", + "example": "Exempel" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/sv/index.json b/website/src/pages/sv/index.json index 3f33f38c5613..23a97080ffc1 100644 --- a/website/src/pages/sv/index.json +++ b/website/src/pages/sv/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgrafer", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -39,12 +39,12 @@ "title": "Nätverk som stöds", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "Typ", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Dokument", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -67,8 +67,8 @@ "tableHeaders": { "name": "Name", "id": "ID", - "subgraphs": "Subgraphs", - "substreams": "Substreams", + "subgraphs": "Subgrafer", + "substreams": "Underströmmar", "firehose": "Firehose", "tokenapi": "Token API" } @@ -80,7 +80,7 @@ "description": "Kickstart your journey into subgraph development." }, "substreams": { - "title": "Substreams", + "title": "Underströmmar", "description": "Stream high-speed data for real-time indexing." }, "timeseries": { @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." 
}, "billing": { - "title": "Billing", + "title": "Fakturering", "description": "Optimize costs and manage billing efficiently." } }, @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/sv/indexing/chain-integration-overview.mdx b/website/src/pages/sv/indexing/chain-integration-overview.mdx index 147468f7dc17..94f8e8dd42e5 100644 --- a/website/src/pages/sv/indexing/chain-integration-overview.mdx +++ b/website/src/pages/sv/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ Denna process är relaterad till Subgraf Data Service och gäller endast nya Sub ### 2. Vad händer om stöd för Firehose & Substreams kommer efter det att nätverket stöds på mainnet? -Detta skulle endast påverka protokollstödet för indexbelöningar på Substreams-drivna subgrafer. 
Den nya Firehose-implementeringen skulle behöva testas på testnätet, enligt den metodik som beskrivs för Fas 2 i detta GIP. På liknande sätt, förutsatt att implementationen är prestanda- och tillförlitlig, skulle en PR på [Funktionsstödsmatrisen](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) krävas (`Substreams data sources` Subgraf Feature), liksom en ny GIP för protokollstöd för indexbelöningar. Vem som helst kan skapa PR och GIP; Stiftelsen skulle hjälpa till med Rådets godkännande. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/sv/indexing/new-chain-integration.mdx b/website/src/pages/sv/indexing/new-chain-integration.mdx index c33a501eb77f..d8cb301e3902 100644 --- a/website/src/pages/sv/indexing/new-chain-integration.mdx +++ b/website/src/pages/sv/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. 
Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. 
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graf Node-konfiguration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Klona Graf Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. 
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/sv/indexing/overview.mdx b/website/src/pages/sv/indexing/overview.mdx index 26ecf1330d60..b355374c5949 100644 --- a/website/src/pages/sv/indexing/overview.mdx +++ b/website/src/pages/sv/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexerare är nodoperatörer i The Graph Network som satsar Graph Tokens (GRT) GRT som satsas i protokollet är föremål för en tiningperiod och kan drabbas av strykning om indexerare är skadliga och tillhandahåller felaktiga data till applikationer eller om de indexerar felaktigt. Indexerare tjänar också belöningar för delegerat satsning från Delegater, för att bidra till nätverket. -Indexerare väljer subgrafer att indexera baserat på subgrafens kuratersignal, där Curators satsar GRT för att ange vilka subgrafer som är av hög kvalitet och bör prioriteras. Konsumenter (t.ex. applikationer) kan också ställa in parametrar för vilka indexerare som behandlar frågor för deras subgrafer och ange preferenser för pris på frågebetalning. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT.
**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). 
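The two-step split described above (issuance divided across Subgraphs pro rata by curation signal, then each Subgraph's share divided across Indexers pro rata by allocated stake) is plain proportional arithmetic. A hedged sketch with made-up figures; the authoritative on-chain math lives in the RewardsManager contract:

```python
# Proportional reward split sketch. All figures are illustrative, not real
# protocol values; see the RewardsManager contract for the on-chain logic.
issuance = 300.0  # GRT issued this period (hypothetical)

signal = {"subgraph-a": 75.0, "subgraph-b": 25.0}  # curation signal per Subgraph
allocations = {                                     # allocated stake per Indexer
    "subgraph-a": {"indexer-1": 60.0, "indexer-2": 40.0},
    "subgraph-b": {"indexer-3": 100.0},
}

total_signal = sum(signal.values())
rewards = {}
for subgraph, sig in signal.items():
    subgraph_share = issuance * sig / total_signal        # split by signal
    total_stake = sum(allocations[subgraph].values())
    for indexer, stake in allocations[subgraph].items():  # split by stake
        rewards[indexer] = rewards.get(indexer, 0.0) + subgraph_share * stake / total_stake

print(rewards)  # {'indexer-1': 135.0, 'indexer-2': 90.0, 'indexer-3': 75.0}
```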
Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. 
If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. 
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
| --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. 
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graf Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to `always`, then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
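As a rough illustration, the threshold evaluation described above can be sketched as follows. This is a hypothetical helper, not the actual indexer-agent code; the field names simply mirror the rule fields listed above:

```typescript
// Hypothetical sketch of the rule evaluation described above: a rule's
// non-null thresholds are compared against values fetched from the network
// for a deployment, and crossing any single threshold is enough for the
// deployment to be chosen for indexing.
interface IndexingRule {
  minStake?: number; // GRT allocated to the deployment
  minSignal?: number; // GRT of curation signal
  minAverageQueryFees?: number; // GRT of average query fees
}

interface NetworkValues {
  stake: number;
  signal: number;
  averageQueryFees: number;
}

function shouldIndex(rule: IndexingRule, v: NetworkValues): boolean {
  if (rule.minStake != null && v.stake > rule.minStake) return true;
  if (rule.minSignal != null && v.signal > rule.minSignal) return true;
  if (rule.minAverageQueryFees != null && v.averageQueryFees > rule.minAverageQueryFees) {
    return true;
  }
  return false; // no non-null threshold was crossed
}

// Mirrors the example above: with minStake = 5 GRT, a deployment with
// 6 GRT of allocated stake is picked up, while one with 4 GRT is not.
console.log(shouldIndex({ minStake: 5 }, { stake: 6, signal: 0, averageQueryFees: 0 })); // true
console.log(shouldIndex({ minStake: 5 }, { stake: 4, signal: 0, averageQueryFees: 0 })); // false
```

A rule with no thresholds set matches nothing under this sketch, which is why `deployment` and `decisionBasis` remain the only mandatory fields.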
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing non-deterministically. diff --git a/website/src/pages/sv/indexing/supported-network-requirements.mdx b/website/src/pages/sv/indexing/supported-network-requirements.mdx index f7a4943afd1b..70013eae23fe 100644 --- a/website/src/pages/sv/indexing/supported-network-requirements.mdx +++ b/website/src/pages/sv/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVMe preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/sv/indexing/tap.mdx b/website/src/pages/sv/indexing/tap.mdx index d69cb7b5bc91..65582940a499 100644 --- a/website/src/pages/sv/indexing/tap.mdx +++ b/website/src/pages/sv/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Översikt -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**.
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Krav +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/sv/indexing/tooling/graph-node.mdx b/website/src/pages/sv/indexing/tooling/graph-node.mdx index e53a127b3fcd..e3d030167389 100644 --- a/website/src/pages/sv/indexing/tooling/graph-node.mdx +++ b/website/src/pages/sv/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graf Node --- -Graf Node är komponenten som indexerar subgraffar och gör den resulterande datan tillgänglig för förfrågan via en GraphQL API. Som sådan är den central för indexeringsstacken, och korrekt drift av Graph Node är avgörande för att driva en framgångsrik indexerare. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graf Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL-databas -Huvudlagret för Graph Node, här lagras subgrafdata, liksom metadata om subgraffar och nätverksdata som är oberoende av subgraffar, som blockcache och eth_call-cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Nätverkskunder För att indexera ett nätverk behöver Graf Node åtkomst till en nätverksklient via ett EVM-kompatibelt JSON-RPC API. Denna RPC kan ansluta till en enda klient eller så kan det vara en mer komplex konfiguration som lastbalanserar över flera. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
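To make the archive-node requirement above concrete: EIP-1898 lets the block parameter of calls like `eth_call` name a block by *hash* instead of by number or tag, pinning the call to an exact block. A minimal sketch of such a request body, using placeholder address, calldata, and hash values:

```typescript
// Sketch of an eth_call request pinned to a block by hash, as allowed by
// EIP-1898. The contract address, calldata and block hash below are
// placeholder values, not anything real.
const blockHash = "0x" + "ab".repeat(32); // 32-byte hash, hex-encoded

const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "eth_call",
  params: [
    { to: "0x0000000000000000000000000000000000000001", data: "0x" },
    // Pre-EIP-1898, this slot held a block number or a tag like "latest";
    // EIP-1898 also allows an object naming an exact block by its hash.
    { blockHash, requireCanonical: false },
  ],
};

const body = JSON.stringify(request);
console.log(body);
```

A node that rejects this form of the block parameter cannot serve the deterministic historical `eth_calls` that such Subgraphs need.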
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS-noder -Metadata för distribution av subgraffar lagras på IPFS-nätverket. Graf Node har främst åtkomst till IPFS-noden under distributionen av subgraffar för att hämta subgrafens manifest och alla länkade filer. Nätverksindexerare behöver inte värd sin egen IPFS-nod. En IPFS-nod för nätverket är värd på https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus server för mätvärden @@ -79,8 +79,8 @@ När Graph Node är igång exponerar den följande portar: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ När Graph Node är igång exponerar den följande portar: ## Avancerad konfiguration av Graf Node -På sitt enklaste sätt kan Graph Node användas med en enda instans av Graph Node, en enda PostgreSQL-databas, en IPFS-nod och nätverksklienter som krävs av de subgrafer som ska indexeras. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Flera Grafnoder -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Observera att flera Graph Nodes alla kan konfigureras att använda samma databas, som i sig kan skalas horisontellt via sharding. #### Regler för utplacering -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Exempel på konfiguration av deployeringsregler: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Alla noder vars --node-id matchar reguljärt uttryck kommer att konfigureras fö För de flesta användningsfall är en enda Postgres-databas tillräcklig för att stödja en graph-node-instans. 
När en graph-node-instans växer utöver en enda Postgres-databas är det möjligt att dela upp lagringen av graph-node-data över flera Postgres-databaser. Alla databaser tillsammans bildar lagringsutrymmet för graph-node-instansen. Varje individuell databas kallas en shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and replicas can be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding blir användbart när din befintliga databas inte kan hålla jämna steg med belastningen som Graph Node sätter på den och när det inte längre är möjligt att öka databasens storlek. -> Det är generellt sett bättre att göra en enda databas så stor som möjligt innan man börjar med shards. Ett undantag är när frågetrafiken är mycket ojämnt fördelad mellan subgrafer; i dessa situationer kan det hjälpa dramatiskt om högvolymsubgraferna hålls i en shard och allt annat i en annan, eftersom den konfigurationen gör det mer troligt att data för högvolymsubgraferna stannar i databasens interna cache och inte ersätts av data som inte behövs lika mycket från lågvolymsubgrafer. +> It is generally better to make a single database as big as possible before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. När det gäller att konfigurera anslutningar, börja med max_connections i postgresql.conf som är inställt på 400 (eller kanske till och med 200) och titta på Prometheus-metrarna store_connection_wait_time_ms och store_connection_checkout_count. Märkbara väntetider (något över 5 ms) är en indikation på att det finns för få anslutningar tillgängliga; höga väntetider beror också på att databasen är mycket upptagen (som hög CPU-belastning). Om databasen verkar annars stabil, indikerar höga väntetider att antalet anslutningar behöver ökas. I konfigurationen är det en övre gräns för hur många anslutningar varje graph-node-instans kan använda, och Graph Node kommer inte att hålla anslutningar öppna om det inte behöver dem. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Stöd för flera nätverk -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. 
The `config.toml` file allows for expressive and flexible configuration of: - Flera nätverk - Flera leverantörer per nätverk (detta kan göra det möjligt att dela upp belastningen mellan leverantörer, och kan också möjliggöra konfiguration av fullständiga noder samt arkivnoder, där Graph Node föredrar billigare leverantörer om en viss arbetsbelastning tillåter det). @@ -225,11 +225,11 @@ Användare som driver en skalad indexering med avancerad konfiguration kan dra n ### Hantera Graf Noder -Med en körande Graph Node (eller Graph Nodes!) är utmaningen sedan att hantera distribuerade subgrafer över dessa noder. Graph Node erbjuder en rad verktyg för att hjälpa till med hanteringen av subgrafer. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Loggning -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. 
See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` directory. -### Arbeta med undergrafer +### Working with Subgraphs #### Indexerings status API -Tillgänglig som standard på port 8030/graphql, exponerar indexeringstatus-API: en en rad metoder för att kontrollera indexeringstatus för olika subgrafer, kontrollera bevis för indexering, inspektera subgrafegenskaper och mer. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Undergrafer som misslyckats +#### Failed Subgraphs -Under indexering kan subgrafer misslyckas om de stöter på data som är oväntad, om någon komponent inte fungerar som förväntat eller om det finns något fel i händelsehanterare eller konfiguration. Det finns två allmänna typer av misslyckande: +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure: - Deterministiska fel: detta är fel som inte kommer att lösas med retries - Icke-deterministiska fel: dessa kan bero på problem med leverantören eller något oväntat Graph Node-fel. När ett icke-deterministiskt fel inträffar kommer Graph Node att försöka igen med de felande hanterarna och backa över tid. -I vissa fall kan ett misslyckande vara lösbart av indexören (till exempel om felet beror på att det inte finns rätt typ av leverantör, kommer att tillåta indexering att fortsätta om den nödvändiga leverantören läggs till). Men i andra fall krävs en ändring i subgrafkoden. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Blockera och anropa cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
Om en blockcache-inkonsekvens misstänks, som att en tx-kvitto saknar händelse: @@ -304,7 +304,7 @@ Om en blockcache-inkonsekvens misstänks, som att en tx-kvitto saknar händelse: #### Fråga frågor och fel -När en subgraf har indexeras kan indexörer förvänta sig att servera frågor via subgrafens dedikerade frågendpunkt. Om indexören hoppas på att betjäna en betydande mängd frågor rekommenderas en dedikerad frågenod, och vid mycket höga frågevolymer kan indexörer vilja konfigurera replikskivor så att frågor inte påverkar indexeringsprocessen. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. Men även med en dedikerad frågenod och repliker kan vissa frågor ta lång tid att utföra, och i vissa fall öka minnesanvändningen och negativt påverka frågetiden för andra användare. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analyserar frågor -Problematiska frågor dyker oftast upp på ett av två sätt. I vissa fall rapporterar användare själva att en viss fråga är långsam. I det fallet är utmaningen att diagnostisera orsaken till långsamheten - om det är ett generellt problem eller specifikt för den subgraf eller fråga. Och naturligtvis att lösa det om det är möjligt. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. 
I andra fall kan utlösaren vara hög minnesanvändning på en frågenod, i vilket fall utmaningen först är att identifiera frågan som orsakar problemet. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Ta bort undergrafer +#### Removing Subgraphs > Detta är ny funktionalitet, som kommer att vara tillgänglig i Graf Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
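As a quick illustration of the indexing status API covered above (served on port 8030/graphql by default), a status query might look like the following. This is a sketch only: the exact field names should be verified against the index-node `schema.graphql` linked in that section.

```graphql
{
  indexingStatuses {
    subgraph # deployment IPFS hash
    synced
    health # healthy, unhealthy, or failed
    fatalError {
      message
      deterministic # deterministic failures are considered "final"
    }
    chains {
      network
      chainHeadBlock {
        number
      }
      latestBlock {
        number # how far indexing has progressed
      }
    }
  }
}
```

Comparing `latestBlock` against `chainHeadBlock` is a simple way to see how far behind the chain head a given Subgraph is.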
diff --git a/website/src/pages/sv/indexing/tooling/graphcast.mdx b/website/src/pages/sv/indexing/tooling/graphcast.mdx index 213029e1836b..56b93af13fc2 100644 --- a/website/src/pages/sv/indexing/tooling/graphcast.mdx +++ b/website/src/pages/sv/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ För närvarande avgörs kostnaden för att sända information till andra nätve Graphcast SDK (Utrustning för programvaruutveckling) gör det möjligt för utvecklare att bygga Radios, vilka är applikationer som drivs av gossipeffekt och som indexare kan köra för att tjäna ett visst syfte. Vi avser också att skapa några Radios (eller ge stöd åt andra utvecklare/team som önskar bygga Radios) för följande användningsområden: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Genomföra auktioner och koordinering för warp-synkronisering av delgrafer, delströmmar och Firehose-data från andra indexare. -- Självrapportering om aktiv frågeanalys, inklusive delgrafförfrågningsvolym, avgiftsvolym etc. -- Självrapportering om indexeringanalys, inklusive tid för delgrafindexering, gasavgifter för handler, påträffade indexeringsfel etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Självrapportering om stackinformation inklusive graph-node-version, Postgres-version, Ethereum-klientversion etc. 
### Läs mer diff --git a/website/src/pages/sv/resources/benefits.mdx b/website/src/pages/sv/resources/benefits.mdx index b3c5e957cb54..bbeb6f2f631c 100644 --- a/website/src/pages/sv/resources/benefits.mdx +++ b/website/src/pages/sv/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Att kurera signal på en subgraf är en valfri engångskostnad med noll nettokostnad (t.ex. $1k i signal kan kurera på en subgraf och senare dras tillbaka - med potential att tjäna avkastning i processen). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/sv/resources/glossary.mdx b/website/src/pages/sv/resources/glossary.mdx index dd930819456b..72ab2ba9333a 100644 --- a/website/src/pages/sv/resources/glossary.mdx +++ b/website/src/pages/sv/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Ordlista - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: Ordlista - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/sv/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/sv/resources/migration-guides/assemblyscript-migration-guide.mdx index e0f49fc2c71e..58983b9de579 100644 --- a/website/src/pages/sv/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/sv/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migrationsguide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Det kommer att möjliggöra för undergrafutvecklare att använda nyare funktioner i AS-språket och standardbiblioteket. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Funktioner @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## Hur uppgraderar du? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // bryts i runtime om värdet är null maybeValue.aMethod() ``` -Om du är osäker på vilken du ska välja, rekommenderar vi alltid att använda den säkra versionen. Om värdet inte finns kanske du bara vill göra ett tidigt villkorligt uttalande med en retur i din undergrafshanterare. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in your Subgraph handler. ### Variabelskuggning @@ -132,7 +132,7 @@ Du måste döpa om dina duplicerade variabler om du hade variabelskuggning. ### Jämförelser med nollvärden -När du gör uppgraderingen av din subgraf kan du ibland få fel som dessa: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // ger inte kompileringsfel som det borde ``` -Vi har öppnat en fråga om AssemblyScript-kompilatorn för detta, men om du gör den här typen av operationer i dina subgraf-mappningar bör du ändra dem så att de gör en null-kontroll innan den.
+We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check before it. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Det kommer att kompilera men brytas vid körning, det händer eftersom värdet inte har initialiserats, så se till att din subgraf har initialiserat sina värden, så här: +It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/sv/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/sv/resources/migration-guides/graphql-validations-migration-guide.mdx index 25d4c50249e1..647bead3ee4f 100644 --- a/website/src/pages/sv/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/sv/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Migrationsguide för GraphQL-validering +title: GraphQL Validations Migration Guide --- Snart kommer `graph-node` att stödja 100 % täckning av [GraphQL Valideringsspecifikationen](https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ För att vara i linje med dessa valideringar, följ migrationsguiden. Du kan använda CLI-migrationsverktyget för att hitta eventuella problem i dina GraphQL-operationer och åtgärda dem. Alternativt kan du uppdatera ändpunkten för din GraphQL-klient att använda ändpunkten `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Att testa dina frågor mot denna ändpunkt kommer att hjälpa dig att hitta problemen i dina frågor.
-> Inte alla subgrafer behöver migreras, om du använder [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) eller [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), ser de redan till att dina frågor är giltiga. +> Not all Subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Migrations-CLI-verktyg diff --git a/website/src/pages/sv/resources/roles/curating.mdx b/website/src/pages/sv/resources/roles/curating.mdx index fa6a279e5b1e..0ae08de7bc3a 100644 --- a/website/src/pages/sv/resources/roles/curating.mdx +++ b/website/src/pages/sv/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Kuratering --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. 
This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. 
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Hur man Signaliserar -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -En kurator kan välja att signalera på en specifik subgrafversion, eller så kan de välja att ha sin signal automatiskt migrerad till den nyaste produktionsversionen av den subgrafen. Båda är giltiga strategier och har sina egna för- och nackdelar. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. 
Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Att ha din signal automatiskt migrerad till den nyaste produktionsversionen kan vara värdefullt för att säkerställa att du fortsätter att ackumulera frågeavgifter. Varje gång du signalerar åläggs en kuratoravgift på 1%. Du kommer också att betala en kuratoravgift på 0,5% vid varje migration. Subgrafutvecklare uppmanas att inte publicera nya versioner för ofta - de måste betala en kuratoravgift på 0,5% på alla automatiskt migrerade kuratorandelar. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risker 1. Frågemarknaden är i grunden ung på The Graph och det finns en risk att din %APY kan vara lägre än du förväntar dig på grund av tidiga marknadsmekanik. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. En subgraf kan misslyckas på grund av en bugg. En misslyckad subgraf genererar inte frågeavgifter. Som ett resultat måste du vänta tills utvecklaren rättar felet och distribuerar en ny version. - - Om du prenumererar på den nyaste versionen av en subgraf kommer dina andelar automatiskt att migreras till den nya versionen. Detta kommer att medföra en kuratoravgift på 0,5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. 
(Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Kurations-FAQ ### 1. Vilken % av frågeavgifterna tjänar Kuratorer? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Hur bestämmer jag vilka subgrafer av hög kvalitet att signalera på? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. 
A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Vad kostar det att uppdatera en subgraf? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 
0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. Hur ofta kan jag uppdatera min subgraf? +### 4. How often can I update my Subgraph? -Det föreslås att du inte uppdaterar dina subgrafer för ofta. Se frågan ovan för mer information. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Kan jag sälja mina kuratorandelar? diff --git a/website/src/pages/sv/resources/subgraph-studio-faq.mdx b/website/src/pages/sv/resources/subgraph-studio-faq.mdx index f2d35d39c1ee..5787f5c2dfeb 100644 --- a/website/src/pages/sv/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/sv/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Vanliga frågor om Subgraf Studio ## 1. Vad är Subgraf Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Hur skapar jag en API-nyckel? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th När du har skapat en API-nyckel kan du i avsnittet Säkerhet definiera vilka domäner som kan ställa frågor till en specifik API-nyckel. -## 5. Kan jag överföra min subgraf till en annan ägare? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
+Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Observera att du inte längre kommer att kunna se eller redigera undergrafen i Studio när den har överförts. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Hur hittar jag fråge-URL: er för undergrafer om jag inte är utvecklaren av den undergraf jag vill använda? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Kom ihåg att du kan skapa en API-nyckel och ställa frågor till alla undergrafer som publicerats i nätverket, även om du själv har byggt en undergraf. Dessa förfrågningar via den nya API-nyckeln är betalda förfrågningar som alla andra i nätverket. +Remember that you can create an API key and query any Subgraph published to the network, even if you built the Subgraph yourself. Queries made via the new API key are paid queries, like any other on the network.
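The query-URL workflow in this FAQ can be sketched roughly as below. The `[api-key]` placeholder token and the example URL shape are assumptions for illustration, not values taken from the docs:

```typescript
// Hypothetical sketch: substitute your API key into the query URL shown in
// Graph Explorer, then POST a GraphQL query to it. The "[api-key]" token
// and URL shape are illustrative assumptions.

function buildQueryUrl(template: string, apiKey: string): string {
  return template.replace("[api-key]", apiKey);
}

async function querySubgraph(
  template: string,
  apiKey: string,
  query: string
): Promise<unknown> {
  const response = await fetch(buildQueryUrl(template, apiKey), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }), // standard GraphQL-over-HTTP body
  });
  return response.json();
}
```

The key substitution is the only part specific to Subgraph Studio; the rest is an ordinary GraphQL POST request.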
diff --git a/website/src/pages/sv/resources/tokenomics.mdx b/website/src/pages/sv/resources/tokenomics.mdx index 3d6c4666a960..120c43db7ee1 100644 --- a/website/src/pages/sv/resources/tokenomics.mdx +++ b/website/src/pages/sv/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Översikt -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Kuratorer - Hitta de bästa subgrafterna för Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Grundvalen för blockkedjedata @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. 
This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Skapa en subgraf +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Fråga en befintlig subgraf +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. 
These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. 
This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/sv/sps/introduction.mdx b/website/src/pages/sv/sps/introduction.mdx index 6c9a0b9ece89..30e643fff68a 100644 --- a/website/src/pages/sv/sps/introduction.mdx +++ b/website/src/pages/sv/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduktion --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Översikt -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Ytterligare resurser @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/sv/sps/sps-faq.mdx b/website/src/pages/sv/sps/sps-faq.mdx index 74ae7af82977..e5313465d87c 100644 --- a/website/src/pages/sv/sps/sps-faq.mdx +++ b/website/src/pages/sv/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## Vad är Substreams-drivna subgrafer? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## Hur skiljer sig Substreams-drivna subgrafer från subgrafer? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## Vilka fördelar har användning av Substreams-drivna subgrafer? 
+## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## Vilka fördelar har Substreams? @@ -35,7 +35,7 @@ Det finns många fördelar med att använda Substreams, inklusive: - Högpresterande indexering: Ordervärden snabbare indexering genom storskaliga kluster av parallella operationer (tänk BigQuery). -- Utdata var som helst: Du kan sänka dina data var som helst du vill: PostgreSQL, MongoDB, Kafka, subgrafer, platta filer, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmerbarhet: Använd kod för att anpassa extrahering, utföra transformationsbaserade aggregeringar och modellera din utdata för flera sänkar. 
@@ -63,17 +63,17 @@ Det finns många fördelar med att använda Firehose, inklusive: - Använder platta filer: Blockkedjedata extraheras till platta filer, den billigaste och mest optimerade datorkällan som finns tillgänglig. -## Var kan utvecklare få mer information om Substreams-drivna subgrafer och Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## Vad är rollen för Rust-moduler i Substreams? -Rust-moduler är motsvarigheten till AssemblyScript-mappers i subgrafer. De kompileras till WASM på ett liknande sätt, men programmeringsmodellen tillåter parallell körning. De definierar vilken typ av omvandlingar och aggregeringar du vill tillämpa på råblockkedjedata. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst Vid användning av Substreams sker sammansättningen på omvandlingsnivån, vilket gör att cachade moduler kan återanvändas. 
-Som exempel kan Alice bygga en DEX-prismodul, Bob kan använda den för att bygga en volymaggregator för vissa intressanta tokens, och Lisa kan kombinera fyra individuella DEX-prismoduler för att skapa en prisoracle. En enda Substreams-begäran kommer att paketera alla dessa individuella moduler, länka dem samman, för att erbjuda en mycket mer förädlad dataström. Den strömmen kan sedan användas för att fylla i en subgraf och frågas av användare. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules, linking them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers. ## Hur kan man bygga och distribuera en Substreams-drivna subgraf? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Var kan jag hitta exempel på Substreams och Substreams-drivna subgrafer? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -Du kan besöka [detta Github-repo](https://github.com/pinax-network/awesome-substreams) för att hitta exempel på Substreams och Substreams-drivna subgrafer. +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## Vad innebär Substreams och Substreams-drivna subgrafer för The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? Integrationen lovar många fördelar, inklusive extremt högpresterande indexering och ökad sammansättbarhet genom att dra nytta av gemenskapsmoduler och bygga vidare på dem. 
diff --git a/website/src/pages/sv/sps/triggers.mdx b/website/src/pages/sv/sps/triggers.mdx index d618f8254691..77b382a28280 100644 --- a/website/src/pages/sv/sps/triggers.mdx +++ b/website/src/pages/sv/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Översikt -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. 
Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Ytterligare resurser diff --git a/website/src/pages/sv/sps/tutorial.mdx b/website/src/pages/sv/sps/tutorial.mdx index 12b0127acb81..0aabe284b6d0 100644 --- a/website/src/pages/sv/sps/tutorial.mdx +++ b/website/src/pages/sv/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Komma igång @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. 
Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. 
### Video Tutorial diff --git a/website/src/pages/sv/subgraphs/_meta-titles.json b/website/src/pages/sv/subgraphs/_meta-titles.json index 3fd405eed29a..79dc0c23f596 100644 --- a/website/src/pages/sv/subgraphs/_meta-titles.json +++ b/website/src/pages/sv/subgraphs/_meta-titles.json @@ -2,5 +2,5 @@ "querying": "Querying", "developing": "Developing", "guides": "How-to Guides", - "best-practices": "Best Practices" + "best-practices": "Bästa praxis" } diff --git a/website/src/pages/sv/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/sv/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/sv/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. 
By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. 
## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. 
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/sv/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/sv/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/sv/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
diff --git a/website/src/pages/sv/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/sv/subgraphs/best-practices/grafting-hotfix.mdx index 2fea7d3f3239..acc7aa19a5ec 100644 --- a/website/src/pages/sv/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Översikt -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Ytterligare resurser - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/sv/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/sv/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/sv/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
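The string-vs-Bytes ID trade-off in the section above can be sketched in plain TypeScript. This is a hypothetical stand-in for graph-ts's `Bytes.concatI32` helper, written only to illustrate the idea of a fixed-length byte ID; the helper's internal byte order here is an assumption, not the real graph-ts implementation.

```typescript
// Illustrative sketch: append a 4-byte i32 to a byte array, mirroring the
// graph-ts pattern `event.transaction.hash.concatI32(event.logIndex.toI32())`.
function concatI32(bytes: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  // Little-endian byte order is an assumption for this sketch.
  new DataView(out.buffer).setInt32(bytes.length, value, true);
  return out;
}

// A 32-byte transaction hash plus a 4-byte log index yields a 36-byte ID,
// versus `hash.toHex() + "-" + logIndex.toString()`, which yields a
// variable-length string and slower index lookups.
const txHash = new Uint8Array(32).fill(0xab); // placeholder hash
const id = concatI32(txHash, 5);
console.log(id.length); // 36
```

The fixed width is what makes `Bytes` IDs cheap to index and compare, which is the source of the performance gains described above.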
diff --git a/website/src/pages/sv/subgraphs/best-practices/pruning.mdx b/website/src/pages/sv/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/sv/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/sv/subgraphs/best-practices/timeseries.mdx b/website/src/pages/sv/subgraphs/best-practices/timeseries.mdx index 63786a945971..3b416d32b2bd 100644 --- a/website/src/pages/sv/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Översikt @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Exempel: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Exempel: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/sv/subgraphs/billing.mdx b/website/src/pages/sv/subgraphs/billing.mdx index d864c1d3d6fb..614d84dd04f3 100644 --- a/website/src/pages/sv/subgraphs/billing.mdx +++ b/website/src/pages/sv/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Fakturering ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/sv/subgraphs/developing/creating/advanced.mdx b/website/src/pages/sv/subgraphs/developing/creating/advanced.mdx index 7ed946aee07e..e8476a0d9bdf 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Översikt -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Icke dödliga fel -Indexeringsfel på redan synkroniserade delgrafer kommer, som standard, att få delgrafen att misslyckas och sluta synkronisera. Delgrafer kan istället konfigureras för att fortsätta synkroniseringen i närvaro av fel, genom att ignorera ändringarna som orsakades av hanteraren som provocerade felet. Det ger delgrafsförfattare tid att korrigera sina delgrafer medan förfrågningar fortsätter att behandlas mot det senaste blocket, även om resultaten kan vara inkonsekventa på grund av felet som orsakade felet. Observera att vissa fel alltid är dödliga. För att vara icke-dödliga måste felet vara känt för att vara deterministiskt. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Aktivering av icke-dödliga fel kräver att följande funktionsflagga sätts i delgrafens manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -Filbaserade datakällor är en ny delgrafsfunktion för att få tillgång till data utanför kedjan under indexering på ett robust, utökat sätt. Filbaserade datakällor stödjer hämtning av filer från IPFS och från Arweave. 
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > Detta lägger också grunden för deterministisk indexering av data utanför kedjan, samt möjligheten att introducera godtycklig data som hämtas via HTTP. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Exempel: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//Denna exempelkod är för en undergraf för kryptosamverkan. Ovanstående ipfs-hash är en katalog med tokenmetadata för alla kryptosamverkande NFT:er. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //Detta skapar en sökväg till metadata för en enskild Crypto coven NFT. Den konkaterar katalogen med "/" + filnamn + ".json" + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" token.ipfsURI = tokenIpfsHash @@ -317,23 +317,23 @@ Detta kommer att skapa en ny filbaserad datakälla som kommer att övervaka Grap This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. 
-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Grattis, du använder filbaserade datakällor! -#### Distribuera dina delgrafer +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Begränsningar -Filbaserade datakällahanterare och entiteter är isolerade från andra delgrafentiteter, vilket säkerställer att de är deterministiska när de körs och att ingen förorening av kedjebaserade datakällor sker. För att vara specifik: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entiteter skapade av Filbaserade datakällor är oföränderliga och kan inte uppdateras - Filbaserade datakällahanterare kan inte komma åt entiteter från andra filbaserade datakällor - Entiteter associerade med filbaserade datakällor kan inte nås av kedjebaserade hanterare -> Även om denna begränsning inte bör vara problematisk för de flesta användningsfall kan den införa komplexitet för vissa. Var god kontakta oss via Discord om du har problem med att modellera din data baserad på fil i en delgraf! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Dessutom är det inte möjligt att skapa datakällor från en filbaserad datakälla, vare sig det är en datakälla på kedjan eller en annan filbaserad datakälla. Denna begränsning kan komma att hävas i framtiden. 
@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. 
@@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. 
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. @@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. 
Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. -`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_.
Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Eftersom ympning kopierar data istället för att indexera basdata går det mycket snabbare att få delgrafen till det önskade blocket än att indexera från början, även om den initiala datorkopieringen fortfarande kan ta flera timmar för mycket stora delgrafer. Medan den ympade delgrafen initialiseras kommer Graph Node att logga information om de entitetstyper som redan har kopierats. 
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -Den ympade subgrafen kan använda ett GraphQL-schema som inte är identiskt med det i bas subgrafen, utan bara är kompatibelt med det. Det måste vara ett giltigt subgraf schema i sig, men kan avvika från bas undergrafens schema på följande sätt: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Den lägger till eller tar bort entitetstyper - Det tar bort attribut från entitetstyper @@ -560,4 +560,4 @@ Den ympade subgrafen kan använda ett GraphQL-schema som inte är identiskt med - Den lägger till eller tar bort gränssnitt - Det ändrar för vilka entitetstyper ett gränssnitt implementeras -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. 
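To make the schema-compatibility rules above concrete, here is a hypothetical base/grafted schema pair (the types and fields are invented for illustration): the grafted Subgraph adds a new entity type and removes an attribute from an existing one, both of which the rules permit.

```graphql
# Base Subgraph schema
type Gravatar @entity {
  id: ID!
  owner: Bytes!
  displayName: String!
}

# Grafted Subgraph schema: compatible, because it only adds the new
# `Transfer` entity type and removes the `displayName` attribute.
type Gravatar @entity {
  id: ID!
  owner: Bytes!
}

type Transfer @entity {
  id: ID!
}
```

Changing `owner` to a different type, by contrast, would fall outside the allowed deviations and make the graft invalid.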
diff --git a/website/src/pages/sv/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/sv/subgraphs/developing/creating/assemblyscript-mappings.mdx index 259ae147af9f..47acf31182d5 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from "../generated/Gravity/Gravity"; @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Kodgenerering -För att göra det enkelt och typsäkert att arbeta med smarta kontrakt, händelser och entiteter kan Graph CLI generera AssemblyScript-typer från subgrafens GraphQL-schema och kontrakts-ABIn som ingår i datakällorna. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. 
Detta görs med @@ -80,7 +80,7 @@ Detta görs med graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. 
All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. 
diff --git a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/api.mdx index dd9fb343dd68..afc7cbab6a60 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: API för AssemblyScript --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. 
There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versioner -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Version | Versionsanteckningar | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. 
#### Skapa entiteter @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // eller hur ID konstrueras @@ -380,11 +380,11 @@ Ethereum API ger tillgång till smarta kontrakt, offentliga tillståndsvariabler #### Stöd för Ethereum-typer -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. 
-With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -Följande exempel illustrerar detta. Med en subgraph-schema som +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Åtkomst till Smart Contract-tillstånd -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. En vanlig mönster är att komma åt kontraktet från vilket en händelse härstammar. Detta uppnås med följande kod: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Andra kontrakt som är en del av subgraphen kan importeras från den genererade koden och bindas till en giltig adress. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Hantering av återkallade anrop @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. 
Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. 
If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
diff --git a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/common-issues.mdx index b1f7b27f220a..dd4d5e876a6a 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Vanliga problem med AssemblyScript --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
diff --git a/website/src/pages/sv/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/sv/subgraphs/developing/creating/install-the-cli.mdx index 8905ec3abf61..21e3401cd8e9 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Installera Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Översikt -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Komma igång @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Skapa en Subgraf ### Från ett befintligt avtal -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### Från ett exempel på en undergraf -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is ABI-filerna måste matcha ditt/dina kontrakt. Det finns några olika sätt att få ABI-filer: - Om du bygger ditt eget projekt har du förmodligen tillgång till dina senaste ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Versionsanteckningar | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/sv/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/sv/subgraphs/developing/creating/ql-schema.mdx index 426092a76eb4..839bfd17d19d 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Översikt -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -För en-till-många-relationer bör relationen alltid lagras på 'en'-sidan, och 'många'-sidan bör alltid härledas. Att lagra relationen på detta sätt, istället för att lagra en array av entiteter på 'många'-sidan, kommer att resultera i dramatiskt bättre prestanda både för indexering och för frågning av subgraphen. Generellt sett bör lagring av arrayer av entiteter undvikas så mycket som är praktiskt möjligt. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
#### Exempel @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Detta mer avancerade sätt att lagra många-till-många-relationer kommer att leda till att mindre data lagras för subgrafen, och därför till en subgraf som ofta är dramatiskt snabbare att indexera och att fråga. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore in a Subgraph that is often dramatically faster to index and to query. ### Lägga till kommentarer i schemat @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Stödda språk diff --git a/website/src/pages/sv/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/sv/subgraphs/developing/creating/starting-your-subgraph.mdx index 9f06ce8fcd1d..db4c083402f9 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Översikt -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| Version | Versionsanteckningar | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/sv/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/sv/subgraphs/developing/creating/subgraph-manifest.mdx index e9bac4f876b1..b86383d95712 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Översikt -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
De viktiga posterna att uppdatera för manifestet är: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. 
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Anropsbehandlare -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Anropsbehandlare utlöses endast i ett av två fall: när den specificerade funktionen anropas av ett konto som inte är kontraktet självt eller när den är markerad som extern i Solidity och anropas som en del av en annan funktion i samma kontrakt. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Definiera en Anropsbehandlare @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Kartläggningsfunktion -Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Blockbehandlare -Förutom att prenumerera på kontrakts händelser eller funktionsanrop kan en subgraf vilja uppdatera sina data när nya block läggs till i kedjan. För att uppnå detta kan en subgraf köra en funktion efter varje block eller efter block som matchar en fördefinierad filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Stödda filter @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. Avsaknaden av ett filter för en blockhanterare kommer att säkerställa att hanteraren kallas för varje block. En datakälla kan endast innehålla en blockhanterare för varje filttyp.
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### En Gång Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Den definierade hanteraren med filtret once kommer att anropas endast en gång innan alla andra hanterare körs. Denna konfiguration gör det möjligt för subgrafen att använda hanteraren som en initialiseringshanterare, som utför specifika uppgifter i början av indexeringen. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Kartläggningsfunktion -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. 
```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Startblock -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Versionsanteckningar | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/sv/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/sv/subgraphs/developing/creating/unit-testing-framework.mdx index 49aea6a7f4da..83b346d47707 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Enhetsprovningsramverk --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Komma igång @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
### CLI alternativ @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo undergraf +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Handledning för video -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im Så där har vi skapat vårt första test! 
👏 -För att köra våra tester behöver du helt enkelt köra följande i din subgrafs rotmapp: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using the `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Testtäckning -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead, we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Ytterligare resurser -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Respons diff --git a/website/src/pages/sv/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/sv/subgraphs/developing/deploying/multiple-networks.mdx index 8be847bc8fab..b45b0701bfdd 100644 --- a/website/src/pages/sv/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/sv/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
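The `--network` workflow on this page is driven by a `networks.json` config file; a minimal sketch of its shape, assuming a single `Gravity` data source (the addresses and block numbers are illustrative, not from this page):

```json
{
  "mainnet": {
    "Gravity": {
      "address": "0x2E645469f354BB4F5c8a05B3b30A929361cf77eC",
      "startBlock": 6175244
    }
  },
  "sepolia": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000001",
      "startBlock": 0
    }
  }
}
```

Each top-level key is a network name, and each nested key must match a `dataSource` name from `subgraph.yaml`.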
-## Distribuera undergrafen till flera nätverk +## Deploying the Subgraph to multiple networks -I vissa fall vill du distribuera samma undergraf till flera nätverk utan att duplicera all dess kod. Den största utmaningen med detta är att kontraktsadresserna på dessa nätverk är olika. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file should now look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia, you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are also generated from templates.
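With the templating approach above, the manifest itself becomes the template that the per-network config files are rendered into; a minimal sketch, assuming a `subgraph.template.yaml` with Mustache placeholders (the file name and placeholder keys are illustrative, not from this page):

```yaml
# subgraph.template.yaml — rendered with a command along the lines of:
#   mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: '{{network}}'
    source:
      address: '{{address}}'
      abi: Gravity
```

Each `{{…}}` placeholder is substituted with the matching key from the selected network's JSON config file.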
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraf arkivpolitik +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Varje subgraf som påverkas av denna policy har en möjlighet att ta tillbaka versionen i fråga. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Kontroll av undergrafens hälsa +## Checking Subgraph health -Om en subgraf synkroniseras framgångsrikt är det ett gott tecken på att den kommer att fortsätta att fungera bra för alltid.
Nya triggers i nätverket kan dock göra att din subgraf stöter på ett otestat feltillstånd eller så kan den börja halka efter på grund av prestandaproblem eller problem med nodoperatörerna. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/sv/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/sv/subgraphs/developing/deploying/using-subgraph-studio.mdx index cf6d67e5bb9d..dc1facd6d5cb 100644 --- a/website/src/pages/sv/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/sv/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Skapa och hantera API nycklar för specifika undergrafer +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### Hur man skapar en subgraf i Subgraf Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Kompatibilitet mellan undergrafer och grafnätet -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Får inte använda någon av följande egenskaper: - - ipfs.cat & ipfs.map - - Icke dödliga fel - - Ympning +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Auth för grafer -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
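As a quick smoke test after `graph deploy`, you can POST a GraphQL query to the development query URL shown on the Subgraph's details page. A hedged sketch follows — the URL is a placeholder to be replaced with your own, and `_meta` is the standard meta field Graph Node exposes for checking the latest indexed block:

```sh
# Placeholder — copy the real development query URL from Subgraph Studio.
QUERY_URL="https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_SLUG>/<VERSION>"

# Ask the Subgraph for the latest block it has indexed via the `_meta` field.
PAYLOAD='{"query": "{ _meta { block { number } } }"}'
echo "$PAYLOAD"

# Uncomment once QUERY_URL points at a real deployment:
# curl -s "$QUERY_URL" -H 'Content-Type: application/json' -d "$PAYLOAD"
```

If the response returns a block number close to the chain head, the deployment is syncing as expected; errors will instead surface in the Studio logs mentioned above.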
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatisk arkivering av versioner av undergrafer -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/sv/subgraphs/developing/developer-faq.mdx b/website/src/pages/sv/subgraphs/developing/developer-faq.mdx index 347f3caa9805..1671e9dd5a77 100644 --- a/website/src/pages/sv/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/sv/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. Vad är en subgraf? +### 1. What is a Subgraph? 
-A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. 
However, this is not recommended, as performance will be significantly slower. +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Kan jag ändra det GitHub-konto som är kopplat till min subgraf? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Du måste distribuera om subgrafen, men om subgrafens ID (IPFS-hash) inte ändras behöver den inte synkroniseras från början. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. 
How do I call a contract function or access a public state variable from my Subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Inom en subgraf behandlas händelser alltid i den ordning de visas i blocken, oavsett om det är över flera kontrakt eller inte. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. 
Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Ja! Prova följande kommando och ersätt "organization/subgraphName" med organisationen under vilken den är publicerad och namnet på din subgraf: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/sv/subgraphs/developing/introduction.mdx b/website/src/pages/sv/subgraphs/developing/introduction.mdx index bf5f1bb0f311..c4e9fbd9c78a 100644 --- a/website/src/pages/sv/subgraphs/developing/introduction.mdx +++ b/website/src/pages/sv/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. 
Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/sv/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/sv/subgraphs/developing/managing/deleting-a-subgraph.mdx index ae778febe161..b8c2330ca49d 100644 --- a/website/src/pages/sv/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/sv/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. 
+ - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Kuratorer kommer inte längre kunna signalera på subgrafet. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/sv/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/sv/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/sv/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/sv/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. 
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -You can easily move control of a subgraph to a multi-sig. -A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-address ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2.
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/sv/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/sv/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 24079d30b9b4..e13f4a7f9f7c 100644 --- a/website/src/pages/sv/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/sv/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publicera en Subgraph på Det Decentraliserade Nätverket +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Uppdatera metadata för en publicerad subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/sv/subgraphs/developing/subgraphs.mdx b/website/src/pages/sv/subgraphs/developing/subgraphs.mdx index a6fa5ca3a4f6..9ad42542beda 100644 --- a/website/src/pages/sv/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/sv/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgrafer ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Livscykel för undergrafer -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/sv/subgraphs/explorer.mdx b/website/src/pages/sv/subgraphs/explorer.mdx index 9dfc11588323..87b670a3247d 100644 --- a/website/src/pages/sv/subgraphs/explorer.mdx +++ b/website/src/pages/sv/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graf Utforskaren --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Översikt -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Signalera/Sluta signalera på subgraffar +- Signal/Un-signal on Subgraphs - Visa mer detaljer som diagram, aktuell distributions-ID och annan metadata -- Växla versioner för att utforska tidigare iterationer av subgraffen -- Fråga subgraffar via GraphQL -- Testa subgraffar i lekplatsen -- Visa indexerare som indexerar på en viss subgraff +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraffstatistik (tilldelningar, kuratorer, etc.) -- Visa enheten som publicerade subgraffen +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Maximal delegeringskapacitet - den maximala mängden delegerad insats som indexeraren produktivt kan acceptera. Överskjuten delegerad insats kan inte användas för tilldelningar eller beräkningar av belöningar. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Kuratorer -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraffar-fliken -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexeringstabell -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. I det här avsnittet hittar du också information om dina nettobelöningar som indexerare och nettovärdaravgifter. 
Du kommer att se följande metriker: @@ -223,13 +223,13 @@ Kom ihåg att denna tabell kan rullas horisontellt, så om du rullar hela vägen ### Kureringstabell -I Kureringstabellen hittar du alla subgraffar du signalerar på (vilket gör det möjligt för dig att ta emot frågeavgifter). Signalering gör att kuratorer kan informera indexerare om vilka subgraffar som är värdefulla och pålitliga, vilket signalerar att de bör indexerats. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. Inom den här fliken hittar du en översikt över: -- Alla subgraffar du signalerar på med signaldetaljer -- Andelar totalt per subgraff -- Frågebelöningar per subgraff +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Uppdaterade datumdetaljer ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/sv/subgraphs/guides/_meta.js b/website/src/pages/sv/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/sv/subgraphs/guides/_meta.js +++ b/website/src/pages/sv/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/sv/subgraphs/guides/arweave.mdx b/website/src/pages/sv/subgraphs/guides/arweave.mdx index 08e6c4257268..4a5591b45c72 100644 --- a/website/src/pages/sv/subgraphs/guides/arweave.mdx +++ b/website/src/pages/sv/subgraphs/guides/arweave.mdx @@ -1,50 +1,50 @@ --- -title: Building Subgraphs on Arweave +title: Bygga subgrafer på Arweave --- > Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! 
-In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +I den här guiden kommer du att lära dig hur du bygger och distribuerar subgrafer för att indexera Arweave-blockkedjan. -## What is Arweave? +## Vad är Arweave? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +Arweave-protokollet tillåter utvecklare att lagra data permanent och det är den största skillnaden mellan Arweave och IPFS, där IPFS saknar funktionen; beständighet och filer lagrade på Arweave kan inte ändras eller raderas. -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check: +Arweave har redan byggt ett flertal bibliotek för att integrera protokollet i ett antal olika programmeringsspråk. För mer information kan du kolla: - [Arwiki](https://arwiki.wiki/#/en/main) - [Arweave Resources](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## Vad är Arweave-subgrafer? The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). [Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. -## Building an Arweave Subgraph +## Bygga en Arweave-subgraf -To be able to build and deploy Arweave Subgraphs, you need two packages: +För att kunna bygga och distribuera Arweave Subgraphs behöver du två paket: 1.
`@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. 2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. -## Subgraph's components +## Subgraphs komponenter There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +Definierar datakällorna av intresse och hur de ska behandlas. Arweave är en ny typ av datakälla. ### 2. Schema - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +Här definierar du vilken data du vill kunna fråga efter att du har indexerat din subgraf med GraphQL. Detta liknar faktiskt en modell för ett API, där modellen definierar strukturen för en begäran. The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed.
During Subgraph development there are two key commands: @@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## Subgraph Manifest Definition The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: @@ -84,24 +84,24 @@ dataSources: - Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet -Arweave data sources support two types of handlers: +Arweave data sources support two types of handlers: - `blockHandlers` - Run on every new Arweave block. No source.owner is required. - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` -> The source.owner can be the owner's address, or their Public Key. - -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> The source.owner can be the owner's address, or their public key. +> +> Transactions are the building blocks of the Arweave permaweb, and they are objects created by end-users. +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
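The `source.owner` matching rule described above can be sketched as a small TypeScript predicate (a hypothetical helper for illustration, not part of `graph-ts`): an empty string matches every transaction, otherwise the transaction's owner must equal the configured value.

```typescript
// Hypothetical predicate illustrating transactionHandlers filtering:
// a source.owner of "" processes all transactions; otherwise only
// transactions whose owner (address or public key) matches the
// configured source.owner are handled.
function shouldHandleTransaction(txOwner: string, sourceOwner: string): boolean {
  return sourceOwner === "" || txOwner === sourceOwner;
}
```

This is only the filtering semantics; the actual dispatch happens inside Graph Node, not in your mapping code.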
## Schema Definition Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -## AssemblyScript Mappings +## AssemblyScript Mappings The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -158,11 +158,11 @@ Once your Subgraph has been created on your Subgraph Studio dashboard, you can d graph deploy --access-token ``` -## Querying an Arweave Subgraph +## Querying an Arweave Subgraph The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## Example Subgraphs Here is an example Subgraph for reference: @@ -174,19 +174,19 @@ Here is an example Subgraph for reference: No, a Subgraph can only support data sources from one chain/network. -### Can I index the stored files on Arweave? +### Can I index the stored files on Arweave? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). ### Can I identify Bundlr bundles in my Subgraph? -This is not currently supported. +This is not currently supported. -### How can I filter transactions to a specific account? +### How can I filter transactions to a specific account? -The source.owner can be the user's public key or account address. +The source.owner can be the user's public key or account address. -### What is the current encryption format? +### What is the current encryption format? Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex.
block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). diff --git a/website/src/pages/sv/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/sv/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..7da81474c9ad 100644 --- a/website/src/pages/sv/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/sv/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Overview -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2.
Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +To list the chains you have added, run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +or ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3.
Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/sv/subgraphs/guides/enums.mdx b/website/src/pages/sv/subgraphs/guides/enums.mdx index 9f55ae07c54b..3b90caab564e 100644 --- a/website/src/pages/sv/subgraphs/guides/enums.mdx +++ b/website/src/pages/sv/subgraphs/guides/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## Additional Resources For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/sv/subgraphs/guides/grafting.mdx b/website/src/pages/sv/subgraphs/guides/grafting.mdx index d9abe0e70d2a..d88057cdac80 100644 --- a/website/src/pages/sv/subgraphs/guides/grafting.mdx +++ b/website/src/pages/sv/subgraphs/guides/grafting.mdx @@ -1,46 +1,46 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: Replace a Contract and Keep its History With Grafting --- In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. -## What is Grafting? +## What is Grafting? Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it.
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces +- It changes for which entity types an interface is implemented -For more information, you can check: +For more information, you can check: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. -## Important Note on Grafting When Upgrading to the Network +## Important Note on Grafting When Upgrading to the Network > **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network -### Why Is This Important? +### Why Is This Important? Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. -### Best Practices +### Best Practices **Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.
**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. -By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +By adhering to these guidelines, you minimize risks and ensure a smoother migration process. -## Building an Existing Subgraph +## Building an Existing Subgraph Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: @@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h > Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). -## Subgraph Manifest Definition +## Subgraph Manifest Definition The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: @@ -83,7 +83,7 @@ dataSources: - The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. -## Grafting Manifest Definition +## Grafting Manifest Definition Grafting requires adding two new items to the original Subgraph manifest: @@ -101,7 +101,7 @@ graft: The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting -## Deploying the Base Subgraph +## Deploying the Base Subgraph 1.
Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` 2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +It returns something like this: ``` { @@ -140,9 +140,9 @@ It returns something like this: Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. -## Deploying the Grafting Subgraph +## Deploying the Grafting Subgraph -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` 2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +It should return the following: ``` { @@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph- Congrats! You have successfully grafted a Subgraph onto another Subgraph.
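For quick reference, the two graft-specific items added to the manifest in this walkthrough sit at the top level of `subgraph.yaml`; a minimal sketch (the deployment ID and block number below are placeholders you must replace with your own values):

```yaml
features:
  - grafting # grafting must be declared as a feature
graft:
  base: Qm... # Deployment ID of the base Subgraph
  block: 5956000 # block of the last event you care about
```

The `base` deployment ID comes from Subgraph Studio, and `block` should be at or after the last relevant event of the old contract.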
-## Additional Resources +## Additional Resources If you want more experience with grafting, here are a few examples for popular contracts: diff --git a/website/src/pages/sv/subgraphs/guides/near.mdx b/website/src/pages/sv/subgraphs/guides/near.mdx index e78a69eb7fa2..d766a44ad511 100644 --- a/website/src/pages/sv/subgraphs/guides/near.mdx +++ b/website/src/pages/sv/subgraphs/guides/near.mdx @@ -1,10 +1,10 @@ --- -title: Building Subgraphs on NEAR +title: Building Subgraphs on NEAR --- This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). -## What is NEAR? +## What is NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. @@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account [From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point.
-## Building a NEAR Subgraph +## Building a NEAR Subgraph `@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. @@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -### Subgraph Manifest Definition +### Subgraph Manifest Definition The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: @@ -85,7 +85,7 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +NEAR data sources support two types of handlers: - `blockHandlers`: run on every new NEAR block. No `source.account` is required. - `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). @@ -94,7 +94,7 @@ NEAR data sources support two types of handlers: Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -### AssemblyScript Mappings +### AssemblyScript Mappings The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -169,7 +169,7 @@ Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/g This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs.
A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. -## Deploying a NEAR Subgraph +## Deploying a NEAR Subgraph Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). @@ -191,14 +191,14 @@ $ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ ``` -### Local Graph Node (based on default configuration) +### Local Graph Node (based on default configuration) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 @@ -216,21 +216,21 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. You can } ``` -### Indexing NEAR with a Local Graph Node +### Indexing NEAR with a Local Graph Node -Running a Graph Node that indexes NEAR has the following operational requirements: +Running a Graph Node that indexes NEAR has the following operational requirements: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- NEAR Indexer Framework with Firehose instrumentation +- NEAR Firehose Component(s) +- Graph Node with Firehose endpoint configured -We will provide more information on running the above components soon. +We will provide more information on running the above components soon. -## Querying a NEAR Subgraph +## Querying a NEAR Subgraph The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## Example Subgraphs Here are some example Subgraphs for reference: @@ -240,7 +240,7 @@ Here are some example Subgraphs for reference: ## FAQ -### How does the beta work? +### How does the beta work?
NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! @@ -250,9 +250,9 @@ No, a Subgraph can only support data sources from one chain/network. ### Can Subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. -### Will receipt handlers trigger for accounts and their sub-accounts? +### Will receipt handlers trigger for accounts and their sub-accounts? If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: @@ -264,11 +264,11 @@ accounts: ### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? -This is not supported. We are evaluating whether this functionality is required for indexing. +This is not supported. We are evaluating whether this functionality is required for indexing. ### Can I use data source templates in my NEAR Subgraph? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +This is not currently supported. We are evaluating whether this functionality is required for indexing. ### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
@@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. In the interim, y If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. -## References +## References - [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/sv/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/sv/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..f90b30ccdd8c 100644 --- a/website/src/pages/sv/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/sv/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## Overview We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/src/pages/sv/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/sv/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..805a904c7ba9 --- /dev/null +++ b/website/src/pages/sv/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+ +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+ +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but entities composed on top of them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## Getting Started + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
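As a back-of-the-envelope illustration of the block-time calculation from Step 1, block time is simply the difference between consecutive block timestamps. A hedged TypeScript sketch (the function name and in-memory input are illustrative; a real source Subgraph would do this inside a block handler by comparing against the previously stored block entity):

```typescript
// Block time = timestamp delta between consecutive blocks.
// Input: block timestamps in ascending block-height order.
function blockTimes(timestamps: number[]): number[] {
  const deltas: number[] = [];
  for (let i = 1; i < timestamps.length; i++) {
    deltas.push(timestamps[i] - timestamps[i - 1]);
  }
  return deltas;
}
```

The composed Subgraph can then aggregate these per-block deltas (for example into averages) alongside the cost and size entities from the other source Subgraphs.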
diff --git a/website/src/pages/sv/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/sv/subgraphs/guides/subgraph-debug-forking.mdx
index 91aa7484d2ec..75bff8ee89a8 100644
--- a/website/src/pages/sv/subgraphs/guides/subgraph-debug-forking.mdx
+++ b/website/src/pages/sv/subgraphs/guides/subgraph-debug-forking.mdx
@@ -1,22 +1,22 @@
 ---
-title: Quick and Easy Subgraph Debugging Using Forks
+title: Snabb och enkel subgraf felsökning med gafflar
 ---
 
 As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
 
-## Ok, what is it?
+## Ok, vad är det?
 
 **Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
 
 In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
 
-## What?! How?
+## Vad?! Hur?
 
 When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
 
 In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
 
-## Please, show me some code!
+## Snälla, visa mig lite kod!
 
 To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
 
@@ -46,31 +46,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
 
 Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
 
-The usual way to attempt a fix is:
+Det vanliga sättet att försöka fixa är:
 
-1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+1. Gör en förändring i mappningskällan, som du tror kommer att lösa problemet (även om jag vet att det inte kommer att göra det).
 2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
-3. Wait for it to sync-up.
-4. If it breaks again go back to 1, otherwise: Hooray!
+3. Vänta tills det synkroniseras.
+4. Om den går sönder igen gå tillbaka till 1, annars: Hurra!
 
 It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
 
 Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
 
 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
-1. Make a change in the mappings source, which you believe will solve the issue.
+1. Gör en ändring i mappningskällan som du tror kommer att lösa problemet.
 2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
-3. If it breaks again, go back to 1, otherwise: Hooray!
+3. Om den går sönder igen, gå tillbaka till 1, annars: Hurra!
 
-Now, you may have 2 questions:
+Nu kanske du har 2 frågor:
 
-1. fork-base what???
-2. Forking who?!
-And I answer:
+Och jag svarar:
 
 1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
-2. Forking is easy, no need to sweat:
+2. Gaffling är lätt, du behöver inte svettas:
 
 ```bash
 $ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
@@ -78,7 +78,7 @@
 
 Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
 
-So, here is what I do:
+Så här är vad jag gör:
 
 1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
 
diff --git a/website/src/pages/sv/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/sv/subgraphs/guides/subgraph-uncrashable.mdx
index a08e2a7ad8c9..9b0652bf1a85 100644
--- a/website/src/pages/sv/subgraphs/guides/subgraph-uncrashable.mdx
+++ b/website/src/pages/sv/subgraphs/guides/subgraph-uncrashable.mdx
@@ -1,10 +1,10 @@
 ---
-title: Safe Subgraph Code Generator
+title: Säker subgraf kodgenerator
 ---
 
 [Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
 
-## Why integrate with Subgraph Uncrashable?
+## Varför integrera med Subgraf Uncrashable?
 
 - **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
@@ -16,11 +16,11 @@ title: Safe Subgraph Code Generator
 
 - The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
 
-- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+- Ramverket innehåller också ett sätt (via konfigurationsfilen) att skapa anpassade, men säkra, sätterfunktioner för grupper av entitetsvariabler. På så sätt är det omöjligt för användaren att ladda/använda en inaktuell grafenhet och det är också omöjligt att glömma att spara eller ställa in en variabel som krävs av funktionen.
 
 - Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
 
-Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+Subgraph Uncrashable kan köras som en valfri flagga med kommandot Graph CLI codegen.
 
 ```sh
 graph codegen -u [options] []
diff --git a/website/src/pages/sv/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/sv/subgraphs/guides/transfer-to-the-graph.mdx
index a62072c48373..de3e762e2d40 100644
--- a/website/src/pages/sv/subgraphs/guides/transfer-to-the-graph.mdx
+++ b/website/src/pages/sv/subgraphs/guides/transfer-to-the-graph.mdx
@@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n
 
 ## Upgrade Your Subgraph to The Graph in 3 Easy Steps
 
-1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
-2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
-3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
+1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
 
 ## 1. Set Up Your Studio Environment
 
@@ -74,7 +74,7 @@ graph deploy --ipfs-hash
 
 You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
 
-#### Example
+#### Exempel
 
 [CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
 
@@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the
 
 Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
 
-### Additional Resources
+### Ytterligare resurser
 
 - To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
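The `fork-base` mechanics the forking guide above describes amount to simple URL concatenation; the sketch below illustrates it with a hypothetical subgraph ID (the `fork-base` value is the one the guide itself uses):

```python
# `fork-base` is the base URL; appending the subgraph ID yields the
# GraphQL endpoint of the remote store that Graph Node will fork from.
fork_base = "https://api.thegraph.com/subgraphs/id/"
subgraph_id = "QmSubgraphIdPlaceholder"  # hypothetical, for illustration only

fork_endpoint = fork_base + subgraph_id
print(fork_endpoint)
# → https://api.thegraph.com/subgraphs/id/QmSubgraphIdPlaceholder
```

This is why the guide insists the `fork-base` flag ends with a trailing slash: Graph Node appends the ID verbatim.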
diff --git a/website/src/pages/sv/subgraphs/querying/best-practices.mdx b/website/src/pages/sv/subgraphs/querying/best-practices.mdx
index 906948273d5f..0ab033858acd 100644
--- a/website/src/pages/sv/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/sv/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: Bästa praxis för förfrågningar
 
 The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
 
-Learn the essential GraphQL language rules and best practices to optimize your subgraph.
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
 
 ---
 
@@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi
 
 However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
 
-- Hantering av subgrafer över olika blockkedjor: Frågehantering från flera subgrafer i en enda fråga
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
 - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
 - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
 - Fullt typad resultat
 
@@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set `
 
 ### Use a single query to request multiple records
 
-By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
 
 Example of inefficient querying:
diff --git a/website/src/pages/sv/subgraphs/querying/from-an-application.mdx b/website/src/pages/sv/subgraphs/querying/from-an-application.mdx
index ee0bb7a2fabe..0784e371cab0 100644
--- a/website/src/pages/sv/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/sv/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
 ---
 title: Att göra förfrågningar från en Applikation
+sidebarTitle: Querying from an App
 ---
 
 Learn how to query The Graph from your application.
 
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d
 
 ### Subgraph Studio Endpoint
 
-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
 
 ```
 https://api.studio.thegraph.com/query///
@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///
 
 ### The Graph Network Endpoint
 
-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this: :
 
 ```
 https://gateway.thegraph.com/api//subgraphs/id/
 ```
 
-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
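The pagination and filtering best practices above (an explicit `first`, plus a `where` filter to batch records in a single query) can be sketched as the request an application would actually send. The entity name (`tokens`) and IDs below are hypothetical, for illustration only:

```python
import json

# One query for multiple records: explicit `first`, plural entity,
# and a `where: { id_in: [...] }` filter instead of one request per record.
query = """
{
  tokens(first: 10, where: { id_in: ["0xaaa", "0xbbb", "0xccc"] }) {
    id
    symbol
  }
}
"""

# GraphQL over HTTP is a plain POST whose JSON body carries the query;
# any HTTP client can send `payload` to the Subgraph's query URL.
payload = json.dumps({"query": query})
```

The same payload shape works with `fetch`, `graph-client`, Apollo, or URQL, since they all speak GraphQL over HTTP underneath.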
 ## Using Popular GraphQL Clients
 
@@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/
 
 The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as:
 
-- Hantering av subgrafer över olika blockkedjor: Frågehantering från flera subgrafer i en enda fråga
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
 - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
 - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
 - Fullt typad resultat
 
@@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq
 
 ### Fetch Data with Graph Client
 
-Let's look at how to fetch data from a subgraph with `graph-client`:
+Let's look at how to fetch data from a Subgraph with `graph-client`:
 
 #### Steg 1
 
@@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on
 
 ### Fetch Data with Apollo Client
 
-Let's look at how to fetch data from a subgraph with Apollo client:
+Let's look at how to fetch data from a Subgraph with Apollo client:
 
 #### Steg 1
 
@@ -257,7 +258,7 @@ client
 
 ### Fetch data with URQL
 
-Let's look at how to fetch data from a subgraph with URQL:
+Let's look at how to fetch data from a Subgraph with URQL:
 
 #### Steg 1
 
diff --git a/website/src/pages/sv/subgraphs/querying/graph-client/README.md b/website/src/pages/sv/subgraphs/querying/graph-client/README.md
index 416cadc13c6f..ae01284970a6 100644
--- a/website/src/pages/sv/subgraphs/querying/graph-client/README.md
+++ b/website/src/pages/sv/subgraphs/querying/graph-client/README.md
@@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for
 
 | Status | Feature | Notes |
 | :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
-| ✅ | Multiple indexers | based on fetch strategies |
-| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
-| ✅ | Build time validations & optimizations | |
-| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
-| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
-| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
-| ✅ | Local (client-side) Mutations | |
-| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
-| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
-| ✅ | Integration with `@apollo/client` | |
-| ✅ | Integration with `urql` | |
-| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
-| ✅ | [`@live` queries](./live.md) | Based on polling |
+| ✅ | Multiple indexers | based on fetch strategies |
+| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
+| ✅ | Build time validations & optimizations | |
+| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
+| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
+| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
+| ✅ | Local (client-side) Mutations | |
+| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
+| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
+| ✅ | Integration with `@apollo/client` | |
+| ✅ | Integration with `urql` | |
+| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
+| ✅ | [`@live` queries](./live.md) | Based on polling |
 
 > You can find an [extended architecture design here](./architecture.md)
 
-## Getting Started
+## Komma igång
 
 You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client:
 
@@ -138,7 +138,7 @@ graphclient serve-dev
 
 And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳
 
-#### Examples
+#### Exempel
 
 You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples:
 
@@ -308,8 +308,8 @@ sources:
 `highestValue`
-
-  This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
+
+This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
 
 This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources.
 
diff --git a/website/src/pages/sv/subgraphs/querying/graph-client/live.md b/website/src/pages/sv/subgraphs/querying/graph-client/live.md
index e6f726cb4352..00053b724be0 100644
--- a/website/src/pages/sv/subgraphs/querying/graph-client/live.md
+++ b/website/src/pages/sv/subgraphs/querying/graph-client/live.md
@@ -2,7 +2,7 @@
 
 Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data.
 
-## Getting Started
+## Komma igång
 
 Start by adding the following configuration to your `.graphclientrc.yml` file:
 
diff --git a/website/src/pages/sv/subgraphs/querying/graphql-api.mdx b/website/src/pages/sv/subgraphs/querying/graphql-api.mdx
index e4c1fbcb94b3..535f48fc7790 100644
--- a/website/src/pages/sv/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/sv/subgraphs/querying/graphql-api.mdx
@@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph.
 
 ## What is GraphQL?
 
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
 
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/).
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
 
 ## Queries with GraphQL
 
-In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
 
 > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
 
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
 
 You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
 
-Detta kan vara användbart om du bara vill hämta enheter som har ändrats, till exempel sedan den senaste gången du pollade. Eller alternativt kan det vara användbart för att undersöka eller felsöka hur enheter förändras i din undergraf (om det kombineras med ett blockfilter kan du isolera endast enheter som ändrades i ett visst block).
+This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
 
 ```graphql
 {
@@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application`
 
 ### Fulltextsökförfrågningar
 
-Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
 
 Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
 
@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021
 
 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
 
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
 
 > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
 
@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en
 
 ### Metadata för undergrafer
 
-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
 
 ```graphQL
 {
@@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
 }
 ```
 
-Om ett block anges är metadata från det blocket, om inte används det senast indexerade blocket. Om det anges måste blocket vara efter undergrafens startblock och mindre än eller lika med det senast indexerade blocket.
+If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
 
 `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
 
@@ -427,6 +427,6 @@ Om ett block anges är metadata från det blocket, om inte används det senast i
 
 - hash: blockets hash
 - nummer: blockets nummer
-- timestamp: blockets timestamp, om tillgänglig (detta är för närvarande endast tillgängligt för undergrafer som indexerar EVM-nätverk)
+- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
 
-`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block
+`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
diff --git a/website/src/pages/sv/subgraphs/querying/introduction.mdx b/website/src/pages/sv/subgraphs/querying/introduction.mdx
index 5434f06414fb..7b3c151bdbbd 100644
--- a/website/src/pages/sv/subgraphs/querying/introduction.mdx
+++ b/website/src/pages/sv/subgraphs/querying/introduction.mdx
@@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex
 
 ## Översikt
 
-When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph.
 
 ## Specifics
 
-Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner.
+Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner.
 
 ![Query Subgraph Button](/img/query-button-screenshot.png)
 
@@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an
 
 Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/).
 
-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities.
+> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities.
 >
 > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead.
diff --git a/website/src/pages/sv/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/sv/subgraphs/querying/managing-api-keys.mdx
index 3c3ad4ba152e..594527795da0 100644
--- a/website/src/pages/sv/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/sv/subgraphs/querying/managing-api-keys.mdx
@@ -1,14 +1,14 @@
 ---
-title: Hantera dina API-nycklar
+title: Managing API keys
 ---
 
 ## Översikt
 
-API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
 
 ### Create and Manage API Keys
 
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs.
+Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
 
 The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
 
@@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page:
    - Mängd GRT spenderad
 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
    - Visa och hantera domännamn som har auktoriserats att använda din API-nyckel
-   - Koppla subgrafer som kan frågas med din API-nyckel
+   - Assign Subgraphs that can be queried with your API key
diff --git a/website/src/pages/sv/subgraphs/querying/python.mdx b/website/src/pages/sv/subgraphs/querying/python.mdx
index 213b45f144b3..3a987546c454 100644
--- a/website/src/pages/sv/subgraphs/querying/python.mdx
+++ b/website/src/pages/sv/subgraphs/querying/python.mdx
@@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds
 sidebarTitle: Python (Subgrounds)
 ---
 
-Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
+Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
 
 Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations.
 
@@ -17,14 +17,14 @@ pip install --upgrade subgrounds
 python -m pip install --upgrade subgrounds
 ```
 
-Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
 ```python
 from subgrounds import Subgrounds
 
 sg = Subgrounds()
-# Load the subgraph
+# Load the Subgraph
 aave_v2 = sg.load_subgraph(
     "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")
 
diff --git a/website/src/pages/sv/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/sv/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 103e470e14da..17258dd13ea1 100644
--- a/website/src/pages/sv/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/sv/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,17 +2,17 @@
 title: Subgraph ID vs Deployment ID
 ---
 
-A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID.
+A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
 
-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph.
+When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
 
 Here are some key differences between the two IDs:
 
 ![](/img/subgraph-id-vs-deployment-id.png)
 
 ## Deployment ID
 
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. 
-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/sv/subgraphs/quick-start.mdx b/website/src/pages/sv/subgraphs/quick-start.mdx index b959329363d9..f3fba67ef0d7 100644 --- a/website/src/pages/sv/subgraphs/quick-start.mdx +++ b/website/src/pages/sv/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Snabbstart --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. 
Installera Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. 
- **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -Se följande skärmdump för ett exempel för vad du kan förvänta dig när du initierar din subgraf: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. 
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -När din subgraf är skriven, kör följande kommandon: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. 
The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. 
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.

-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.

 #### Publishing with Subgraph Studio

-To publish your subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard.

-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png)
+![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png)

-Select the network to which you would like to publish your subgraph.
+Select the network to which you would like to publish your Subgraph.

 #### Publishing from the CLI

-As of version 0.73.0, you can also publish your subgraph with the Graph CLI.
+As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.

 Open the `graph-cli`.

@@ -157,32 +157,32 @@ graph publish
 ```
 ````

-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

 ![cli-ui](/img/cli-ui.png)

 To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). 
+For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/sv/substreams/developing/dev-container.mdx b/website/src/pages/sv/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/sv/substreams/developing/dev-container.mdx +++ b/website/src/pages/sv/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. 
For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/sv/substreams/developing/sinks.mdx b/website/src/pages/sv/substreams/developing/sinks.mdx index 5ff37a31d943..3b278edbc8fe 100644 --- a/website/src/pages/sv/substreams/developing/sinks.mdx +++ b/website/src/pages/sv/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/sv/substreams/developing/solana/account-changes.mdx b/website/src/pages/sv/substreams/developing/solana/account-changes.mdx index 7e45ea961e5e..37c0b7d5abcb 100644 --- a/website/src/pages/sv/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/sv/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). 
If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/sv/substreams/developing/solana/transactions.mdx b/website/src/pages/sv/substreams/developing/solana/transactions.mdx index b6f8cbc3b345..dcd19e9de276 100644 --- a/website/src/pages/sv/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/sv/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraf 1. 
Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/sv/substreams/introduction.mdx b/website/src/pages/sv/substreams/introduction.mdx index 1c263c32d747..c12627982ad6 100644 --- a/website/src/pages/sv/substreams/introduction.mdx +++ b/website/src/pages/sv/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/sv/substreams/publishing.mdx b/website/src/pages/sv/substreams/publishing.mdx index 0d0bb4856073..21989ed9b73b 100644 --- a/website/src/pages/sv/substreams/publishing.mdx +++ b/website/src/pages/sv/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? 
-A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/sv/supported-networks.mdx b/website/src/pages/sv/supported-networks.mdx index 7e335314ad2d..01776006c980 100644 --- a/website/src/pages/sv/supported-networks.mdx +++ b/website/src/pages/sv/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). 
## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/sv/token-api/_meta-titles.json b/website/src/pages/sv/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/sv/token-api/_meta-titles.json +++ b/website/src/pages/sv/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/sv/token-api/_meta.js b/website/src/pages/sv/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/sv/token-api/_meta.js +++ b/website/src/pages/sv/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/sv/token-api/faq.mdx b/website/src/pages/sv/token-api/faq.mdx new file mode 100644 index 000000000000..8a5f3bbd358a --- /dev/null +++ b/website/src/pages/sv/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. 
+ +## Allmän + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. 
+
+### Is there a known list of LLMs that work with the API?
+
+Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server.
+
+Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter).
+
+### Where can I find the MCP client?
+
+You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client).
+
+## Advanced Topics
+
+### I'm getting 403/401 errors. What's wrong?
+
+Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed).
For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? 
+ +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. 
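Several of the FAQ answers above (the Bearer JWT header, the optional `network_id`/`limit`/`page` parameters, the top-level `data` wrapper, and string-encoded amounts) can be sketched in one short TypeScript example. This is illustrative only, not official client code: the `/balances/evm/{address}` path and parameter names come from the answers above, while the sample response's `amount`/`decimals` field names and the `MY_JWT` value are placeholders assumed for illustration.

```typescript
// Illustrative sketch per the FAQ answers above; field names in `sample`
// and the JWT value are assumptions, not guaranteed API shapes.
const BASE = "https://token-api.thegraph.com";

function balancesRequest(
  address: string,
  token: string,
  networkId = "mainnet",
  limit = 50,
  page = 1,
): { url: string; headers: Record<string, string> } {
  const params = new URLSearchParams({
    network_id: networkId,     // defaults to Ethereum mainnet if omitted
    limit: String(limit),      // up to 500 items per page
    page: String(page),        // pages are 1-indexed
  });
  return {
    url: `${BASE}/balances/evm/${address}?${params}`,
    headers: {
      Authorization: `Bearer ${token}`, // JWT from The Graph Market
      Accept: "application/json",       // recommended, not strictly required
    },
  };
}

// Responses wrap results in a top-level "data" array, and large amounts are
// strings to avoid precision loss — use BigInt, not Number, for the math.
interface BalanceEntry { amount: string; decimals: number; }

function humanAmount(entry: BalanceEntry): string {
  const raw = BigInt(entry.amount);
  const base = 10n ** BigInt(entry.decimals);
  const frac = (raw % base).toString().padStart(entry.decimals, "0");
  return `${raw / base}.${frac}`;
}

const { url, headers } = balancesRequest(
  "0x0000000000000000000000000000000000000000", // placeholder address
  "MY_JWT",
); // GET this url with these headers using any HTTP client

const sample = { data: [{ amount: "123456789012345678901", decimals: 6 }] };
console.log(humanAmount(sample.data[0])); // "123456789012345.678901"
```

Using `BigInt` rather than `Number` matters here: `123456789012345678901` exceeds `Number.MAX_SAFE_INTEGER`, so a float conversion would silently lose digits.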
diff --git a/website/src/pages/sv/token-api/mcp/claude.mdx b/website/src/pages/sv/token-api/mcp/claude.mdx index 0da8f2be031d..bc3dbe28ecb3 100644 --- a/website/src/pages/sv/token-api/mcp/claude.mdx +++ b/website/src/pages/sv/token-api/mcp/claude.mdx @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## Konfiguration Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/sv/token-api/mcp/cline.mdx b/website/src/pages/sv/token-api/mcp/cline.mdx index ab54c0c8f6f0..15c9980df7a6 100644 --- a/website/src/pages/sv/token-api/mcp/cline.mdx +++ b/website/src/pages/sv/token-api/mcp/cline.mdx @@ -10,9 +10,9 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. - The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## Konfiguration Create or edit your `cline_mcp_settings.json` file. 
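The cline.mdx hunk above ends at "Create or edit your `cline_mcp_settings.json` file." without showing the file body. As a hedged sketch only — assuming Cline's settings mirror the `claude_desktop_config.json` structure shown earlier in this diff, which is an assumption about Cline's expected schema, with `<YOUR_JWT>` as a placeholder — the entry would look like:

```json label="cline_mcp_settings.json"
{
  "mcpServers": {
    "token-api": {
      "command": "npx",
      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
      "env": {
        "ACCESS_TOKEN": "<YOUR_JWT>"
      }
    }
  }
}
```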
diff --git a/website/src/pages/sv/token-api/mcp/cursor.mdx b/website/src/pages/sv/token-api/mcp/cursor.mdx index 658108d1337b..1364cca2cca5 100644 --- a/website/src/pages/sv/token-api/mcp/cursor.mdx +++ b/website/src/pages/sv/token-api/mcp/cursor.mdx @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## Konfiguration Create or edit your `~/.cursor/mcp.json` file. diff --git a/website/src/pages/sv/token-api/quick-start.mdx b/website/src/pages/sv/token-api/quick-start.mdx index 4653c3d41ac6..db512ba0d7f8 100644 --- a/website/src/pages/sv/token-api/quick-start.mdx +++ b/website/src/pages/sv/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Snabbstart --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/tr/about.mdx b/website/src/pages/tr/about.mdx index 775696c41265..3b1dce5a5617 100644 --- a/website/src/pages/tr/about.mdx +++ b/website/src/pages/tr/about.mdx @@ -30,25 +30,25 @@ Finalite, zincir yeniden organizasyonu ve "uncle" bloklar gibi blokzinciri özel ## The Graph'in Sağladığı Çözüm -The Graph, blokzinciri verilerini endeksleyip verimli, yüksek performanslı sorgulama imkanı sunan merkeziyetsiz bir protokol ile bu zorluğu çözer. Bu endekslenmiş API'lar ("subgraph'ler"), standart bir GraphQL API'ı ile sorgulanabilir. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Artık, bu süreci mümkün kılan, [Graph Düğümü](https://github.com/graphprotocol/graph-node)'nün açık kaynaklı implementasyonuna dayanan merkeziyetsiz bir protokol mevcut. ### The Graph'in Çalışma Şekli -Blokzinciri verilerini endekslemek oldukça zordur, ancak The Graph bunu kolaylaştırır. 
The Graph, Ethereum verilerini nasıl endeksleyeceğini subgraph'ler kullanarak öğrenir. Subgraph'ler, blokzinciri verileri üzerine kurulu özel yapım API'lerdir; bu API'ler blokzincirinden veriyi çıkarır, işler ve sorguların GraphQL ile sorunsuz bir şekilde yapılabilmesi için depolar. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Ayrıntılar -- The Graph, subgraph tanımlarını kullanır; bu tanımlar subgraph içinde subgraph manifestosu olarak bilinir. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- Subgraph tanımı, bir subgraph için ilgili akıllı sözleşmeleri, bu sözleşmelerde odaklanılacak olayları ve bu olay verilerinin The Graph'in veritabanında depolayacağı verilere nasıl eşleneceğini açıklar. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- Subgraph oluştururken bir subgraph manifestosu yazmanız gerekir. +- When creating a Subgraph, you need to write a Subgraph manifest. -- `Subgraph manifestosunu` yazdıktan sonra, Graph CLI'yi kullanarak tanımı IPFS'e depolayabilir ve bir Indexer'a bu subgraph için veri endekslemeye başlaması talimatını verebilirsiniz. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -Aşağıdaki diyagramda, subgraph manifestosunun Ethereum blokzinciri üzerinde yapılan işlemler aracılığıyla yayına alınmasından sonra veri akışının nasıl ilerlediğine dair daha detaylı bilgi bulabilirsiniz.
+The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![The Graph'in, Graph Düğümü'nü kullanarak veri tüketicilerine sorgu sunma sürecini açıklayan bir grafik](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Veri akışı şu şekildedir: 1. Bir dapp, bir akıllı sözleşme üzerinde işlem yaparak Ethereum'a veri ekler. 2. Akıllı sözleşme, işlemi işlerken bir veya daha fazla olay yayımlar. -3. Graph Düğümü, Ethereum blokzincirini yeni blokları sürekli olarak tarar ve blokların subgraph'iniz için endekslenmesi gereken verileri içerip içermediğini kontrol eder. -4. Graph Düğümü, bu bloklarda subgraph'iniz için Ethereum olaylarını bulur ve sağladığınız eşleme işleyicilerini (mapping handler) çalıştırır. Eşleme (mapping), Ethereum olaylarına karşılık olarak Graph Düğümünün depoladığı veri varlıklarını oluşturan veya güncelleyen bir WASM modülüdür. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Dapp, blokzincirinden endekslenen veriler için Graph Düğümüne, düğümün [GraphQL uç noktası](https://graphql.org/learn/) üzerinden sorgu gönderir. Graph Düğümü ise veriyi getirmek için bu sorguları kendi veri deposuna yönelik sorgulara çevirir ve depolama sisteminin endeksleme kabiliyetlerini kullanarak bu verileri alır. Dapp, bu verileri son kullanıcılar için zengin bir arayüzde gösterir ve kullanıcılar bu arayüzü kullanarak Ethereum'da yeni işlemler gerçekleştirir. Bu döngü tekrarlanır. ## Sonraki Adımlar -Sonraki bölümler, subgraph'lere, yayına alınmalarına ve veri sorgulama sürecine daha derin bir bakış sunmaktadır. 
+The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Kendi subgraph'inizi yazmadan önce, [Graph Explorer](https://thegraph.com/explorer)'ı keşfetmeniz ve halihazırda yayına alınmış bazı subgraph'leri incelemeniz önerilir. Her subgraph'in sayfasında bir GraphQL playground bulunur. Bu aracı kullanarak subgraph'in verilerine erişebilir ve sorgulamalar yapabilirsiniz. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/tr/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/tr/archived/arbitrum/arbitrum-faq.mdx index ca32d52975dc..eeb1e61127b5 100644 --- a/website/src/pages/tr/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/tr/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ Ağ katılımcıları, The Graph'i L2 üzerinde ölçeklendirerek şunlardan fay - Ethereum'dan aktarılmış güvenlik -Protokol akıllı sözleşmelerini L2’ye ölçeklendirmek, ağ katılımcılarının daha düşük gas ücretleriyle daha sık etkileşimde bulunmasına olanak tanır. Örneğin, Endeksleyiciler daha fazla subgraph endekslemek için tahsisleri daha sık açıp kapatabilir. Geliştiriciler, subgraph’leri daha kolay bir şekilde dağıtabilir ve güncelleyebilir. Delegatörler, GRT’yi daha sık bir şekilde delege edebilir. Küratörler, daha fazla sayıda subgraph’e sinyal ekleyebilir veya kaldırabilir. Böylece önceden gas maliyetleri nedeniyle sık yapılması ekonomik olmayan işlemler artık mümkün hale gelir. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. 
Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Graph topluluğu, geçen yıl [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) tartışmasının sonucuna göre Arbitrum ile çalışmaya karar verdi. @@ -39,7 +39,7 @@ Graph'ı Katman2'de kullanmanın avantajlarından yararlanmak için, zincirler a ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Bir subgraph geliştirici, veri tüketicisi, Endeksleyici, Küratör veya Delegatör olarak şimdi ne yapmalıyım? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ Tüm akıllı sözleşmeler kapsamlı bir şekilde [denetlenmiştir](https://git Güvenli ve sorunsuz bir geçiş sağlamak için her şey kapsamlı bir şekilde test edilmiş ve bir acil durum planı hazırlanmıştır. Ayrıntıları [burada](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20) bulabilirsiniz. -## Ethereum üzerindeki mevcut subgraph'ler çalışıyor mu? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## GRT'nin Arbitrum'da yeni bir akıllı sözleşmesi mi var? 
diff --git a/website/src/pages/tr/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/tr/archived/arbitrum/l2-transfer-tools-faq.mdx index 709689c6ca55..e82d00f0809b 100644 --- a/website/src/pages/tr/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/tr/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ Bunun istisnası çoklu imza gibi akıllı sözleşme cüzdanlarıdır. Bunlar h L2 Transfer Araçları, Katman1'den Katman2'ye mesaj göndermek için Arbitrum'un yerel mekanizmasını kullanır. Bu mekanizma "yeniden denenebilir bilet" olarak adlandırılır ve Arbitrum GRT köprüsü de dahil olmak üzere tüm yerel token köprüleri tarafından kullanılır. Tekrar denenebilir biletler hakkında daha fazla bilgiyi [Arbitrum dökümantasyonunda] (https://docs.arbitrum.io/arbos/l1-to-l2-messaging) okuyabilirsiniz. -Varlıklarınızı (subgraph, stake, delegasyon veya kürasyon) Katman2'ye aktardığınızda, Katman2'de yeniden denenebilir bir bilet oluşturan Arbitrum GRT köprüsü aracılığıyla bir mesaj gönderilir. Transfer aracı, işlemde 1) bileti oluşturmak için ödeme yapmak ve 2) bileti Katman2'de yürütmek üzere gas için ödeme yapmak amacıyla kullanılan bir miktar ETH içerir. Ancak, bilet Katman2'de yürütülmeye hazır olana kadar geçen sürede gas fiyatları değişebileceğinden ötürü, bu otomatik yürütme girişiminin başarısız olma ihtimali vardır. Bu durumda, Arbitrum köprüsü yeniden denenebilir bileti 7 güne kadar kullanılabilir tutacaktır ve herkes bileti "kullanmayı" yeniden deneyebilir (bunun için Arbitrum'a köprülenmiş bir miktar ETH'ye sahip bir cüzdan gereklidir). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, which is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2.
However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Bu, tüm transfer araçlarında "Onayla" adımı olarak adlandırdığımız adımdır - otomatik yürütme çoğu zaman başarılı olduğu için çoğu durumda otomatik olarak çalışacaktır, ancak başarılı bir şekilde gerçekleştiğinden emin olmak için tekrar kontrol etmeniz önemlidir. Başarılı olmazsa ve 7 gün içinde başarılı bir yeniden deneme gerçekleşmezse, Arbitrum köprüsü bileti iptal edecek ve varlıklarınız (subgraph, stake, delegasyon veya kürasyon) kaybolacak ve kurtarılamayacaktır. Graph çekirdek geliştiricileri bu durumları tespit etmek ve çok geç olmadan biletleri kurtarmaya çalışmak için bir izleme sistemine sahiptir, ancak transferinizin zamanında tamamlanmasını sağlamak nihayetinde sizin sorumluluğunuzdadır. İşleminizi onaylamakta sorun yaşıyorsanız, lütfen [bu formu](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) kullanarak bize ulaşın; çekirdek geliştiriciler size yardımcı olacaktır. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. 
If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### Delegasyon/stake/kürasyon transferimi başlattım ve Katman2'ye ulaşıp ulaşmadığından emin değilim, doğru şekilde transfer edilip edilmediğini nasıl teyit edebilirim? @@ -36,43 +36,43 @@ Katman1 işlem hash'ına sahipseniz (cüzdanınızdaki son işlemlere bakarak bu ## Subgraph Transferi -### Subgraph'ımı nasıl transfer edebilirim? +### How do I transfer my Subgraph? -Subgraph'ınızı transfer etmek için aşağıdaki adımları tamamlamanız gerekecektir: +To transfer your Subgraph, you will need to complete the following steps: 1. Ethereum ana ağında transferi başlatın 2. Onaylanması için 20 dakika bekleyin -3. Arbitrum\* üzerinde subgraph transferini onaylayın +3. Confirm Subgraph transfer on Arbitrum\* -4. Arbitrum üzerinde subgraph'ı yayınlamayı bitirin +4. Finish publishing Subgraph on Arbitrum 5. Sorgu URL'sini Güncelle (önerilir) -\*Transferi 7 gün içinde onaylamanız gerektiğini unutmayın, aksi takdirde subgraph'ınız kaybolabilir. Çoğunlukla, bu adım otomatik olarak çalışacaktır, ancak Arbitrum'da gas fiyatlarında bir artış varsa manuel bir onay gerekebilir. Bu süreç sırasında herhangi bir sorun yaşanırsa, yardımcı olacak kaynaklar olacaktır: support@thegraph.com veya [Discord](https://discord.gg/graphprotocol) üzerinden destek ile iletişime geçin. +\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Transferimi nereden başlatmalıyım?
-Transferinizi [Subgraph Stüdyo](https://thegraph.com/studio/), [Gezgin](https://thegraph.com/explorer) veya herhangi bir subgraph ayrıntıları sayfasından başlatabilirsiniz. Transferi başlatmak için subgraph ayrıntıları sayfasındaki "Subgraph Transfer" butonuna tıklayın. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Subgraph'ım transfer edilene kadar ne kadar beklemem gerekir? +### How long do I need to wait until my Subgraph is transferred? Transfer süresi yaklaşık 20 dakika alır. Arbitrum köprüsü, köprü transferini otomatik olarak tamamlamak için arka planda çalışmaktadır. Bazı durumlarda gaz maliyetleri artabilir ve işlemi tekrar onaylamanız gerekebilir. -### Katman2'ye transfer ettikten sonra subgraph'ım hala keşfedilebilir olacak mı? +### Will my Subgraph still be discoverable after I transfer it to L2? -Subgraph'ınız yalnızca yayınlandığı ağda keşfedilebilir olacaktır. Örneğin, subgraph'ınız Arbitrum One üzerindeyse, onu yalnızca Arbitrum One üzerindeki Gezgin'de bulabilirsiniz, Ethereum'da aradığınızda bulamazsınız. Doğru ağda olduğunuzdan emin olmak için lütfen sayfanın üst kısmındaki ağ değiştiricisinde Arbitrum One'ın seçili olduğundan emin olun. Transferden sonra, Katman1 subgraph'ı kullanımdan kaldırılmış olarak görünecektir. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated. -### Transfer etmek için subgraph'ımın yayınlanmış olması gerekiyor mu?
+### Does my Subgraph need to be published to transfer it? -Subgraph transfer aracından yararlanmak için, subgraph'ınızın Ethereum ana ağı'nda yayınlanmış olması ve subgraph'ın sahibi olan cüzdanın, belirli miktarda kürasyon sinyaline sahip olması gerekmektedir. Eğer subgraph'ınız yayınlanmamışsa, doğrudan Arbitrum One'da yayınlamanız önerilir böylece ilgili gas ücretleri önemli ölçüde daha düşük olacaktır. Yayınlanmış bir subgraph'ı transfer etmek istiyorsanız, ancak sahip hesap üzerinde herhangi bir sinyal kürasyonu yapılmamışsa, bu hesaptan küçük bir miktar (örneğin 1 GRT) sinyal verebilirsiniz; "otomatik geçiş" sinyalini seçtiğinizden emin olun. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Arbitrum'a transfer olduktan sonra subgraph'ımın Ethereum ana ağ versiyonuna ne olur? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Subgraph'ınızı Arbitrum'a transfer ettikten sonra, Ethereum ana ağ versiyonu kullanımdan kaldırılacaktır. Sorgu URL'nizi 48 saat içinde güncellemenizi öneririz. Bununla birlikte, herhangi bir üçüncü taraf merkeziyetsiz uygulama desteğinin güncellenebilmesi için ana ağ URL'nizin çalışmasını sağlayan bir ödemesiz dönem vardır. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. 
However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Transferi tamamladıktan sonra Arbitrum'da da yeniden yayınlamam gerekiyor mu? @@ -80,21 +80,21 @@ Subgraph'ınızı Arbitrum'a transfer ettikten sonra, Ethereum ana ağ versiyonu ### Yeniden yayınlama sırasında uç noktam kesinti yaşar mı? -Olası değildir, fakat Katman1'de hangi İndeksleyicilerin subgraph'ı desteklediğine ve subgraph Katman2'de tam olarak desteklenene kadar indekslemeye devam edip etmediklerine bağlı olarak kısa bir kesinti yaşanması mümkündür. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Yayınlama ve sürüm oluşturma Katman2'de Ethereum ana ağı ile aynı mı? -Evet. Subgraph Stüdyo'da yayınlarken, yayınlanan ağınız olarak Arbitrum One'ı seçin. Stüdyo'da, subgprah'ın en son güncellenmiş sürümüne yönlendiren en son uç nokta mevcut olacaktır. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Subgraph'ımın kürasyonu subgraph'ımla birlikte hareket edecek mi? +### Will my Subgraph's curation move with my Subgraph? -Otomatik geçiş sinyalini seçtiyseniz, kendi kürasyonunuzun %100'ü subgraph'ınızla birlikte Arbitrum One'a taşınacaktır. Subgraph'ın tüm kürasyon sinyali, aktarım sırasında GRT'ye dönüştürülecek ve kürasyon sinyalinize karşılık gelen GRT, Katman2 subgraph'ında sinyal basmak için kullanılacaktır. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. 
-Diğer Küratörler kendilerne ait GRT miktarını geri çekmeyi ya da aynı subgraph üzerinde sinyal basmak için Katman2'ye transfer etmeyi seçebilirler. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Transferden sonra subgraph'ımı Ethereum ana ağı'na geri taşıyabilir miyim? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Transfer edildikten sonra, bu subgraph'ınızın Ethereum ana ağı sürümü kullanımdan kaldırılacaktır. Ana ağa geri dönmek isterseniz, ana ağa yeniden dağıtmanız ve geri yayınlamanız gerekecektir. Öte yandan, indeksleme ödülleri eninde sonunda tamamen Arbitrum One üzerinde dağıtılacağından, Ethereum ana ağına geri transfer kesinlikle önerilmez. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Transferimi tamamlamak için neden köprülenmiş ETH'ye ihtiyacım var? @@ -206,19 +206,19 @@ Kürasyonunuzu transfer etmek için aşağıdaki adımları tamamlamanız gereke \*Gerekliyse - yani bir sözleşme adresi kullanıyorsanız. -### Küratörlüğünü yaptığım subgraph'ın Katman2'ye taşınıp taşınmadığını nasıl bileceğim? +### How will I know if the Subgraph I curated has moved to L2? -Subgraph ayrıntıları sayfasını görüntülerken, bir afiş size bu subgraph'ın transfer edildiğini bildirecektir. Kürasyonunuzu transfer etmek için komut istemini takip edebilirsiniz. Bu bilgiyi taşınan herhangi bir subgraph'ın subgraph ayrıntıları sayfasında da bulabilirsiniz. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. 
You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Kürasyonumu Katman2'ye taşımak istemezsem ne olur? -Bir subgraph kullanımdan kaldırıldığında sinyalinizi geri çekme opsiyonu bulunmaktadır. Benzer şekilde, bir subgraph Katman2'ye taşındıysa, sinyalinizi Ethereum ana ağı'nda geri çekmeyi veya sinyali Katman2'ye göndermeyi seçebilirsiniz. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Kürasyonumun başarıyla transfer edildiğini nasıl bilebilirim? Sinyal ayrıntıları, Katman2 transfer aracı başlatıldıktan yaklaşık 20 dakika sonra Gezgin üzerinden erişilebilir olacaktır. -### Kürasyonumu aynı anda birden fazla subgraph'a transfer edebilir miyim? +### Can I transfer my curation on more than one Subgraph at a time? Şu anda toplu transfer seçeneği bulunmamaktadır. @@ -266,7 +266,7 @@ Katman2 transfer aracının stake'inizi transfer etmeyi tamamlaması yaklaşık ### Stake'imi transfer etmeden önce Arbitrum'da indekslemem gerekiyor mu? -İndekslemeyi oluşturmadan önce hissenizi etkin bir şekilde aktarabilirsiniz, ancak Katman2'deki subgraph'lara tahsis edene, bunları indeksleyene ve POI'leri sunana kadar Katman2'de herhangi bir ödül talep edemezsiniz. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Ben indeksleme stake'imi taşımadan önce Delegatörler delegasyonlarını taşıyabilir mi? 
diff --git a/website/src/pages/tr/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/tr/archived/arbitrum/l2-transfer-tools-guide.mdx index 15b3bfb1004e..949f7e1ca425 100644 --- a/website/src/pages/tr/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/tr/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ Graph, Arbitrum One üzerinde Katman2'ye geçişi kolaylaştırmıştır. Her pr Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Subgraph'ınızı Arbitrum'a nasıl transfer edebilirsiniz (Katman2) +## How to transfer your Subgraph to Arbitrum (L2) -## Subgraphlar'ınızı transfer etmenin faydaları +## Benefits of transferring your Subgraphs Graph topluluğu ve çekirdek geliştiricileri geçtiğimiz yıl boyunca Arbitrum'a geçmek için [hazırlanıyordu] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). Bir katman 2 veya "L2" blok zinciri olan Arbitrum, güvenliği Ethereum'dan devralmakla birlikte büyük ölçüde daha düşük gaz ücretleri sağlamaktadır. -Subgraph'ınızı Graph Ağı'nda yayınladığınızda veya yükselttiğinizde, protokol üzerindeki akıllı sözleşmelerle etkileşime girersiniz ve bu ETH kullanarak gas ödemesi yapmayı gerektirir. Subgraphlar'ınızı Arbitrum'a taşıdığınızda, gelecekte subgraphlar'ınızda yapılacak tüm güncellemeler çok daha düşük gas ücretleri gerektirecektir. Daha düşük ücretler ve Katman2'deki kürasyon bağlanma eğrilerinin sabit olması, diğer Küratörlerin subgraph'ınızda kürasyon yapmasını kolaylaştırır ve subgraph'ınızdaki İndeksleyiciler için ödülleri artırır. Bu düşük maliyetli ortam, İndeksleyicilerin subgraph'ınızı indekslemesini ve hizmet vermesini de daha ucuz hale getirmektedir.. 
Önümüzdeki aylarda İndeksleme ödülleri Arbitrum'da artacak ve Ethereum ana ağında azalacaktır, bu nedenle gittikçe daha fazla İndeksleyici mevcut stake'lerini transfer edecek ve operasyonlarını Katman2'de başlatacaktır. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Sinyal, Katman1 subgraph'ınız ve sorgu URL'leri ile neler gerçekleştiğini anlama +## Understanding what happens with signal, your L1 Subgraph and query URLs -Bir subgraph'ı Arbitrum'a transfer etmek için Arbitrum GRT köprüsü kullanılmaktadır, bu köprüde subgraph'ı Katman2'ye göndermek için yerel Arbitrum köprüsünü kullanır. "transfer", ana ağdaki subgraph'ı kullanımdan kaldıracak ve köprüyü kullanarak Katman2'de subgraph'ı yeniden oluşturmak için bilgi gönderecektir. Aynı zamanda, köprünün transferi kabul etmesi için subgraph sahibinin sinyallenmiş GRT'sini de dahil edecektir ve bu değer sıfırdan büyük olmalıdır. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. 
It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Subgraph transfer etmeyi seçtiğinizde, bu, subgraph'ın tüm kürasyon sinyalini GRT'ye dönüştürecektir. Bu, ana ağdaki subgraph'ı "kullanımdan kaldırmaya" eşdeğerdir. Kürasyonunuza karşılık gelen GRT, subgraphla birlikte Katman2'ye gönderilecek ve burada sizin adınıza sinyal basmak için kullanılacaktır. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Diğer Küratörler, GRT tokenlerinin bir bölümünü geri çekmeyi ya da aynı subgraph üzerinde sinyal basmak için Katman2'ye transfer etmeyi tercih edebilirler. Bir subgraph sahibi subgraph'ını Katman2'ye transfer edemezse ve bir sözleşme çağrısı yoluyla manuel olarak kullanımdan kaldırırsa, Küratörler bilgilendirilecek ve kürasyonlarını geri çekebileceklerdir. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Subgraph transfer edilir edilmez, tüm kürasyon GRT'ye dönüştürüldüğünden, İndeksleyiciler artık subgraph'ı indekslemek için ödül almayacaktır. Ancak, 1) aktarılan subgraphlar'ı 24 saat boyunca sunmaya devam edecek ve 2) hemen Katman2'de subgraph'ı indekslemeye başlayacak İndeksleyiciler olacaktır. Bu İndeksleyiciler subgraph'ı zaten indekslediğinden, subgraph'ın senkronize olmasını beklemeye gerek kalmayacak ve Katman2 subgraph'ını neredeyse anında sorgulamak mümkün olacaktır. 
+As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Katman2 subgraph'ına yönelik sorgular farklı bir URL üzerinden yapılmalıdır (arbitrum-gateway.thegraph.com). Ancak Katman1 URL'si en az 48 saat boyunca çalışmaya devam edecektir. Bu sürenin ardından, Katman1 ağ geçidi sorguları (bir süre için) Katman2 ağ geçidine iletecektir, fakat bu gecikmeye neden olacağından ötürü mümkün olan en kısa sürede tüm sorgularınızı yeni URL'ye geçirmeniz önerilir. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Katman2 cüzdanınızın seçimi -Subgraph'ınızı ana ağ üzerinde yayınladığınızda, subgraph'ı oluşturmak için bağlı bir cüzdan kullandınız ve bu cüzdan, bu subgraph'ı temsil eden ve güncellemeleri yayınlamanıza izin veren NFT'nin sahibidir. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Subgraph'ı Arbitrum'a transfer ederken, Katman2 üzerinde bu subgraph NFT'ye sahip olacak farklı bir cüzdan seçebilirsiniz. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. 
MetaMask gibi "genel" bir cüzdan (Harici Olarak Sahip Olunan Hesap veya EOA, yani akıllı sözleşme olmayan bir cüzdan) kullanıyorsanız, bu opsiyoneldir ve Katman1'deki ile aynı sahip adresini kullanmanız önerilir. -Çoklu imza (örneğin Safe) gibi bir akıllı sözleşme cüzdanı kullanıyorsanız, farklı bir Katman2 cüzdan adresi seçmek zorunludur, çünkü büyük olasılıkla bu hesap yalnızca ana ağ üzerinde kullanılabilir ve bu cüzdanı kullanarak Arbitrum'da işlem yapamazsınız. Bir akıllı sözleşme cüzdanı veya çoklu imza cüzdanı kullanmaya devam etmek istiyorsanız, Arbitrum'da yeni bir cüzdan oluşturun ve adresini subgraph'ınızın Katman2 sahibi olarak kullanın. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Sizin kontrolünüzde ve Arbitrum üzerinde işlem yapabilen bir cüzdan adresi kullanmak oldukça önemlidir. Aksi takdirde, subgraph kaybolacak ve kurtarılamayacaktır.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Transfer için hazırlık: Bir miktar ETH köprüleme -Subgraph'ın transfer edilmesi, köprü üzerinden bir işlemin gönderilmesini ve ardından Arbitrum'da başka bir işlemin yürütülmesini içermektedir. İlk işlem ana ağda ETH kullanır ve mesaj Katman2'de alındığında gas için ödeme yapmak üzere bir miktar ETH içerir. Ancak, bu gas yetersizse, işlemi yeniden denemeniz ve gas için doğrudan Katman2'de ödeme yapmanız gerekecektir (bu, aşağıdaki "Adım 3: Transferi onaylama" dır). Bu adım **transferin başlamasından sonraki 7 gün içinde gerçekleştirilmelidir**. 
Ayrıca, ikinci işlem ("Adım 4: Katman2'de transferin tamamlanması") doğrudan Arbitrum'da gerçekleştirilecektir. Bu nedenlerden dolayı, Arbitrum cüzdanında bir miktar ETH'ye ihtiyacınız olacak. Bir çoklu imzalı veya akıllı sözleşme hesabı kullanıyorsanız, ETH'nin çoklu imza değil, işlemleri gerçekleştirmek için kullandığınız normal harici hesap (EOA) cüzdanında olması gerekecektir. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. Bazı borsalardan ETH satın alabilir ve doğrudan Arbitrum'a çekebilir veya bir ana ağ cüzdanından Katman2'ye ETH göndermek için Arbitrum köprüsünü kullanabilirsiniz: [bridge.arbitrum.io](http://bridge.arbitrum.io). Arbitrum'daki gas ücretleri daha düşük olduğundan, yalnızca küçük bir miktara ihtiyacınız olacaktır. İşleminizin onaylanması için düşük bir eşikten (ör. 0.01 ETH) başlamanız önerilir. 
-## Subgraph Transfer Aracını bulma +## Finding the Subgraph Transfer Tool -Subgraph Stüdyo'da subgraph'ınızın sayfasına bakarak Katman2 Transfer Aracını bulabilirsiniz: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -Ayrıca, bir subgraph'ın sahibi olan cüzdana bağlıysanız Gezgin'de ve Gezgin'deki subgraph'ın sayfasında da bulunmaktadır: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Katman2'ye Transfer düğmesine tıkladığınızda transfer işlemini başlatab ## Adım 1: Transferin başlatılması -Transfere başlamadan önce, Katman2'de hangi adresin subgraph'a sahip olacağına karar vermelisiniz (yukarıdaki "Katman2 cüzdanınızın seçimi" bölümüne bakın) ve Arbitrum'da halihazırda köprülenmiş gas için kullanacağınız bir miktar ETH bulundurmanız şiddetle tavsiye edilir (yukarıdaki "Transfer için hazırlık: Bir miktar ETH köprüleme" bölümüne bakın). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Ayrıca, subgraph'ın sahibi olan hesabın bir subgraph transferi gerçekleştirebilmesi için ilgili subgraph üzerinde belirli bir sinyale sahip olması gerektiğini göz önünde bulundurun; eğer subgraph üzerinde sinyal vermediyseniz, biraz kürasyon eklemeniz gerekecektir (1 GRT gibi küçük bir miktar eklemek yeterli olacaktır). +Also please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Transfer Aracını açtıktan sonra, Katman2 cüzdan adresini "Alıcı cüzdan adresi" alanına girebileceksiniz - **buraya doğru adresi girdiğinizden emin olun**. Subgraph'ı Transfer Et'e tıkladığınızda, cüzdanınızda işlemi gerçekleştirmeniz istenecektir (Katman2 gas'ı için ödeme yapmak üzere bir miktar ETH'nin dahil edildiğini unutmayın); bu, transferi başlatacak ve Katman1 subgraph'ınızı kullanımdan kaldıracaktır (perde arkasında neler olup bittiğine ilişkin daha fazla ayrıntı için yukarıdaki "Sinyal, Katman1 subgraph'ınız ve sorgu URL'leri ile neler gerçekleştiğini anlama" bölümüne bakın). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -Bu adımı uygularsanız, **3. adımı tamamlamak için yedi günden daha kısa bir sürede ilerlediğinizden mutlaka emin olmalısınız; aksi halde subgraph ve sinyal GRT'nizi kaybedeceksiniz.** Bunun nedeni Arbitrum'da Katman1-Katman2 mesajlaşmasının çalışma şeklidir: köprü üzerinden gönderilen mesajlar 7 gün içinde yürütülmesi gereken "yeniden denenebilir biletler"dir ve Arbitrum'da gas fiyatında ani artışlar olması durumunda ilk yürütmenin yeniden denenmesi gerekebilir. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.
![L2’ye transferi başlatın](/img/startTransferL2.png) -## Adım 2: Subgraph'ın Katman2'ye ulaşmasını bekleme +## Step 2: Waiting for the Subgraph to get to L2 -Transferi başlattıktan sonra, Katman1 subgraph'ınızı Katman2'ye gönderen mesajın Arbitrum köprüsü üzerinden yayılması gerekir. Bu işlem yaklaşık 20 dakika sürer (köprü, işlemi içeren ana ağ bloğunun olası zincir yeniden düzenlemelerine karşı "güvenli" olmasını bekler). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Bu bekleme süresi sona erdiğinde Arbitrum, Katman2 sözleşmelerinde transferi otomatik olarak yürütmeye çalışacaktır. @@ -80,7 +80,7 @@ Bu bekleme süresi sona erdiğinde Arbitrum, Katman2 sözleşmelerinde transferi ## Adım 3: Transferi onaylama -Çoğu durumda, bu adım otomatik olarak yürütülecektir çünkü 1. adımda yer alan Katman2 gas'ı Arbitrum sözleşmelerinde subgraph'ı içeren işlemi yürütmek için yeterli olacaktır. Ancak bazı durumlarda, Arbitrum'daki gas fiyatlarındaki bir artış bu otomatik yürütmenin başarısız olmasına neden olabilir. Bu durumda, subgraph'ınızı Katman2'ye gönderen "bilet" beklemede olacak ve 7 gün içinde yeniden denenmesi gerekecektir. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. Durum buysa, Arbitrum'da bir miktar ETH bulunan bir Katman2 cüzdanı bağlanmanız, cüzdan ağınızı Arbitrum'a geçirmeniz ve işlemi yeniden denemek için "Transferi Onayla" seçeneğine tıklamanız gerekecektir. 
@@ -88,33 +88,33 @@ Durum buysa, Arbitrum'da bir miktar ETH bulunan bir Katman2 cüzdanı bağlanman ## Adım 4: Katman2'de transferin tamamlanması -Bu noktada, subgraph'ınız ve GRT'niz Arbitrum'a ulaşmıştır, ancak subgraph henüz yayınlanmamıştır. Alıcı cüzdan olarak seçtiğiniz Katman2 cüzdanını bağlanmanız, cüzdan ağınızı Arbitrum'a geçirmeniz ve "Subgraph'ı Yayınla" seçeneğine tıklamanız gerekecektir. +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Bu, Arbitrum üzerinde çalışan İndeksleyicilerin hizmet vermeye başlayabilmesi için subgraph'ı yayınlayacaktır. Ayrıca Katman1'den aktarılan GRT'yi kullanarak kürasyon sinyalini de basacaktır. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that was transferred from L1. ## Adım 5: Sorgu URL'sini güncelleme -Subgraph'ınız Arbitrum'a başarıyla transfer edildi! Subgraph'ı sorgulamak için yeni URL şu şekilde olacaktır: +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Arbitrum'daki subgraph kimliğinin ana ağda sahip olduğunuzdan farklı olacağını unutmayın, ancak bunu her zaman Gezgin veya Stüdyo aracılığıyla bulabilirsiniz.
Yukarıda belirtildiği gibi ("Sinyal, Katman1 subgraph'ınız ve sorgu URL'leri ile neler gerçekleştiğini anlama" bölümüne bakın) eski Katman1 URL'si kısa bir süre için desteklenecektir, ancak subgraph Katman2'de senkronize edilir edilmez sorgularınızı yeni adrese geçirmelisiniz. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## Kürasyonunuzu Arbitrum'a nasıl transfer edebilirsiniz (Katman2) -## Katman2'ye subgraph transferlerinde kürasyona ne olduğunu anlama +## Understanding what happens to curation on Subgraph transfers to L2 -Bir subgraph'ın sahibi subgraph'ı Arbitrum'a transfer ettiğinde, subgrpah'ın tüm sinyali aynı anda GRT'ye dönüştürülür. Bu, "otomatik olarak taşınan" sinyal, yani bir subgraph sürümüne veya dağıtımına özgü olmayan ancak bir subgraph'ın en son sürümünü takip eden sinyal için geçerlidir. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Sinyalden GRT'ye bu dönüşüm, subgraph sahibinin subgraph'ı Katman1'de kullanımdan kaldırması durumunda gerçekleşecek olanla aynıdır. Subgraph kullanımdan kaldırıldığında veya transfer edildiğinde, tüm kürasyon sinyali aynı anda "yakılır" (kürasyon bağlanma eğrisi kullanılarak) ve ortaya çıkan GRT, GNS akıllı sözleşmesi (yani subgraph yükseltmelerini ve otomatik olarak taşınan sinyali işleyen sözleşme) tarafından tutulur.
Bu nedenle, bu subgraph'daki her Küratör, subgraph için sahip oldukları stake miktarıyla orantılı olarak GRT üzerinde hak iddia eder. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph on L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Bu GRT tokenlerin subgraph sahibine ilişkin bir bölümü, subgraph ile birlikte Katman2'ye iletilir. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -Bu noktada, küratörlüğü yapılan GRT daha fazla sorgu ücreti biriktirmeyecektir, bu nedenle Küratörler GRT'lerini geri çekmeyi veya yeni kürasyon sinyali basmak için kullanılabilecekleri Katman2'deki aynı subgraph'a transfer etmeyi seçebilirler. GRT süresiz bir şekilde kullanılabileceğinden ve ne zaman yaptıklarına bakılmaksızın herkes paylarıyla orantılı bir miktar alacağından bunu yapmak için acele etmeye gerek yoktur. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
## Katman2 cüzdanınızın seçimi @@ -130,9 +130,9 @@ Metamask gibi "genel" bir cüzdan (Harici Olarak Sahip Olunan Hesap veya EOA, ya Transfere başlamadan önce, Katman2'deki kürasyonun hangi adrese ait olacağına karar vermelisiniz (yukarıdaki "Katman2 cüzdanınızın seçinmi" bölümüne bakın) ve mesajın Katman2'de yürütülmesini yeniden denemeniz gerektiğinde Arbitrum'da zaten köprülenmiş gas için kullanabileceğiniz bir miktar ETH bulundurmanız önerilir. Bazı borsalardan ETH satın alabilir ve doğrudan Arbitrum'a çekebilir veya bir ana ağ cüzdanından Katman2'ye ETH göndermek için Arbitrum köprüsünü kullanabilirsiniz: [bridge.arbitrum.io](http://bridge.arbitrum.io) - Arbitrum'daki gas ücretleri çok düşük olduğundan, yalnızca küçük bir miktara ihtiyacınız olacak, örneğin 0.01 ETH muhtemelen fazlasıyla yeterli olacaktır. -Küratörlüğünü yaptığınız bir subgraph Katman2'ye transfer edilmişse, Gezgin'de transfer edilmiş bir subgraph'a küratörlük yaptığınızı belirten bir mesaj göreceksiniz. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -Subgraph sayfasına bakarken, kürasyonu geri çekmeyi veya transfer etmeyi seçebilirsiniz. "Sinyali Arbitrum'a Transfer Et" seçeneğine tıkladığınızda transfer aracı açılacaktır. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Durum buysa, Arbitrum'da bir miktar ETH bulunan bir Katman2 cüzdanı bağlanman ## Katman1'deki kürasyonunuzu çekme -GRT'nizi Katman2'ye göndermek istemiyorsanız veya manuel olarak köprülemeyi tercih ediyorsanız, Katman1'de kürasyonu gerçekleşmiş GRT'lerinizi çekebilirsiniz. Subgraph sayfasındaki afişte "Sinyali Çek" seçeneğini seçin ve işlemi onaylayın; GRT, Küratör adresinize gönderilecektir. 
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/tr/archived/sunrise.mdx b/website/src/pages/tr/archived/sunrise.mdx index f7d204bb791f..91accac3661b 100644 --- a/website/src/pages/tr/archived/sunrise.mdx +++ b/website/src/pages/tr/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. 
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Yükseltme İndeksleyicisini neden Edge & Node çalıştırıyor? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. 
-The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### Bu Delegatörler için ne anlama gelmektedir? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
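The l2-transfer-tools-guide changes above document the new gateway URL format for a transferred Subgraph. As a minimal sketch of using it — the API key and L2 Subgraph ID below are illustrative placeholders, not real values — a query against the Arbitrum gateway might look like:

```javascript
// Build the post-transfer query URL on The Graph's Arbitrum gateway.
// `apiKey` and `l2SubgraphId` are illustrative placeholders only.
function l2QueryUrl(apiKey, l2SubgraphId) {
  return `https://arbitrum-gateway.thegraph.com/api/${apiKey}/subgraphs/id/${l2SubgraphId}`;
}

// GraphQL queries are sent as a JSON POST body to that URL.
async function querySubgraph(apiKey, l2SubgraphId, query) {
  const res = await fetch(l2QueryUrl(apiKey, l2SubgraphId), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return res.json();
}

console.log(l2QueryUrl("my-api-key", "QmExampleSubgraphId"));
// → https://arbitrum-gateway.thegraph.com/api/my-api-key/subgraphs/id/QmExampleSubgraphId
```

As the guide notes, the L2 Subgraph ID differs from the mainnet one, so both pieces of the URL change after a transfer.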
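The curation section of the guide above states that each Curator's claim on the burned-signal GRT held by the GNS contract is proportional to their shares. A toy calculation of that split (all numbers invented for illustration):

```javascript
// Toy illustration of the proportional claim described above: when a
// Subgraph is transferred or deprecated, all curation signal is burned
// and the resulting GRT pool is split by share ownership.
function curatorClaim(poolGrt, curatorShares, totalShares) {
  return (poolGrt * curatorShares) / totalShares;
}

// A Curator holding 250 of 1,000 total shares against a 10,000 GRT pool:
console.log(curatorClaim(10000, 250, 1000)); // → 2500
```

Since the split depends only on shares, this matches the guide's point that there is no rush to withdraw: the claim is the same whenever it is exercised.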
diff --git a/website/src/pages/tr/contracts.json b/website/src/pages/tr/contracts.json index 0ca72a349608..eeaae13e41cb 100644 --- a/website/src/pages/tr/contracts.json +++ b/website/src/pages/tr/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "Sözleşme", "address": "Adres(Address)" } diff --git a/website/src/pages/tr/global.json b/website/src/pages/tr/global.json index 60b4d779ddda..979e9bd4d321 100644 --- a/website/src/pages/tr/global.json +++ b/website/src/pages/tr/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "Ana navigasyon", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "Navigasyonu göster", + "hide": "Navigasyonu gizle", "subgraphs": "Subgraph'ler", "substreams": "Substream'ler", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "sps": "Substreams Destekli Subgraph'ler", + "tokenApi": "Token API", + "indexing": "Endeksleme", + "resources": "Kaynaklar", + "archived": "Arşivlenmiş" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "Son güncelleme", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "Okuma süresi", + "minutes": "dakika" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "Önceki sayfa", + "next": "Sonraki Sayfa", + "edit": "GitHub'da Düzenle", + "onThisPage": "Bu sayfada", + "tableOfContents": "İçindekiler", + "linkToThisSection": "Bu bölüme bağlantı" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Sorgu Parametreleri", + "headerParameters": "Header Parameters", + "cookieParameters": 
"Cookie Parameters", + "parameter": "Parameter", + "description": "Tanım", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Durum", + "description": "Tanım", + "liveResponse": "Live Response", + "example": "Örnek" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "Hata! 
Bu sayfa kayboldu gitti...", + "subtitle": "Doğru adresi kullanıp kullanmadığınızı kontrol edin veya aşağıdaki bağlantıya tıklayarak web sitemize göz atın.", + "back": "Anasayfaya Git" } } diff --git a/website/src/pages/tr/index.json b/website/src/pages/tr/index.json index 5334106f029e..7a721a844042 100644 --- a/website/src/pages/tr/index.json +++ b/website/src/pages/tr/index.json @@ -1,52 +1,52 @@ { "title": "Ana sayfa", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "The Graph Dokümantasyonu", + "description": "Blokzinciri verilerini çıkarma, dönüştürme ve yükleme araçlarıyla web3 projenize hızlı bir başlangıç yapın.", + "cta1": "The Graph nasıl çalışır?", + "cta2": "İlk subgraph'inizi oluşturun" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "İhtiyaçlarınıza uygun bir çözüm seçin: blokzinciri verileriyle istediğiniz gibi etkileşime geçin.", "subgraphs": { "title": "Subgraph'ler", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "Açık API'ler ile blokzinciri verilerini çıkarın, işleyin ve sorgulayın.", + "cta": "Bir subgraph geliştirin" }, "substreams": { "title": "Substream'ler", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "Blokzinciri verilerini paralel çalıştırma ile alın ve tüketin.", + "cta": "Substreams'le geliştirin" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "Substreams Destekli Subgraph'ler", + 
"description": "Boost your subgraph's efficiency and scalability by using Substreams.", + "cta": "Substreams destekli bir subgraph kurun" }, "graphNode": { "title": "Graph Node", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "description": "Blokzinciri verilerini endeksleyin ve GraphQL sorgularıyla sunun.", + "cta": "Yerel bir Graph Düğümü kurun" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "Senkronizasyon sürelerini ve veri akışı olanaklarını artırmak için blokzinciri verilerini düz dosyalara çıkarın.", + "cta": "Firehose ile çalışmaya başlayın" } }, "supportedNetworks": { "title": "Desteklenen Ağlar", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "Tür", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Dokümanlar", "shortName": "Short Name", - "guides": "Guides", + "guides": "Rehberler", "search": "Search networks", "showTestnets": "Show Testnets", "loading": "Loading...", @@ -54,9 +54,9 @@ "infoText": "Boost your developer experience by enabling The Graph's indexing network.", "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph, {0} ile uyumludur. 
Yeni bir ağ eklemek isterseniz, {1}", + "networks": "ağlar", + "completeThisForm": "bu formu doldurun" }, "emptySearch": { "title": "No networks found", @@ -65,10 +65,10 @@ "showTestnets": "Show testnets" }, "tableHeaders": { - "name": "Name", + "name": "İsim", "id": "ID", - "subgraphs": "Subgraphs", - "substreams": "Substreams", + "subgraphs": "Subgraph'ler", + "substreams": "Substream'ler", "firehose": "Firehose", "tokenapi": "Token API" } @@ -80,7 +80,7 @@ "description": "Kickstart your journey into subgraph development." }, "substreams": { - "title": "Substreams", + "title": "Substream'ler", "description": "Stream high-speed data for real-time indexing." }, "timeseries": { @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "Faturalandırma", "description": "Optimize costs and manage billing efficiently." } }, @@ -123,53 +123,53 @@ "title": "Rehberler", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "Graph Gezgini'nde Veri Bulun", + "description": "Mevcut blokzinciri verileri için yüzlerce herkese açık subgraph'ten faydalanın." }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Bir Subgraph Yayımlayın", + "description": "Subgraph'inizi merkeziyetsiz ağa ekleyin." }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "Substreams Yayımlayın", + "description": "Substreams paketinizi Substreams Kayıt Defteri'nde yayımlayın." }, "queryingBestPractices": { - "title": "Querying Best Practices", - "description": "Optimize your subgraph queries for faster, better results." 
+ "title": "Sorgulama - Örnek Uygulamalar", + "description": "Subgraph sorgularınızı daha hızlı ve verimli sonuçlar için optimize edin." }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "Optimize Edilmiş Zaman Serileri & Toplulaştırılmalar", + "description": "Subgraph'inizi daha verimli çalışacak şekilde optimize edin." }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "API Anahtarı Yönetimi", + "description": "Subgraph'leriniz için API anahtarlarını kolayca oluşturun, yönetin ve güvene alın." }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "The Graph'e Transfer", + "description": "Subgraph'inizi herhangi bir platformdan sorunsuz bir şekilde yükseltin." } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "Videolu Rehberler", + "watchOnYouTube": "YouTube'da izle", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "1 Dakikada The Graph", + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "title": "Delegasyon Nedir?", + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." 
}, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Substreams Destekli Bir Subgraph ile Solana'yı Endeksleme", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { - "reading": "Reading time", - "duration": "Duration", + "reading": "Okuma süresi", + "duration": "Süre", "minutes": "min (asgari)" } } diff --git a/website/src/pages/tr/indexing/chain-integration-overview.mdx b/website/src/pages/tr/indexing/chain-integration-overview.mdx index db50f7b8e673..b81aae6c3dd2 100644 --- a/website/src/pages/tr/indexing/chain-integration-overview.mdx +++ b/website/src/pages/tr/indexing/chain-integration-overview.mdx @@ -6,12 +6,12 @@ Blok zinciri ekiplerinin [Graph protokolüyle entegrasyon](https://forum.thegrap ## Aşama 1. Teknik Entegrasyon -- Please visit [New Chain Integration](/indexing/new-chain-integration/) for information on `graph-node` support for new chains. +- Lütfen yeni zincirler için `graph-node` desteği hakkında bilgi almak için [Yeni Zincir Entegrasyonu](/indexing/new-chain-integration/) sayfasını ziyaret edin. - Ekipler, protokol entegrasyon sürecini bir Forum başlığı oluşturarak başlatır [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (Yönetişim ve GIP'ler altındaki Yeni Veri Kaynakları alt kategorisi). Varsayılan Forum şablonunun kullanılması zorunludur. ## Aşama 2. Entegrasyon Doğrulaması -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. 
This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Takımlar, sorunsuz bir entegrasyon sürecini sağlamak amacıyla temel geliştiriciler, Graph Vakfı ve [Subgraph Studio](https://thegraph.com/studio/) gibi GUI ve ağ geçidi operatörleriyle işbirliği yapar. Entegre edilen zincirin JSON-RPC, Firehose veya Substreams uç noktaları gibi gerekli altyapıların sağlanması buna dahildir. Bu altyapıyı kendi kendine sunmaktan kaçınmak isteyen ekipler, bunu yapmak için The Graph'ın düğüm operatörleri (Endeksleyiciler) topluluğunu kullanabilirler. The Graph bu konuda yardımcı olabilir. - Graph İndeksleyicileri, entegrasyonu Graph'ın test ağında test eder. - Çekirdek geliştiriciler ve İndeksleyiciler kararlılığı, performansı ve veri belirleyiciliğini izler. @@ -36,9 +36,9 @@ Bu süreç Subgraph Veri Hizmeti ile ilgilidir ve yalnızca yeni Subgraph `Veri ### 2. Firehose & Substreams desteği, ağ ana ağda desteklendikten sonra gelirse ne olur? -Bu, yalnızca Substreams destekli subgraphlar'da ödüllerin indekslenmesi için protokol desteğini etkileyecektir. Yeni Firehose uygulamasının, bu GIP'de Aşama 2 için özetlenen metodolojiyi izleyerek testnet üzerinde test edilmesi gerekecektir. Benzer şekilde, uygulamanın performanslı ve güvenilir olduğu varsayıldığı takdirde, [Özellik Destek Matrisi] (https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) üzerinde bir PR (`Substreams veri kaynakları` Subgraph Özelliği) ve ödüllerin indekslenmesi amacıyla protokol desteği için yeni bir GIP gerekecektir. PR ve GIP'yi herkes oluşturabilir; Vakıf, Konsey onayı konusunda yardımcı olacaktır. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. 
The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will the process of reaching full protocol support take? +### 3. Tam protokol desteğine ulaşma süreci ne kadar zaman alacak? Ana ağa geçiş süresinin entegrasyon geliştirme süresine, ek araştırma gerekip gerekmediğine, test ve hata düzeltmelerine ve her zaman olduğu gibi topluluk geri bildirimi gerektiren yönetişim sürecinin zamanlamasına bağlı olarak değişmek kaydıyla birkaç hafta olması beklenmektedir. @@ -46,4 +46,4 @@ Ana ağa geçiş süresinin entegrasyon geliştirme süresine, ek araştırma ge ### 4. Öncelikler nasıl ele alınacak? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. +\#3'e benzer şekilde, bu genel hazırlık sürecine ve ilgili tarafların bant genişliğine bağlı olacaktır. Örneğin, tamamen yeni bir Firehose entegrasyonuna ihtiyaç duyan yeni bir zincir, zaten gerçek koşullarda test edilmiş veya yönetim sürecinde daha ileride olan entegrasyonlardan daha uzun sürebilir. 
diff --git a/website/src/pages/tr/indexing/new-chain-integration.mdx b/website/src/pages/tr/indexing/new-chain-integration.mdx index 5eb41f1d922d..07e538ae9bae 100644 --- a/website/src/pages/tr/indexing/new-chain-integration.mdx +++ b/website/src/pages/tr/indexing/new-chain-integration.mdx @@ -1,70 +1,70 @@ --- -title: New Chain Integration +title: Yeni Zincir Entegrasyonu --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** -2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. +2. **Firehose**: Tüm Firehose entegrasyon çözümleri, Firehose temelinde geliştirilmiş ve `graph-node` tarafından doğal olarak desteklenen büyük ölçekli bir akış motoru olan Substreams'i içerir. Bu sayede paralelleştirilmiş dönüşümler gerçekleştirilebilir. -> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. +> Unutmamak gerekir ki, önerilen yaklaşım tüm yeni zincirler için yeni bir Firehose geliştirmektir. Ancak bu sadece EVM dışı zincirler için bir zorunluluktur. -## Integration Strategies +## Entegrasyon Stratejileri ### 1. 
EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. +İlgili blokzinciri EVM eşdeğeriyse ve istemci/düğüm standart EVM JSON-RPC API'sini dışarıya sunuyorsa, Graph Düğümü yeni zinciri endeksleyebilmelidir. #### EVM JSON-RPC'yi test etme -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: +Graph Düğümü'nün bir EVM zincirinden veri alabilmesi için RPC düğümünün şu EVM JSON-RPC metotlarını dışarıya sunması gerekir: - `eth_getLogs` -- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_call` (geçmiş bloklar için gereklidir. EIP-1898 ile gelmiştir. Arşiv düğümü gerektirir) - `eth_getBlockByNumber` - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, bir JSON-RPC toplu talebinde -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` *(sınırlı izleme ve isteğe bağlı olarak Graph Düğümü için gerekli olabilir)* -### 2. Firehose Integration +### 2. Firehose Entegrasyonu -[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview), yeni nesil bir veri çekme katmanıdır. Geçmiş verileri düz dosyalarda saklar ve gerçek zamanlı olarak akışa alır. Firehose teknolojisi, sorgulama tabanlı API çağrılarını, veri akışı sağlayan bir itme (push) modeliyle değiştirerek verileri endeksleme düğümüne daha hızlı iletir. Bu, senkronizasyon ve endeksleme hızını artırmaya yardımcı olur. 
-> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. +> NOT: StreamingFast ekibi tarafından yapılan tüm entegrasyonlar, Firehose çoğaltma protokolünün zincirin kod tabanına entegre edilmesini ve güncel tutulmasını içerir. StreamingFast, zincirin kodunda veya kendi kodunda yapılan değişiklikleri takip eder ve gerekli durumlarda ikili (binary) dosyalarını yayımlar. Bu süreç, protokol için Firehose/Substreams ikili dosyalarının yayımlanmasını, zincirin blok modeline uygun Substreams modüllerinin bakımını ve gerektiğinde ölçümleme (instrumentation) içeren blokzinciri düğümü ikili dosyalarının yayımlanmasını kapsar. -#### Integration for Non-EVM chains +#### EVM Dışı Zincirler İçin Entegrasyon -The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. +Firehose'u zincirlere entegre etmenin temel yöntemi, bir RPC sorgulama stratejisi kullanmaktır. 
Sorgulama algoritmamız, yeni bir bloğun ne zaman geleceğini tahmin eder ve o zaman diliminde yeni blok olup olmadığını daha sık kontrol ederek düşük gecikmeli ve verimli bir çözüm sunar. Firehose'un entegrasyonu ve bakımı konusunda yardım almak için [StreamingFast ekibiyle](https://www.streamingfast.io/firehose-integration-program) iletişime geçin. Yeni zincirler ve entegratörleri, Firehose ve Substreams'in ekosistemlerine kazandırdığı [çatallanma farkındalığını](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) ve yüksek derecede paralelleştirilmiş endeksleme yeteneklerini takdir edecektir. -#### Specific Instrumentation for EVM (`geth`) chains +#### EVM (`geth`) Zincirleri İçin Özel Ölçümleme -For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. +EVM zincirleri için, **`geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0)** kullanılarak daha derin bir veri seviyesine erişilebilir. Bu, Go-Ethereum ve StreamingFast iş birliğiyle, yüksek verimli ve kapsamlı bir işlem izleme sistemi oluşturmak amacıyla geliştirilmiştir. **Live Tracer**, en kapsamlı çözüm olup, [Genişletilmiş](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) blok detayları sağlar. 
Bu sayede, olayların durum değişikliklerine, çağrılara, üst çağrı ağaçlarına dayalı olarak örüntü eşleştirme, veya akıllı sözleşmelerdeki değişkenlerin güncellenmesine bağlı olarak olay tetikleme gibi yeni endeksleme yaklaşımları mümkün hale gelir. -![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) +![Base blok ve Extended blok karşılaştırması](/img/extended-vs-base-substreams-blocks.png) -> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. +> NOT: Firehose üzerine yapılan bu iyileştirme, zincirlerin `geth version 1.13.0` ve üstü EVM motoru kullanmasını gerektirir. -## EVM considerations - Difference between JSON-RPC & Firehose +## EVM değerlendirmeleri - JSON-RPC ve Firehose arasındaki farklılıklar -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. 
-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Graph Düğümü'nü Klonlayın](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC or Firehose compliant URL +2. [Bu satırı](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) düzenleyerek yeni ağ adını ve EVM JSON-RPC veya Firehose uyumlu URL'yi ekleyin. - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. 
+ > Çevre değişkeni adının kendisini değiştirmeyin. Ağ adı farklı olsa bile `ethereum` olarak kalmalıdır. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ +3. Bir IPFS düğümü çalıştırın veya The Graph tarafından kullanılanı kullanın: https://api.thegraph.com/ipfs/ -## Substreams-powered Subgraphs +## Substreams destekli Subgraph'ler -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/tr/indexing/overview.mdx b/website/src/pages/tr/indexing/overview.mdx index 0de4a3fcb961..6a9d93397c39 100644 --- a/website/src/pages/tr/indexing/overview.mdx +++ b/website/src/pages/tr/indexing/overview.mdx @@ -1,13 +1,13 @@ --- -title: Indexing Overview +title: Endekslemeye Genel Bakış sidebarTitle: Genel Bakış --- İndeksleyiciler, indeksleme ve sorgu işleme hizmetleri sağlamak için Graph Token'leri (GRT) stake eden Graph Ağındaki düğüm operatörleridir. İndeksleyiciler, hizmetleri karşılığında sorgu ücretleri ve indeksleme ödülleri kazanırlar. 
Ayrıca üstel bir indirim fonksiyonuna göre geri ödenen sorgu ücretleri de kazanırlar. -GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network. +Protokolde istiflenen GRT, bir çözülme süresine tabidir ve Endeksleyicilerin kötü niyetli davranarak uygulamalara yanlış veri sağlaması veya yanlış endeksleme yapması durumunda kesilebilir (slash edilebilir). Endeksleyiciler ayrıca, ağa katkıda bulunmak için Delegatörlerden delege edilen istif üzerinden ödül kazanırlar. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. 
+**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. 
A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions, but to give a general idea, we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply.
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple clients. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, and shares important decision-making information with clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing non-deterministically. diff --git a/website/src/pages/tr/indexing/supported-network-requirements.mdx b/website/src/pages/tr/indexing/supported-network-requirements.mdx index a106094cac7c..9baf78db6a6f 100644 --- a/website/src/pages/tr/indexing/supported-network-requirements.mdx +++ b/website/src/pages/tr/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Desteklenen Ağ Gereksinimleri | --- | --- | --- | :-: | | Arbitrum | [Baremetal Rehberi](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Rehberi](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ çekirdekli CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_son güncelleme Ağustos 2023_ | ✅ | | Avalanche | [Docker Rehberi](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 çekirdekli / 8 iş parçacıklı CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_son güncelleme Ağustos 2023_ | ✅ | -| Base | [Erigon Baremetal Rehberi](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Rehberi](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Rehberi](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ çekirdekli CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME tercih edilir)
_son güncelleme 14 Mayıs 2024_ | ✅ | +| Base | [Erigon Baremetal Rehberi](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Rehberi](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Rehberi](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Rehberi](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 çekirdekli / 16 iş parçacıklı CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_son güncelleme 22 Haziran 2024_ | ✅ | | Celo | [Docker Rehberi](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 çekirdekli / 8 iş parçacıklı CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_son güncelleme Ağustos 2023_ | ✅ | | Ethereum | [Docker Rehberi](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Yüksek saat hızı, çekirdek sayısından daha önemlidir
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe önerilir)
_son güncelleme Ağustos 2023_ | ✅ | diff --git a/website/src/pages/tr/indexing/tap.mdx b/website/src/pages/tr/indexing/tap.mdx index 5ad4f2dc020e..d59e66ef4a76 100644 --- a/website/src/pages/tr/indexing/tap.mdx +++ b/website/src/pages/tr/indexing/tap.mdx @@ -1,102 +1,102 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Genel Bakış -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: -- Efficiently handles micropayments. -- Adds a layer of consolidations to onchain transactions and costs. -- Allows Indexers control of receipts and payments, guaranteeing payment for queries. -- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. +- Mikro ödemelerin üstesinden etkili bir şekilde gelir. +- Zincir üstünde yapılan işlemler ve maliyetler için bir birleştirme katmanı ekler. +- Endeksleyicilerin makbuzlar ve ödemeler üzerinde kontrol sahibi olmasına olanak tanır, sorgular için garanti ödeme sağlar. +- Merkeziyetsiz, güven gerektirmeyen ağ geçitlerine olanak sağlar ve birden fazla gönderici olması durumunda `indexer-service` performansını artırır. -## Ayrıntılar +### Ayrıntılar -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments, **Receipts**, to a receiver; these are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. -For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. +Ağ geçidi her sorgu için size veritabanınızda saklanan bir `signed receipt` (imzalı makbuz) gönderecektir. Daha sonra, bu sorgular bir `tap-agent` tarafından bir istek aracılığıyla toplulaştırılacaktır. Sonrasında size bir RAV gönderilecek. RAV'ı, sonradan aldığınız yeni makbuzlarla birlikte göndererek güncelleyebilirsiniz. Bu yeni oluşan RAV'ın değeri önceki RAV'ın ve sonradan eklediğiniz makbuzların toplamı olacaktır. -### RAV Details +### RAV Ayrıntıları -- It’s money that is waiting to be sent to the blockchain. +- RAV blokzincirine gönderilmeyi bekleyen paradır. -- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. +- RAV, toplulaştırma işlemini sürdürmek için istekler göndermeye devam edecek ve toplulaştırılmamış makbuzların toplam değerinin `amount willing to lose`'u (kaybetmesi göze alınan tutarı) aşmamasını sağlayacaktır. -- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. +- Her bir RAV, akıllı sözleşmelerde yalnızca bir kez kullanılabilir, bu yüzden tahsis kapandıktan sonra gönderilir.
-### Redeeming RAV +### RAV'i Kullanma -As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: +`tap-agent` ve `indexer-agent` komutlarını çalıştırdığınız sürece, her şey otomatik olarak yürütülecektir. Aşağıda sürecin ayrıntılı bir açıklaması verilmiştir: -1. An Indexer closes allocation. +1. Bir Endeksleyici tahsisi kapatır. -2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. +2. `` dönemi boyunca `tap-agent`, söz konusu tahsis için bekleyen tüm makbuzları alır ve bu makbuzları bir RAV içine toplulaştırma talebi oluştur. Bu talebi `last` (son) olarak işaretler. -3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. +3. `indexer-agent`, tüm son RAV'leri alır ve blokzincirine kullanma talepleri gönderir, bu da `redeem_at` değerini güncelleyecektir. -4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. +4. `` dönemi boyunca, `indexer-agent`, blokzincirinin işlemi geri çeviren herhangi bir yeniden düzenleme (reorg) yaşayıp yaşamadığını izler. - - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. + - Eğer işlem geri çevrildiyse, RAV tekrar blokzincire gönderilir. Eğer geri çevrilmediyse, `final` (nihai) olarak işaretlenir. 
-## Blockchain Addresses +## Blokzinciri Adresleri -### Contracts +### Sözleşmeler -| Contract | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | +| Sözleşme | Arbitrum Ana Ağı (42161) | Arbitrum Sepolia (421614) | | ------------------- | -------------------------------------------- | -------------------------------------------- | -| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| TAP Doğrulayıcı | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | | AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | -| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | +| Emanet (Escrow) | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | -### Gateway +### Ağ Geçidi -| Component | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | -| ---------- | --------------------------------------------- | --------------------------------------------- | -| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | -| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | -| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | +| Bileşen | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | +| --- | --- | --- | +| Gönderen | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| İmzalayıcılar (Signers) | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Toplulaştırıcı (Aggregator) | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Gereksinimler +### Prerequisites -In 
addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. +> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. -## Migration Guide +## Geçiş Kılavuzu -### Software versions +### Yazılım sürümleri -The required software version can be found [here](https://github.com/graphprotocol/indexer/blob/main/docs/networks/arbitrum-one.md#latest-releases). +Gerekli yazılım sürümü [burada](https://github.com/graphprotocol/indexer/blob/main/docs/networks/arbitrum-one.md#latest-releases) bulunabilir. -### Steps +### Adımlar -1. **Indexer Agent** +1. **Endeksleyici Aracı** - - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). 
- - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - [Buradaki süreci](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components) takip edin. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. -2. **Indexer Service** +2. **Endeksleyici Hizmeti** - - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). - - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + - Mevcut yapılandırmanızı tamamen [yeni Indexer Service rs (Endeksleyici Hizmeti)](https://github.com/graphprotocol/indexer-rs) ile değiştirin. [Konteyner imajını](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs) kullanmanız tavsiye edilir. + - Eski versiyonda olduğu gibi, Endeksleyici Hizmeti'ni yatay olarak kolayca ölçekleyebilirsiniz. Hala durumsuz (stateless) bir yapıya sahiptir. -3. **TAP Agent** +3. **TAP Aracı** - - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Her zaman [TAP Aracı](https://github.com/graphprotocol/indexer-rs)'nın _tek_ bir örneğini çalıştırın. [Konteyner imajını](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs) kullanmanız önerilir. -4. **Configure Indexer Service and TAP Agent** +4. **Endeksleyici Servisi'ni ve TAP Aracı'nı Yapılandırma** - Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. 
+ Yapılandırma, `indexer-service` ve `tap-agent` arasında paylaşılan bir TOML dosyasıdır ve `--config /path/to/config.toml` argümanı ile sağlanır. - Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + Tam [yapılandırmaya](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) ve [varsayılan değerlere](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) göz atın. -For minimal configuration, use the following template: +Minimal yapılandırma için aşağıdaki şablonu kullanın: ```bash # You will have to change *all* the values below to match your setup. @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" @@ -170,24 +170,24 @@ max_amount_willing_to_lose_grt = 20 Notlar: -- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/indexing/tap/#gateway). 
-- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/indexing/tap/#contracts) using the appropriate chain id. +- `tap.sender_aggregator_endpoints` parametresinin alabileceği değerler [ağ geçidi bölümü](/indexing/tap/#gateway) içinde bulunabilir. +- `blockchain.receipts_verifier_address` parametresinin alabileceği değerler, [Blokzinciri adresleri bölümü](/indexing/tap/#contracts) ile uyumlu olarak, uygun zincir kimliği verilerek kullanılmalıdır. -**Log Level** +**Kayıt Seviyesi** -- You can set the log level by using the `RUST_LOG` environment variable. -- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. +- Kayıt seviyesini `RUST_LOG` çevre değişkenini kullanarak ayarlayabilirsiniz. +- Bunu `RUST_LOG=indexer_tap_agent=debug,info` şeklinde ayarlamanız önerilir. -## Monitoring +## İzleme -### Metrics +### Metrikler -All components expose the port 7300 to be queried by prometheus. +Tüm bileşenler, prometheus tarafından sorgulanmak üzere 7300 portunu açar. -### Grafana Dashboard +### Grafana Gösterge Paneli -You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. +[Grafana Gösterge Paneli](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json)'ni indirip içe aktarabilirsiniz. ### Launchpad -Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/main/charts/graph-network-indexer) +Halihazırda, `indexer-rs` ve `tap-agent`'in [buradan](https://github.com/graphops/launchpad-charts/tree/main/charts/graph-network-indexer) ulaşabileceğiniz bir WIP sürümü (geliştirilmesi tamamlanmamış sürümü) bulunmaktadır. 
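As a sketch of how the notes above fit together in the shared `indexer-service`/`tap-agent` TOML file, the fragment below pairs the Edge and Node mainnet sender with its aggregator endpoint (from the gateway table) and sets the verifier address from the contracts table for Arbitrum One (42161). The values come from the tables in this document, but the exact key layout is an assumption and should be checked against the maximal-config example linked above:

```toml
# Illustrative fragment only - verify key names against the
# maximal-config-example.toml linked above before use.

[blockchain]
chain_id = 42161
# TAP Verifier contract on Arbitrum One, from the contracts table above.
receipts_verifier_address = "0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a"

[tap.sender_aggregator_endpoints]
# Edge and Node mainnet sender -> its aggregator, from the gateway table above.
"0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467" = "https://tap-aggregator.network.thegraph.com"
```

For testnet (Arbitrum Sepolia, 421614) the corresponding sender, aggregator, and verifier values from the same tables would be substituted.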
diff --git a/website/src/pages/tr/indexing/tooling/firehose.mdx b/website/src/pages/tr/indexing/tooling/firehose.mdx index 686b37df1c43..6c5e78611d93 100644 --- a/website/src/pages/tr/indexing/tooling/firehose.mdx +++ b/website/src/pages/tr/indexing/tooling/firehose.mdx @@ -2,23 +2,23 @@ title: Firehose --- -![Firehose Logo](/img/firehose-logo.png) +![Firehose Logosu](/img/firehose-logo.png) -Firehose is a new technology developed by StreamingFast working with The Graph Foundation. The product provides **previously unseen capabilities and speeds for indexing blockchain data** using a files-based and streaming-first approach. +Firehose, StreamingFast tarafından The Graph Vakfı ile birlikte geliştirilen yeni bir teknolojidir. Ürün, dosya tabanlı ve akış-öncelikli bir yaklaşım kullanarak, **blokzinciri verilerini endekslemede daha önce görülmemiş seviyede işlevsellik ve hız** sağlar. -The Graph merges into Go Ethereum/geth with the adoption of [Live Tracer with v1.14.0 release](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0). +The Graph, [v1.14.0 sürümüyle yayımlanan Live Tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0)'ın benimsenmesiyle Go Ethereum/geth ile birleşiyor. Firehose, blok zinciri verilerini yüksek performanslı dosya tabanlı bir stratejiyle çıkarır, dönüştürür ve kaydeder. Blok zinciri geliştiricileri daha sonra Firehose tarafından çıkarılan verilere ikili veri akışları üzerinden erişebilir. Firehose'un Graph'ın orijinal blok zinciri veri çıkarma katmanının yerine geçmesi amaçlanmıştır. ## Firehose Dökümantasyonu -The Firehose documentation is currently maintained by the StreamingFast team [on the StreamingFast website](https://firehose.streamingfast.io/). +Firehose dokümantasyonu şu anda StreamingFast ekibi tarafından [StreamingFast web sitesi](https://firehose.streamingfast.io/) üzerinde sağlanmaktadır. 
### Buradan Başlayın -- Read this [Firehose introduction](https://firehose.streamingfast.io/introduction/firehose-overview) to get an overview of what it is and why it was built. -- Learn about the [Prerequisites](https://firehose.streamingfast.io/introduction/prerequisites) to install and deploy Firehose. +- Firehose'un ne olduğu ve ne sebeple kurulduğu hakkında genel bilgi edinmek için bu [Firehose tanıtım yazısını](https://firehose.streamingfast.io/introduction/firehose-overview) okuyun. +- Firehose'u kurmak ve dağıtmak için [Gereksinimler](https://firehose.streamingfast.io/introduction/prerequisites)'i öğrenin. ### Bilgi Dağarcığınızı Genişletin -- Learn about the different [Firehose components](https://firehose.streamingfast.io/architecture/components) available. +- Kullanılabilecek farklı [Firehose bileşenleri](https://firehose.streamingfast.io/architecture/components) hakkında bilgi edinin. diff --git a/website/src/pages/tr/indexing/tooling/graph-node.mdx b/website/src/pages/tr/indexing/tooling/graph-node.mdx index 62f5fff90afc..5f8d0267d3e0 100644 --- a/website/src/pages/tr/indexing/tooling/graph-node.mdx +++ b/website/src/pages/tr/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Düğümü, subgraphları indeksleyen ve sonuçta oluşan verileri GraphQL API aracılığıyla sorgulanabilir hale getiren bileşendir. Bu nedenle indeksleyici yığınının merkezi bir parçasıdır ve başarılı bir indeksleyici çalıştırmak için Graph Düğümü'nün doğru şekilde çalışması çok önemlidir. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. -This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers.
Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). +Bu belge, Graph Düğümü'nün bağlamsal bir genel görünümünü ve Endeksleyicilerin kullanımına açık olan bazı daha gelişmiş seçenekleri sunar. Ayrıntılı dokümantasyon ve talimatlar [Graph Düğümü deposunda](https://github.com/graphprotocol/graph-node) bulunabilir. ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. -Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). +Graph Düğümü (ve tüm endeksleyici yığını), çıplak metal sunucular üzerinde veya bir bulut ortamında çalıştırılabilir. Merkezi endeksleme bileşeninin bu esnekliği, The Graph Protokolü'nün dayanıklılığı için çok önemlidir. Benzer şekilde, Graph Düğümü [kaynak kodundan inşa edilebilir](https://github.com/graphprotocol/graph-node), veya endeksleyiciler [sağlanan Docker Görüntülerinden](https://hub.docker.com/r/graphprotocol/graph-node) birini kullanabilirler. ### PostgreSQL veritabanı -Graph Düğümü'nün ana deposu, burada subgraph verileri yanı sıra subgraphlarla ilgili üst veriler ve blok önbelleği ve eth_call önbelleği gibi subgraphtan bağımsız ağ verileri saklanır. 
+The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Ağ istemcileri -In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. +Bir ağı endekslemek için, Graph Düğümü'nün EVM uyumlu bir JSON-RPC API üzerinden bir ağ istemcisine erişimi olması gerekir. Bu RPC, tek bir istemciye bağlanabilir veya birden fazla istemci arasında yük dengelemesi yapan daha karmaşık bir yapı olabilir. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). -**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. 
This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). +**Ağ Firehose'ları** - Firehose, sıralı ancak çatallanmaların farkında olacak şekilde blok akışı sağlayan bir gRPC hizmetidir. The Graph'in çekirdek geliştiricileri tarafından, ölçeklenebilir ve yüksek performanslı endekslemeyi daha iyi desteklemek amacıyla geliştirilmiştir. Firehose şu an için bir Endeksleyici gereksinimi değildir. Ancak Endeksleyicilerin tam ağ desteğinden önce bu teknolojiye aşina olmaları teşvik edilmektedir. Firehose hakkında daha fazla bilgiye [buradan](https://firehose.streamingfast.io/) ulaşılabilir. ### IPFS Düğümleri -Subgraph dağıtım üst verilerini IPFS ağında depolanır. Graph düğümü, subgraph manifestini ve tüm bağlantılı dosyaları almak için subgraph dağıtımı sırasında öncelikle IPFS düğümüne erişir. Ağ indeksleyicilerinin kendi IPFS düğümlerini barındırmaları gerekmez. Ağ için bir IPFS düğümü https://ipfs.network.thegraph.com adresinde barındırılmaktadır. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. [Graph Düğümü](https://github.com/graphprotocol/graph-node) GitHub deposunu klonlayın ve `cargo build` komutunu çalıştırarak kaynağı derleyin 3. Now that all the dependencies are setup, start the Graph Node: @@ -71,7 +71,7 @@ cargo run -p graph-node --release -- \ ### Kubernetes'i kullanmaya başlarken -A complete Kubernetes example configuration can be found in the [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s). +Eksiksiz bir Kubernetes örnek yapılandırması [endeksleyici GitHub deposunda](https://github.com/graphprotocol/indexer/tree/main/k8s) bulunabilir. ### Portlar @@ -79,48 +79,48 @@ Graph Düğümü çalışırken aşağıdaki portları açar: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. +> **Önemli**: Bağlantı noktalarını herkese açık olarak dışarıya sunarken dikkatli olun. **Yönetim portları** kilitli tutulmalıdır. Bu gereklilik Graph Düğümü JSON-RPC uç noktası için de geçerlidir. ## Gelişmiş Graph Düğüm yapılandırması -En basit haliyle, Graph Düğümü tek bir Graph Düğüm örneği, bir PostgreSQL veritabanı, bir IPFS düğümü ve indekslenecek subgraphlar tarafından gerektirilen ağ istemcileri ile çalıştırılabilir. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. -This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. +Bu yapılandırma, birden fazla Graph Düğümü ekleyerek ve bu Graph Düğümlerini desteklemek için birden fazla veritabanı ekleyerek, yatay olarak ölçeklenebilir. İleri düzey kullanıcılar, Graph Düğümü'nün bazı yatay ölçekleme özelliklerinden faydalanmak isteyebilir. Ayrıca `config.toml` dosyası ve Graph Düğümü'nün ortam değişkenleri aracılığıyla daha gelişmiş yapılandırma seçeneklerinden yararlanabilir. ### `config.toml` -A [TOML](https://toml.io/en/) configuration file can be used to set more complex configurations than those exposed in the CLI. The location of the file is passed with the --config command line switch. 
+[TOML](https://toml.io/en/) yapılandırma dosyası, CLI'da sunulanlardan daha karmaşık yapılandırmalar ayarlamak için kullanılabilir. Dosyanın konumu --config komut satırı anahtarı ile iletilir. > Yapılandırma dosyası kullanırken --postgres-url, --postgres-secondary-hosts ve --postgres-host-weights seçeneklerinin kullanılması mümkün değildir. -A minimal `config.toml` file can be provided; the following file is equivalent to using the --postgres-url command line option: +Minimal bir `config.toml` dosyası sağlanabilir; aşağıdaki dosya --postgres-url komut satırı seçeneğini kullanmakla eşdeğerdir: ```toml [store] [store.primary] -connection="<.. postgres-url argument ..>" +connection="<.. postgres-url argümanı ..>" [deployment] [[deployment.rule]] -indexers = [ "<.. list of all indexing nodes ..>" ] +indexers = [ "<.. tüm endeksleme düğümlerinin listesi ..>" ] ``` -Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). +`config.toml` dosyasının tam dokümantasyonu, [Graph Düğümü belgelerinde](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md) bulunabilir. #### Birden Fazla Graph Düğümü -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Birden fazla Graph Düğümü, aynı veritabanını kullanacak şekilde yapılandırılabilir ve veritabanı sharding kullanılarak yatay olarak ölçeklenebilir. #### Dağıtım kuralları -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Örnek dağıtım kuralı yapılandırması: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -150,7 +150,7 @@ indexers = [ ] ``` -Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). 
+Dağıtım kuralları hakkında daha fazla bilgi için [burayı](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment) okuyabilirsiniz. #### Özelleştirilmiş sorgu düğümleri @@ -167,19 +167,19 @@ query = "" Çoğu kullanım durumu için, tek bir Postgres veritabanı bir graph-düğümü örneğini desteklemek için yeterlidir. Bir graph-düğümü örneği tek bir Postgres veritabanından daha büyük hale geldiğinde, bu graph düğümü verilerinin depolanmasını birden fazla Postgres veritabanına yaymak mümkündür. Tüm veritabanları birlikte, graph-düğümü örneğinin deposunu oluşturur. Her tekil veritabanına bir shard denir. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding, Graph Düğümü'nün üzerine koyduğu yükü mevcut veritabanınıza koyamadığınızda ve veritabanı boyutunu artıramayacağınızda faydalı hale gelir. -> Genellikle, shard'larla başlamadan önce tek bir veritabanını mümkün olduğunca büyük hale getirmek daha mantıklıdır. Tek bir istisna, sorgu trafiği subgraphlar arasında çokta eşit olmayan bir şekilde bölünmesidir. 
Bu durumda, yüksek-hacimli subgraphlar'ın bir shard'da tutulması ve geriye kalan her şeyin diğer bir shard'da tutulması, yüksek hacimli subgraphlar için verinin veritabanı dahili önbellekte kalması ve düşük hacimli subgraphlar'daki daha az ihtiyaç duyulan veriler tarafından değiştirilmemesi daha olası olduğu için çok yardımcı olabilir. +> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. Bağlantı yapılandırması açısından postgresql.conf'da max_connections değerinin 400 (veya belki de 200) olarak ayarlanması ve store_connection_wait_time_ms ve store_connection_checkout_count Prometheus metriklerine bakılması önerilir. Belirgin bekleme süreleri (5 milisaniye'nin üzerinde herhangi bir değer) yetersiz bağlantıların mevcut olduğunun bir işaretidir; yüksek bekleme süreleri veritabanının çok yoğun olması gibi sebeplerden de kaynaklanabilir. Ancak, veritabanı genel olarak stabil görünüyorsa, yüksek bekleme süreleri bağlantı sayısını arttırma ihtiyacını belirtir. Yapılandırmada her graph-düğümü örneğinin ne kadar bağlantı kullanabileceği bir üst sınırdır ve Graph Düğümü bunları gereksiz bulmadığı sürece açık tutmaz. -Read more about store configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases). +Mağaza yapılandırması hakkında daha fazla bilgi için [bu yazıyı](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases) okuyun.
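The shard and connection-pool discussion above can be sketched in `config.toml` roughly as follows. Shard names, connection strings, the `high-volume/.*` name pattern, and pool sizes are all illustrative, and the authoritative key set is the store-configuration documentation linked above:

```toml
# Sketch: two shards, with high-volume Subgraphs routed to their own shard
# so their data is more likely to stay in the db-internal cache.

[store]
[store.primary]
connection = "postgresql://graph:<password>@primary-db/graph"
pool_size = 10

[store.vip]
connection = "postgresql://graph:<password>@vip-db/graph"
pool_size = 10

[deployment]
[[deployment.rule]]
# Hypothetical match: send known high-volume deployments to the vip shard.
match = { name = "high-volume/.*" }
shard = "vip"
indexers = [ "index_node_vip_0" ]

[[deployment.rule]]
# No 'match': everything else lands on the primary shard.
shard = "primary"
indexers = [ "index_node_community_0" ]
```

`pool_size` is the per-database ceiling each `graph-node` instance keeps in its connection pool, which is the value the `store_connection_wait_time_ms` discussion above suggests tuning.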
#### Özelleştirilmiş blok alınması -If there are multiple nodes configured, it will be necessary to specify one node which is responsible for ingestion of new blocks, so that all configured index nodes aren't polling the chain head. This is done as part of the `chains` namespace, specifying the `node_id` to be used for block ingestion: +Birden fazla düğüm yapılandırıldığında, yeni blokların toplanmasından sorumlu olacak tek bir düğüm belirlemek gerekir. Böylece tüm yapılandırılmış endeksleme düğümleri zincir başını sorgulamaz. Bu işlem, `chains` ad alanında `node_id` belirterek gerçekleştirilir: ```toml [chains] @@ -188,13 +188,13 @@ ingestor = "block_ingestor_node" #### Birden fazla ağın desteklenmesi -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Birden fazla ağ - Ağ başına birden fazla sağlayıcı (bu, yükü sağlayıcılar arasında bölme ve bir Graph Düğümü'nün deneyimsel Firehose desteği gibi daha ucuz sağlayıcıları tercih etmesi ile tam düğümlerin yanı sıra arşiv düğümlerinin yapılandırılmasına da izin verebilir). - Özellikler, kimlik doğrulama ve sağlayıcı türü gibi ek sağlayıcı detayları (deneysel Firehose desteği için) -The `[chains]` section controls the ethereum providers that graph-node connects to, and where blocks and other metadata for each chain are stored. The following example configures two chains, mainnet and kovan, where blocks for mainnet are stored in the vip shard and blocks for kovan are stored in the primary shard. 
The mainnet chain can use two different providers, whereas kovan only has one provider. +`[chains]` bölümü, graph-node'un bağlandığı Ethereum sağlayıcılarını kontrol eder ve her zincir için blokların ve diğer meta verilerin nerede depolandığını belirler. Aşağıdaki örnek, ana ağ ve kovan olmak üzere iki zinciri yapılandırır; ana ağ blokları vip parçasında, kovan blokları ise birincil parçada depolanır. Ana ağ zinciri iki farklı sağlayıcı kullanabilirken, kovan yalnızca bir sağlayıcıya sahiptir. ```toml [chains] @@ -210,50 +210,50 @@ shard = "primary" provider = [ { label = "kovan", url = "http://..", features = [] } ] ``` -Read more about provider configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers). +Sağlayıcı yapılandırması hakkında daha fazla bilgi için [bu yazıyı](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers) okuyun. ### Ortam değişkenleri -Graph Node supports a range of environment variables which can enable features, or change Graph Node behaviour. These are documented [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md). +Graph Düğümü, özellikleri etkinleştirebilecek veya Graph Düğümünün davranışını değiştirebilecek çeşitli ortam değişkenlerini destekler. Bunlar [burada](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md) belgelenmiştir. ### Sürekli dağıtım Gelişmiş yapılandırmaya sahip ölçeklendirilmiş bir dizinleme kurulumu işleten kullanıcılar, Graph Düğümler'ini Kubernetes ile yönetmekten faydalanabilirler. -- The indexer repository has an [example Kubernetes reference](https://github.com/graphprotocol/indexer/tree/main/k8s) -- [Launchpad](https://docs.graphops.xyz/launchpad/intro) is a toolkit for running a Graph Protocol Indexer on Kubernetes maintained by GraphOps. It provides a set of Helm charts and a CLI to manage a Graph Node deployment. 
+- Endeksleyici GitHub deposunda bir [örnek Kubernetes referansı](https://github.com/graphprotocol/indexer/tree/main/k8s) bulunmaktadır +- [Launchpad](https://docs.graphops.xyz/launchpad/intro), GraphOps tarafından geliştirilen ve Kubernetes üzerinde bir Graph Protokolü Endeksleyicisi çalıştırmak için kullanılan bir araç setidir. Bir Graph Düğümü dağıtımını yönetmek için bir dizi Helm şeması ve bir CLI sağlar. ### Graph Düğümü Yönetimi -Çalışan bir Graph Düğümüne (veya Graph Düğümlerine) sahip olunduktan sonra, dağıtılan subgraplar'ın bu düğümler üzerinde yönetilmesi zorluğu ortaya çıkar. Subgraphlar'ı yönetmeye yardımcı olmak için Graph Düğümü, bir dizi araç sunar. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Kayıt tutma -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. -In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). +Ek olarak, `GRAPH_LOG_QUERY_TIMING`'i `gql` olarak ayarlamak, GraphQL sorgularının nasıl çalıştığı hakkında daha fazla ayrıntı sağlar (ancak bu, büyük bir günlük hacmi oluşturacaktır). -#### Monitoring & alerting +#### İzleme & uyarma Graph Düğümü, varsayılan olarak 8040 port'undaki Prometheus uç noktası aracılığıyla metrikleri sağlar. Ardından Grafana, bu metrikleri görselleştirmek için kullanılabilir. 
-The indexer repository provides an [example Grafana configuration](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml). +Endeksleyici deposunda [örnek bir Grafana yapılandırması](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml) bulunmaktadır. #### Graphman -`graphman` is a maintenance tool for Graph Node, helping with diagnosis and resolution of different day-to-day and exceptional tasks. +`graphman`, Graph Düğümü için bir bakım aracıdır; çeşitli günlük ve olağanüstü görevlerin teşhisine ve çözümüne yardımcı olur. -The graphman command is included in the official containers, and you can docker exec into your graph-node container to run it. It requires a `config.toml` file. +graphman komutu resmi konteynerlerde mevcuttur; çalıştırmak için `docker exec` ile graph-node konteynerinize girebilirsiniz. Bunun için bir `config.toml` dosyası gereklidir. -Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` +`graphman` komutlarının tam dokümantasyonu Graph Düğümü deposunda mevcuttur. Graph Düğümü `/docs` (dokümanlar) dizini içindeki [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) dosyasına bakın. -### Subgraphlarla çalışma +### Working with Subgraphs #### İndeksleme durum API'si -Varsayılan olarak 8030/graphql port'unda mevcut olan indeksleme durumu API'si, farklı subgraphlar için indeksleme durumunu ve ispatlarını kontrol etmek, subgraph özelliklerini incelemek ve daha fazlasını yapmak için çeşitli yöntemler sunar. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.
-The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). +Tam şemaya [buradan](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql) ulaşabilirsiniz. #### Endeksleme performansı @@ -263,12 +263,12 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ - Uygun işleyicilerle sırayla olayları işleme (bu, durumu sormak için zincire çağrı yapmayı ve depodan veri getirmeyi içerebilir) - Elde edilen verileri depoya yazma -Bu aşamalar boru hattında (yani eşzamanlı olarak yürütülebilir), ancak birbirlerine bağımlıdırlar. Subgraphlar'ın indekslenmesi yavaş olduğunda, bunun altındaki neden spesifik subgraphlar'a bağlı olacaktır. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. İndeksleme yavaşlığının yaygın nedenleri: -- Time taken to find relevant events from the chain (call handlers in particular can be slow, given the reliance on `trace_filter`) -- Making large numbers of `eth_calls` as part of handlers +- Zincirde ilgili olayları bulmak için geçen süre (`trace_filter`a bağlı olmalarından dolayı, özellikle çağrı işleyicileri yavaş olabilir) +- İşleyicilerin içinde çok fazla `eth_calls` çağrısı yapmak - Yürütme sırasında büyük miktarda depolama etkileşimi - Depoya kaydedilecek büyük miktarda veri - İşlenecek büyük miktarda olay @@ -276,35 +276,35 @@ Bu aşamalar boru hattında (yani eşzamanlı olarak yürütülebilir), ancak bi - Sağlayıcının zincir başından geriye düşmesi - Sağlayıcıdan zincir başındaki yeni makbuzların alınmasındaki yavaşlık -Subgraph indeksleme metrikleri, indeksleme yavaşlığının temel nedenini teşhis etmede yardımcı olabilir. 
Bazı durumlarda, sorun subgraph'ın kendisiyle ilgilidir, ancak diğer durumlarda, geliştirilmiş ağ sağlayıcıları, azaltılmış veritabanı çekişmesi ve diğer yapılandırma iyileştirmeleri indeksleme performansını belirgin şekilde artırabilir. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Başarısıız subgraphlar +#### Failed Subgraphs -İndekslemesi sırasında subgraphlar beklenmedik veri, beklendiği gibi çalışmayan bir bileşen veya olay işleyicilerinde veya yapılandırmada bir hata olması durumunda başarısız olabilir. İki genel başarısızlık türü mevcuttur: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Deterministik başarısızlıklar: Bu, yeniden denemelerle çözülmeyecek hatalardır - Deterministik olmayan başarısızlıklar: Bunlar, sağlayıcının sorunları veya beklenmedik bir Graph Düğüm hatası gibi nedenlere bağlı olabilir. Deterministik olmayan bir başarısızlık meydana geldiğinde Graph Düğümü, başarısız olan işleyicileri yeniden deneyecek ve zamanla geri çekilecektir. -Bazı durumlarda, başarısızlık indeksleyici tarafından çözülebilir (örneğin, hatanın doğru türde sağlayıcıya sahip olmamasından kaynaklanması durumunda, gerekli sağlayıcı eklenirse indeksleme devam ettirilebilir). Ancak diğer durumlarda, subgraph kodunda bir değişiklik gereklidir. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. 
-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Blok ve çağrı önbelleği -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. 
In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.

Örneğin tx makbuzu etkinlik eksikliği gibi bir blok önbellek tutarsızlığı şüphesi varsa:

-1. `graphman chain list` to find the chain name.
-2. `graphman chain check-blocks <chain-name> by-number <number>` will check if the cached block matches the provider, and deletes the block from the cache if it doesn’t.
-   1. If there is a difference, it may be safer to truncate the whole cache with `graphman chain truncate <chain-name>`.
+1. Zincir ismini bulmak için `graphman chain list` komutunu kullanın.
+2. `graphman chain check-blocks <chain-name> by-number <number>` önbellekteki bloğun sağlayıcıyla eşleşip eşleşmediğini kontrol edecek ve eşleşmezse bloğu önbellekten silecektir.
+   1. Bir fark varsa, `graphman chain truncate <chain-name>` ile tüm önbelleği kırpmak daha güvenli olabilir.
2. Blok sağlayıcıyla eşleşirse, sorun doğrudan sağlayıcıya karşı hata ayıklanabilir.

#### Sorgulama sorunları ve hataları

-Bir subgraph indekslendikten sonra, indeksleyiciler subgraph'ın ayrılmış sorgu son noktası aracılığıyla sorguları sunmayı bekleyebilirler. İndeksleyiciler önemli sorgu hacmi sunmayı umuyorlarsa, bunun için ayrılmış bir sorgu düğümü önerilir ve çok yüksek sorgu hacimleri durumunda indeksleyiciler sorguların indeksleme sürecini etkilememesi için replika shardlar yapılandırmak isteyebilirler.
+Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
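As a sketch, the block-cache verification steps described above might be run like this in a Docker-based deployment (the container name `graph-node`, the config path, the chain name `mainnet`, and the block number are all placeholders, not taken from this page):

```shell
# Assumed container name and config path; adjust to your deployment.
# List chains to find the chain name.
docker exec -it graph-node graphman --config /etc/graph-node/config.toml chain list

# Check whether the cached copy of a suspect block matches the provider.
docker exec -it graph-node graphman --config /etc/graph-node/config.toml \
  chain check-blocks mainnet by-number 17000000
```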
Bununla birlikte, özel bir sorgu düğümü ve replikalarda bile, belirli sorguların yürütülmesi uzun zaman alabilir, bazı durumlarda bellek kullanımını artırabilir ve diğer kullanıcılar için sorgu süresini olumsuz etkileyebilir.

@@ -312,15 +312,15 @@ Tek bir "sihirli çözüm" yoktur, ancak yavaş sorguların önlenmesi, teşhisi

##### Sorgu önbellekleme

-Graph Node caches GraphQL queries by default, which can significantly reduce database load. This can be further configured with the `GRAPH_QUERY_CACHE_BLOCKS` and `GRAPH_QUERY_CACHE_MAX_MEM` settings - read more [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching).
+Graph Düğümü, varsayılan olarak GraphQL sorgularını önbelleğe alır, bu da veritabanı yükünü önemli ölçüde azaltabilir. `GRAPH_QUERY_CACHE_BLOCKS` ve `GRAPH_QUERY_CACHE_MAX_MEM` ayarlarıyla başka yapılandırmalar da uygulanabilir. Daha fazla bilgi için [bu linkteki dokümantasyonu](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching) okuyun.

##### Sorguların analizi

-Sorunlu sorgular genellikle iki şekilde ortaya çıkar. Bazı durumlarda, kullanıcılar kendileri belirli bir sorgunun yavaş olduğunu bildirirler. Bu durumda zorluk, yavaşlığın nedenini teşhis etmektir - genel bir sorun mu, yoksa subgraph'a veya sorguya özgü mü olduğunu belirlemek ve tabii ki mümkünse sonra çözmek olacaktır.
+Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible.

Diğer durumlarda, tetikleyici sorgu düğümünde yüksek bellek kullanımı olabilir; bu durumda zorluk, ilk olarak soruna neden olan sorguyu belirlemektir.

-Indexers can use [qlog](https://github.com/graphprotocol/qlog/) to process and summarize Graph Node's query logs.
`GRAPH_LOG_QUERY_TIMING` can also be enabled to help identify and debug slow queries.
+Endeksleyiciler, Graph Düğümünün sorgu günlüklerini işlemek ve özetlemek için [qlog](https://github.com/graphprotocol/qlog/) aracını kullanabilirler. Yavaş sorguları tanımlayıp hata ayıklamak amacıyla `GRAPH_LOG_QUERY_TIMING` parametresi de etkinleştirilebilir.

Yavaş bir sorgu verildiğinde, indeksleyicilerin birkaç seçeneği vardır. Tabii ki, sorunlu sorgunun gönderilme maliyetini önemli ölçüde artırmak için maliyet modelini değiştirebilirler. Bu, o sorgunun sıklığında azalmaya neden olabilir. Ancak, genellikle sorunun temel nedenini çözmez.

@@ -328,18 +328,18 @@ Yavaş bir sorgu verildiğinde, indeksleyicilerin birkaç seçeneği vardır. Ta

Varlıkları depolayan veritabanı tablolarının genellikle iki çeşit olduğu görünmektedir: oluşturulduktan sonra hiçbir zaman güncellenmeyen, mesela finansal işlemler listesine benzer şeyler saklayan 'işlemimsi' tablolar ve varlıkların çok sık güncellendiği, mesela her işlem kaydedildiğinde değiştirilen finansal hesaplar gibi şeyler saklayan 'hesabımsı' tablolar. Hesabımsı tablolar, birçok varlık sürümü içermelerine rağmen, nispeten az sayıda farklı varlığa sahip olmalarıyla bilinir. Çoğu durumda, böyle bir tabloda farklı varlık sayısı, toplam satır (varlık sürümleri) sayısının %1'ine eşittir.

-For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
+Hesap benzeri (account-like) tablolar, sık sık güncellenen veriler içerdiğinden, PostgreSQL bu tür değişiklikleri depolarken eski satır sürümlerini de bir süre saklar. Ancak, en yeni bloklara ait sürümler genellikle tablonun küçük bir bölümünde bulunur. `graph-node`, bu yapıyı dikkate alarak sorgular oluşturur ve böylece veriye daha verimli bir şekilde erişilmesini sağlar.
-The command `graphman stats show <sgdNNN>` shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+`graphman stats show <sgdNNN>` komutu, bir dağıtımda her bir varlık türü/tablosu için, kaç farklı varlık olduğunu ve her tablonun kaç varlık sürümü içerdiğini gösterir. Bu veri, Postgres'in içsel tahminlerine dayalıdır ve bu nedenle zorunlu olarak kesin değildir. Ayrıca bu veri, bir büyüklük derecesi kadar hatalı olabilir. `entities` sütunundaki bir `-1` değeri, Postgres'in tüm satırların farklı bir varlık içerdiğine inandığını gösterir.

-In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show <sgdNNN> <table>` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
+Genel olarak, farklı varlıkların sayısının, toplam satır/varlık sürümleri sayısının %1'inden daha az olduğu tablolar, hesap benzeri optimizasyon için iyi adaylardır. `graphman stats show` çıktısı bir tablonun bu optimizasyondan faydalanabileceğini gösterdiğinde, `graphman stats show <sgdNNN> <table>` komutunu çalıştırmak tablonun tam bir sayımını yapacaktır. Bu sayım yavaş olabilir ama farklı varlıkların genel varlık sürümlerine oranını tam olarak ölçer.

-Once a table has been determined to be account-like, running `graphman stats account-like <sgdNNN>.<table>` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear <sgdNNN>.<table>` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.
+Bir tablonun hesap benzeri olduğuna karar verildikten sonra, `graphman stats account-like <sgdNNN>.<table>` komutunu çalıştırmak, bu tabloya yapılan sorgular için hesap benzeri optimizasyonu etkinleştirecektir. Optimizasyon, `graphman stats account-like --clear <sgdNNN>.<table>` ile tekrar kapatılabilir. Sorgu düğümlerinin optimizasyonun açıldığını veya kapatıldığını fark etmesi 5 dakikayı bulabilir. Optimizasyonu açtıktan sonra, değişikliğin o tablo için sorguları yavaşlatmadığını doğrulamak gerekir. Grafana'yı Postgres'i izlemek için yapılandırdıysanız, `pg_stat_activity`'de çok sayıda, birkaç saniyeden uzun süren, yavaş sorgular görünecektir. Bu durumda optimizasyonun kapatılması gereklidir.

-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.

-#### Subgraphları kaldırma
+#### Removing Subgraphs

> Bu, Graph Node 0.29.x sürümünde kullanılabilir olan yeni bir fonksiyonelliktir

-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
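As a hedged sketch of the `graphman drop` usage described above (the container name, config path, and deployment identifier are placeholders; `graphman drop` itself and the accepted identifier forms come from this page):

```shell
# Assumed: graphman available inside the official graph-node container.
# The deployment can be a Subgraph name, an IPFS hash `Qm..`,
# or a database namespace such as sgd42.
docker exec -it graph-node \
  graphman --config /etc/graph-node/config.toml drop sgd42
```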
diff --git a/website/src/pages/tr/indexing/tooling/graphcast.mdx b/website/src/pages/tr/indexing/tooling/graphcast.mdx index d0bce650e2ae..7b5520491169 100644 --- a/website/src/pages/tr/indexing/tooling/graphcast.mdx +++ b/website/src/pages/tr/indexing/tooling/graphcast.mdx @@ -4,18 +4,18 @@ title: Graphcast ## Giriş -Is there something you'd like to learn from or share with your fellow Indexers in an automated manner, but it's too much hassle or costs too much gas? +Diğer Endeksleyicilerle otomatik bir şekilde bilgi alışverişi yapmak istiyorsunuz, ancak bu çok mu zahmetli veya çok mu fazla gaz ücreti gerektiriyor? -Currently, the cost to broadcast information to other network participants is determined by gas fees on the Ethereum blockchain. Graphcast solves this problem by acting as an optional decentralized, distributed peer-to-peer (P2P) communication tool that allows Indexers across the network to exchange information in real time. The cost of exchanging P2P messages is near zero, with the tradeoff of no data integrity guarantees. Nevertheless, Graphcast aims to provide message validity guarantees (i.e. that the message is valid and signed by a known protocol participant) with an open design space of reputation models. +Şu anda, diğer ağ katılımcılarına bilgi yayınlama maliyeti Ethereum blokzinciri üzerindeki gaz ücretlerine göre belirlenmektedir. Graphcast, ağdaki Endeksleyicilerin gerçek zamanlı bilgi alışverişi yapmalarına olanak tanıyan isteğe bağlı, merkezi olmayan, dağıtılmış, ve eşler arası (P2P) bir iletişim aracı olarak bu sorunu çözmektedir. Eşler arası mesaj alışverişi maliyeti neredeyse sıfırdır, ancak veri bütünlüğü garantisi sağlanmaz. Buna rağmen, Graphcast, açık bir itibar modeli tasarım alanıyla, mesajın geçerli ve bilinen bir protokol katılımcısı tarafından imzalandığını garanti eden bir mesaj geçerlilik garantisi sağlamayı amaçlamaktadır. 
-The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: +The Graphcast SDK (Software Development Kit), geliştiricilerin Radyo adı verilen ve dedikodu protokolüyle çalışan uygulamalar oluşturmasını sağlar. Endeksleyiciler, belirli bir amacı yerine getirmek için bu Radyoları çalıştırabilir. Ayrıca, aşağıdaki kullanım senaryoları için birkaç Radyo oluşturmayı (veya Radyo geliştirmek isteyen diğer geliştiricilere/ekiplere destek sağlamayı) planlıyoruz: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. -- Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Graph düğüm sürümü, Postgres sürümü, Ethereum istemcisi sürümü gibi yığın bilgileri üzerinde kendi kendine raporlama. 
### Daha Fazla Bilgi Edin -If you would like to learn more about Graphcast, [check out the documentation here.](https://docs.graphops.xyz/graphcast/intro) +Graphcast hakkında daha fazla bilgi almak isterseniz, [belgelere buradan ulaşabilirsiniz.](https://docs.graphops.xyz/graphcast/intro) diff --git a/website/src/pages/tr/resources/benefits.mdx b/website/src/pages/tr/resources/benefits.mdx index cb9b6e71d129..34e23eccc9cc 100644 --- a/website/src/pages/tr/resources/benefits.mdx +++ b/website/src/pages/tr/resources/benefits.mdx @@ -1,17 +1,17 @@ --- -title: The Graph vs. Self Hosting +title: The Graph ve Kendi Sunucunda Barındırmanın Karşılaştırması socialImage: https://thegraph.com/docs/img/seo/benefits.jpg --- The Graph’s decentralized network has been engineered and refined to create a robust indexing and querying experience—and it’s getting better every day thanks to thousands of contributors around the world. -The benefits of this decentralized protocol cannot be replicated by running a `graph-node` locally. The Graph Network is more reliable, more efficient, and less expensive. +Bu merkeziyetsiz protokolün sunduğu faydalar, `graph-node`'u yerel olarak çalıştırarak kopyalanamaz. The Graph Ağı daha güvenilir, daha verimli ve daha az maliyetlidir. 
Here is an analysis: -## Why You Should Use The Graph Network +## Neden Graph Ağını Kullanmalısınız -- Significantly lower monthly costs +- Önemli ölçüde daha düşük aylık maliyet - $0 infrastructure setup costs - Superior uptime - Dünya çapındaki yüzlerce bağımsız İndeksleyiciye erişim @@ -29,25 +29,25 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar | Cost Comparison | Self Hosted | Graph Ağı | | :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | +| Aylık sunucu maliyeti\* | Aylık 350$ | 0$ | +| Sorgu maliyetleri | $0+ | $0 per month | | Engineering time | $400 per month | None, built into the network with globally distributed Indexers | | Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | +| Cost per query | 0$ | $0 | | Infrastructure | Centralized | Decentralized | | Geographic redundancy | $750+ per additional node | Included | | Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Total Monthly Costs | $750+ | 0$ | ## Medium Volume User (~3M queries per month) | Cost Comparison | Self Hosted | Graph Ağı | | :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | +| Aylık sunucu maliyeti\* | Aylık 350$ | 0$ | +| Sorgu maliyetleri | $500 per month | $120 per month | | Engineering time | $800 per month | None, built into the network with globally distributed Indexers | | Queries per month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | +| Cost per query | 0$ | $0.00004 | | Infrastructure | Centralized | Decentralized | | Engineering expense | $200 per hour | Included | | Geographic redundancy | $1,200 in total costs per additional node | Included | @@ -58,12 +58,12 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar | Cost Comparison | Self Hosted | Graph Ağı | | :-: | :-: | :-: | 
-| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | +| Aylık sunucu maliyeti\* | $1100 per month, per node | 0$ | +| Sorgu maliyetleri | $4000 | $1,200 per month | | Number of nodes needed | 10 | Not applicable | | Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | | Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | +| Cost per query | 0$ | $0.00004 | | Infrastructure | Centralized | Decentralized | | Geographic redundancy | $1,200 in total costs per additional node | Included | | Uptime | Varies | 99.9%+ | @@ -73,20 +73,21 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Engineering time based on $200 per hour assumption -Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. +Veri tüketicisi için maliyeti yansıtır. Ücretsiz Plan kullanılarak yapılan sorgular için de Endeksleyicilere +sorgu ücretleri ödenir. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. 
-Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). -## No Setup Costs & Greater Operational Efficiency +## Kurulum Maliyeti Yok & Daha Yüksek Operasyonel Verimlilik -Zero setup fees. Get started immediately with no setup or overhead costs. No hardware requirements. No outages due to centralized infrastructure, and more time to concentrate on your core product . No need for backup servers, troubleshooting, or expensive engineering resources. +Sıfır kurulum ücreti. Kurulum veya ek sabit maliyet olmadan hemen başlayın. Donanım gereksinimi yok. Merkezi altyapı kaynaklı kesintiler yaşanmaz ve temel ürününüze odaklanmak için daha fazla zamanınız olur. Yedek sunuculara, sorun gidermeye veya pahalı mühendislik kaynaklarına ihtiyaç duymazsınız. -## Reliability & Resiliency +## Güvenilirlik & Dayanıklılık -The Graph’s decentralized network gives users access to geographic redundancy that does not exist when self-hosting a `graph-node`. Queries are served reliably thanks to 99.9%+ uptime, achieved by hundreds of independent Indexers securing the network globally. +The Graph'in merkeziyetsiz ağı, kullanıcılarına `graph-node`'u kendileri barındırdıklarında mevcut olmayan coğrafi yedeklilik sağlar. Sorgular, ağı küresel olarak güvence altına alan yüzlerce bağımsız Endeksleyici sayesinde %99,9+ çalışma süresi ile güvenilir bir şekilde sunulmaktadır. -Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. +Sonuç olarak: The Graph Ağı, yerel olarak bir `graph-node` çalıştırmaya kıyasla daha ucuzdur, kullanımı daha kolaydır ve üstün sonuçlar üretir. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/tr/resources/glossary.mdx b/website/src/pages/tr/resources/glossary.mdx index ffcd4bca2eed..75f3b5e3cb0b 100644 --- a/website/src/pages/tr/resources/glossary.mdx +++ b/website/src/pages/tr/resources/glossary.mdx @@ -1,83 +1,83 @@ --- -title: Glossary +title: Sözlük --- -- **The Graph**: A decentralized protocol for indexing and querying data. +- **The Graph**: Verileri endekslemek ve sorgulamak için merkeziyetsiz bir protokol. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. -- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Endeksleyici**: Blokzincirlerinden veri endekslemek ve GraphQL sorgularını sunmak için endeksleme düğümleri çalıştıran ağ katılımcıları. -- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. +- **Endeksleyici Gelir Akışları**: Endeksleyiciler, sorgu ücreti iadeleri ve endeksleme ödülleri olmak üzere iki bileşenle GRT cinsinden ödüllendirilir. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. -- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. +- **Endeksleyicinin Kendi İstifi (Self-Staking)**: Endeksleyicilerin merkeziyetsiz ağda yer almak için istifledikleri GRT miktarı. Minimum 100.000 GRT olup üst sınır yoktur. 
-- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. +- **Delegasyon Kapasitesi**: Bir Endeksleyicinin Delegatörlerden kabul edebileceği maksimum GRT miktarı. Endeksleyiciler, yalnızca kendi Endeksleyici istiflerinin 16 katına kadar kabul edebilir. Ek delegasyon, ödüllerin seyreltilmesine yol açar. Örneğin, bir Endeksleyicinin kendi istifi 1M GRT ise, delegasyon kapasitesi 16M'dir. Ancak Endeksleyiciler, kendi istiflerini artırarak delegasyon kapasitelerini artırabilirler. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. -- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. +- **Delegasyon Vergisi**: Delegatörlerin Endeksleyicilere GRT delege ettiklerinde ödedikleri %0,5'lik bir ücrettir. 
Ücreti ödemek için kullanılan GRT yakılır. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. -- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. +- **Dönem**: Ağdaki bir zaman birimi. Şu anda, bir dönem 6.646 blok veya yaklaşık 1 gündür. 
-- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. 
If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fisherman'ler**: The Graph Ağı içinde Endeksleyiciler tarafından sağlanan verilerin doğruluğunu ve bütünlüğünü izleyen katılımcıların üstlendiği bir roldür. Bir Fisherman, hatalı olduğuna inandığı bir sorgu yanıtı veya POI tespit ettiğinde, Endeksleyiciye karşı bir itiraz başlatabilir. İtiraz Fisherman lehine sonuçlanırsa, Endeksleyicinin kendisine ait istifin %2,5'i kesilir. Bu miktarın %50'si, göstermiş olduğu dikkat için ödül olarak Fisherman'e verilir. Kalan miktarın %50'si dolaşımdan kaldırılır (yakılır). Bu mekanizma, Fishermen'lerin ağın güvenilirliğini korumaya yardımcı olmalarını ve Endeksleyicilerin sağladıkları verilerden sorumlu tutulmalarını teşvik etmek için tasarlanmıştır. -- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. 
Their goal is to maximize the utility and reliability of The Graph Network. +- **Arabulucular**: Yönetişim süreciyle atanan ağ katılımcılarıdır. Arbitratörlerin rolü, endeksleme ve sorgu anlaşmazlıklarının sonucuna karar vermektir. Amaçları, The Graph Ağı'nın faydasını ve güvenilirliğini en üst düzeye çıkarmaktır. -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. +- **Kesinti (Slashing)**: Endeksleyiciler, hatalı bir POI (endeksleme kanıtı) sağlamaları veya hatalı veri sunmaları halinde kendi istifledikleri GRT'lerinin kesilmesiyle karşı karşıya kalabilirler. Kesinti yüzdesi, şu anda bir Endeksleyicinin kendi istifinin %2,5'i olarak ayarlanmış bir protokol parametresidir. Kesilen GRT'nin %50'si, yanlış veriyi veya hatalı POI'yi bildiren Fisherman'e gider. Kalan %50'si yakılır. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. -- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. +- **Delegasyon Ödülleri**: Delegatörlerin GRT’lerini Endeksleyicilere delege etmeleri karşılığında aldıkları ödüllerdir. Delegasyon ödülleri GRT olarak dağıtılır. -- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. +- **GRT**: The Graph'in fayda token'ı. GRT, ağ katılımcılarına ağa katkıda bulunmaları için ekonomik teşvikler sağlar.
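The slashing split in the glossary entries above (2.5% of self-stake slashed, half to the Fisherman, half burned) is simple arithmetic. A minimal TypeScript sketch for illustration only — the function and constant names are hypothetical, not part of any protocol contract or SDK; basis points are used so the math stays exact:

```typescript
// Hypothetical sketch of the slashing split described above:
// 2.5% of an Indexer's self-stake is slashed; 50% goes to the
// disputing Fisherman, 50% is burned. 1 basis point = 0.01%.
const SLASH_RATE_BPS = 250; // 2.5%

function slashBreakdown(selfStakeGRT: number) {
  const slashed = (selfStakeGRT * SLASH_RATE_BPS) / 10_000;
  return { slashed, toFisherman: slashed / 2, burned: slashed / 2 };
}

// A 100,000 GRT self-stake (the protocol minimum) loses 2,500 GRT:
// 1,250 to the Fisherman, 1,250 burned.
console.log(slashBreakdown(100_000));
```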
-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. +- **The Graph İstemcisi**: Merkezi olmayan bir şekilde GraphQL tabanlı dapp'ler geliştirmeyi sağlayan bir kütüphane. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. 
+- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. -- **Graph CLI**: A command line interface tool for building and deploying to The Graph. +- **Graph CLI**: The Graph'e dağıtım ve geliştirme yapmayı sağlayan bir komut satırı arayüz aracı. -- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. +- **Soğuma Dönemi**: Delegasyon parametrelerini değiştiren bir Endeksleyicinin bunu tekrar yapabilmesi için geçmesi gereken süre. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). 
diff --git a/website/src/pages/tr/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/tr/resources/migration-guides/assemblyscript-migration-guide.mdx index 0e0082ff79f5..866954f3a5ec 100644 --- a/website/src/pages/tr/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/tr/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Geçiş Rehberi --- -Şu ana kadar subgraph'ler, [AssemblyScript'in ilk versiyonlarından birini](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6) kullanıyordu. Nihayet, [en yeni versiyonu](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) için destek ekledik! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Bu, subgraph geliştiricilerinin AS dilinin ve standart kütüphanenin daha yeni özelliklerini kullanmasını sağlayacak. +That will enable Subgraph developers to use newer features of the AS language and standard library. Bu rehber, `graph-cli`/`graph-ts` araçlarının `0.22.0` ve öncesi versiyonlarını kullanan herkes için geçerlidir. Eğer halihazırda bu versiyonun üstünde (veya ona eşit) bir versiyon kullanıyorsanız, zaten AssemblyScript'in `0.19.10` versiyonunu kullanıyorsunuz demektir 🙂 -> Not: `0.24.0` itibarıyla, `graph-node`, subgraph manifestosunda belirtilen `apiVersion`'e bağlı olarak her iki versiyonu da destekleyebilir. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Özellikler @@ -44,7 +44,7 @@ Bu rehber, `graph-cli`/`graph-ts` araçlarının `0.22.0` ve öncesi versiyonlar ## Nasıl yükseltilir? -1. 
Eşlemlerinizdeki `apiVersion` değerini `subgraph.yaml` dosyasında `0.0.6` olarak değiştirin: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // Değer null ise çalıştırma esnasında hata verir maybeValue.aMethod() ``` -Emin olamadığınızda daima güvenli sürümü kullanmanızı öneririz. Değer mevcut değilse, subgraph işleyicinizde erken bir if kontrolü yaparak işlemi sonlandırabilirsiniz. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Değişken Gölgeleme @@ -132,7 +132,7 @@ Eğer değişken gölgeleme yapıyorsanız, yinelenen değişkenlerinizi yeniden ### Null Karşılaştırmaları -Subgraph'inizi yükselttikten sonra bazı noktalarda şu tür hatalar alabilirsiniz: +When upgrading your Subgraph, you might sometimes get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // olması gerektiği gibi derleme hatası vermez ``` -AssemblyScript derleyicisine bu sorunu bildirdik. Ancak subgraph eşlemlerinizde bu tür işlemleri yapıyorsanız, şimdilik önce bir null değer kontrolü yapacak şekilde kodunuzu değiştirmelisiniz. +We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Kod derlenir ancak çalıştırma esnasında kırılır. Bu da değerin ilklendirilmemiş olmasından kaynaklanır.
Bu yüzden subgraph'inizin değerlerini aşağıdaki gibi ilklendirdiğinden emin olun: +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this: ```typescript var value = new Type() // ilklendirme diff --git a/website/src/pages/tr/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/tr/resources/migration-guides/graphql-validations-migration-guide.mdx index 3d1d0f76fb6b..aa5fafcab761 100644 --- a/website/src/pages/tr/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/tr/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: GraphQL Validasyon Geçiş Kılavuzu +title: GraphQL Validations Migration Guide --- Yakında "graph-node", [GraphQL Validasyon Özelliklerinin](https://spec.graphql.org/June2018/#sec-Validation)'in %100'ünü destekleyecektir. @@ -20,7 +20,7 @@ Bu doğrulamalarla uyumlu olmak için lütfen taşıma kılavuzunu takip edin. GraphQL işlemlerinizdeki sorunları bulmak ve düzeltmek için CLI taşıma aracını kullanabilirsiniz. Alternatif olarak, `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` uç noktasını kullanmak için GraphQL istemcinizin uç noktasını güncelleyebilirsiniz. Sorgularınızı bu uç noktaya göre test etmek, sorgularınızdaki sorunları bulmanıza yardımcı olacaktır. -> Tüm subgraph'lerin taşınması gerekmez, [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen) kullanıyorsanız zaten sorgularınızın geçerli olmasını sağlarlar. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## Geçiş CLI Aracı diff --git a/website/src/pages/tr/resources/roles/curating.mdx b/website/src/pages/tr/resources/roles/curating.mdx index 33d63ae0f0bb..414c0f8bfa2f 100644 --- a/website/src/pages/tr/resources/roles/curating.mdx +++ b/website/src/pages/tr/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Kürasyon --- -Küratörler The Graph'in merkeziyetsiz ekonomisi için kritik öneme sahiptir. Web3 ekosistemi hakkındaki bilgilerini kullanarak, The Graph Ağı tarafından endekslenmesi gereken subgraph’leri değerlendirir ve bunlara sinyal verirler. Küratörler Graph Gezgini aracılığıyla ağ verilerini inceleyerek sinyal verip vermeme kararını alır. The Graph Ağı, iyi kaliteye sahip subgraph’lere sinyal veren küratörleri, bu subgraph’lerin ürettiği sorgu ücretlerinden bir pay ile ödüllendirir. Sinyallenen GRT miktarı endeksleyiciler için hangi subgraph'leri endeksleyeceklerini belirlerken önemli bir faktördür. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## Sinyal Verme, The Graph Ağı için Ne Anlama Geliyor? -Bir subgraph'in tüketiciler tarafından sorgulanabilmesi için subgraph önce endekslenmelidir. İşte burada kürasyon devreye girer. Endeksleyicilerin kaliteli subgraph’lerden kayda değer sorgu ücretleri kazanabilmesi için hangi subgraph’leri endeksleyeceklerini bilmeleri gerekir. Küratörler bir subgraph’e sinyal verdiğinde bu, endeksleyicilere o subgraph’in talep gördüğünü ve yeterli kaliteye sahip olduğunu gösterir. +Before consumers can query a Subgraph, it must be indexed. 
This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Küratörler, The Graph ağını verimli hale getirirler. [Sinyalleme](#how-to-signal), Küratörlerin Endeksleyicilere hangi subgraph'in endekslenmeye uygun olduğunu bildirmelerini sağlayan süreçtir. Endeksleyiciler, bir Küratörden gelen sinyale güvenebilir çünkü sinyalleme sırasında, Küratörler subgraph için bir kürasyon payı üretir. Bu da onları subgraph'in sağladığı gelecekteki sorgu ücretlerinin bir kısmına hak sahibi kılar. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Küratör sinyalleri, Graph Kürasyon Payları (Graph Curation Shares - GCS) olarak adlandırılan ERC20 token ile temsil edilir. Daha fazla sorgu ücreti kazanmak isteyenler, GRT’lerini ağ için güçlü bir ücret akışı yaratacağını öngördükleri subgraph’lere sinyal vermelidir. Küratörler kötü davranışları nedeniyle cezalandırılmaz (slashing uygulanmaz), ancak ağın bütünlüğüne zarar verebilecek kötü kararları caydırmak için bir depozito vergisi bulunur. Düşük kaliteli bir subgraph üzerinde kürasyon yapan Küratörler, daha az sorgu olduğu ya da daha az Endeksleyici tarafından işlendiği için daha az sorgu ücreti kazanır. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. 
Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -[Sunrise Yükseltme Endeksleyici](/archived/sunrise/#what-is-the-upgrade-indexer) tüm subgraph'lerin endekslenmesini sağlar. Belirli bir subgraph'e GRT sinyallenmesi o subgraph'e daha fazla endeksleyici çeker. Kürasyon yoluyla ek Endeksleyicilerin teşvik edilmesi, sorgu hizmetinin kalitesini artırmayı amaçlar ve ağ erişilebilirliğini artırarak gecikmeyi azaltır. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -Sinyal verirken, Küratörler belirli bir subgraph sürümüne sinyal vermeyi veya otomatik geçiş (auto-migrate) özelliğini kullanmayı seçebilirler. Eğer otomatik geçiş özelliğini kullanarak sinyal verirlerse, bir küratörün payları her zaman geliştirici tarafından yayımlanan en son sürüme göre güncellenir. Bunun yerine belirli bir sürüme sinyal vermeyi seçerlerse, paylar her zaman bu belirli sürümdeki haliyle kalır. +Endeksleyiciler, Graph Gezgini'nde gördükleri küratörlük sinyallerine göre endeksleyecekleri subgraph'leri bulabilirler.
+If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Endeksleyiciler, Graph Gezgini'nde gördükleri kürasyon sinyallerine dayanarak endeksleyecekleri subgraph’leri bulabilirler (aşağıdaki ekran görüntüsüne bakın). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Gezgin subgraph'leri](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Nasıl Sinyal Verilir -Graph Gezgini'ndeki Küratör sekmesi içinde, küratörler ağ istatistiklerine dayalı olarak belirli subgraph'lere sinyal verip kaldırabilecekler. Bunu Graph Gezgini'nde nasıl yapacağınıza dair adım adım bir genel bakış için, [buraya tıklayın.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Bir küratör, belirli bir subgraph sürümü üzerinde sinyal vermeyi seçebilir veya sinyalinin otomatik olarak o subgraph'in en yeni üretim sürümüne taşınmasını tercih edebilir. Her iki strateji de geçerli olup kendi avantaj ve dezavantajlarına sahiptir. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Belirli bir sürüme sinyal vermek, özellikle bir subgraph birden fazla dapp tarafından kullanıldığında faydalıdır. Bir dapp, subgraph'ini yeni özelliklerle düzenli olarak güncellemek isteyebilir. Diğer bir dapp ise daha eski, iyi test edilmiş bir subgraph sürümünü kullanmayı tercih edebilir. İlk kürasyon sırasında, %1'lik standart bir vergi alınır. 
+Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Sinyalinizin otomatik olarak en yeni üretim sürümüne geçiş yapması, sorgu ücretlerini biriktirmeye devam etmenizi sağlamak açısından değerli olabilir. Her kürasyon yaptığınızda %1'lik bir kürasyon vergisi uygulanır. Ayrıca her geçişte %0,5'lik bir kürasyon vergisi ödersiniz. Subgraph geliştiricilerinin sık sık yeni sürümler yayımlaması teşvik edilmez - geliştiriciler otomatik olarak taşınan tüm kürasyon payları için %0,5 kürasyon vergisi ödemek zorundadırlar. -> **Not**: Belirli bir subgraph'e ilk kez sinyal veren adres ilk küratör olarak kabul edilir. Bu ilk sinyal işlemi, sonraki küratörlerinkine kıyasla çok daha fazla gaz tüketen bir işlemdir. Bunun nedeni, ilk küratörün kürasyon payı token'larını ilklendirmesi ve ayrıca token'ları The Graph proxy'sine aktarmasıdır. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## GRT'nizi Çekme @@ -40,39 +40,39 @@ Küratörler, sinyal verdikleri GRT'yi istedikleri zaman çekme seçeneğine sah Yetkilendirme sürecinden farklı olarak, sinyal verdiğiniz GRT'yi çekmeye karar verirseniz bir bekleme süresiyle karşılaşmazsınız ve (%1 kürasyon vergisi düşüldükten sonra) toplam miktarı alırsınız. -Bir küratör sinyalini çektikten sonra, endeksleyiciler aktif olarak sinyal verilmiş GRT olmasa bile subgraph'i endekslemeye devam etmeyi seçebilirler. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
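The curation taxes described above (a 1% standard tax on an initial signal, and 0.5% when shares auto-migrate to a new version) can be sketched numerically. A hypothetical illustration only — the names below are made up for this example and are not part of any Graph contract or SDK; basis points keep the arithmetic exact:

```typescript
// Hypothetical sketch of the curation taxes described above: 1% on an
// initial signal, 0.5% on auto-migration to a new Subgraph version.
// The taxed GRT is burned. 1 basis point = 0.01%.
const SIGNAL_TAX_BPS = 100; // 1%
const MIGRATION_TAX_BPS = 50; // 0.5%

function afterTax(amountGRT: number, taxBps: number): number {
  return amountGRT - (amountGRT * taxBps) / 10_000;
}

// Signaling 10,000 GRT deposits 9,900 GRT after the 1% tax.
console.log(afterTax(10_000, SIGNAL_TAX_BPS)); // 9900
```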
-Ancak, küratörlerin sinyal verdikleri GRT'yi yerinde bırakmaları tavsiye edilir; bu yalnızca sorgu ücretlerinden pay almak için değil, aynı zamanda subgraph'in güvenilirliğini ve kesintisiz çalışmasını sağlamak için de önemlidir. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Riskler 1. The Graph üzerindeki sorgu pazarı henüz nispeten yenidir ve erken aşama piyasa dinamikleri nedeniyle %APY'nin beklediğinizden daha düşük olması riski mevcuttur. -2. Kürasyon Ücreti - Bir küratör bir subgraph'e GRT ile sinyal verdiğinde, %1'lik bir kürasyon vergisine tabi olur. Bu ücret yakılır. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Bir subgraph, bir hata nedeniyle başarısız olabilir. Başarısız subgraph sorgu ücreti biriktirmez. Bu sebeple, geliştiricinin hatayı düzeltip yeni bir sürüm dağıtmasını beklemeniz gerekecektir. - - Eğer bir subgraph'in en yeni sürümüne aboneyseniz, paylarınız otomatik olarak o yeni sürüme geçecektir. Bu geçiş sırasında %0,5'lik bir kürasyon vergisi uygulanır. - - Belirli bir subgraph sürümüne sinyal verdiyseniz ve bu sürüm başarısız olduysa, kürasyon paylarınızı manuel olarak yakmanız gerekir. Daha sonra yeni subgraph sürümüne sinyal verebilirsiniz; bu işlem sırasında %1'lik bir kürasyon vergisi uygulanır. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. 
+3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Kürasyon Hakkında SSS ### 1. Küratörler, sorgu ücretlerinin yüzde kaçını kazanır? -Bir subgraph'e sinyal vererek, subgraph'in ürettiği tüm sorgu ücretlerinden pay alırsınız. Tüm sorgu ücretlerinin %10'u, kürasyon paylarına orantılı olarak Küratörlere gider. Bu %10'luk oran yönetişime tabidir (yani yönetişim kararlarıyla değiştirilebilir). +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Sinyal vereceğim subgraph'lerin hangilerinin yüksek kaliteli olduğunu nasıl belirlerim? +### 2. How do I decide which Subgraphs are high quality to signal on? -Yüksek kaliteli subgraph'leri bulmak karmaşık bir iştir. Ancak bu duruma farklı şekillerde yaklaşılabilir. Bir Küratör olarak, sorgu hacmi oluşturan güvenilir subgraph'ler aramak istersiniz. 
Güvenilir bir subgraph; tamamlanmış, doğru ve bir dapp’in veri ihtiyaçlarını destekliyorsa değerli olabilir. Kötü tasarlanmış bir subgraph'in revize edilmesi veya yeniden yayımlanması gerekebilir ve ileride hata alıp çalışmayı durdurabilir. Küratörler için bir subgraph'in değerli olup olmadığını değerlendirmek için subgraph'in mimarisini veya kodunu gözden geçirmesi önemlidir. Sonuç olarak: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Küratörler, bir ağ hakkındaki bilgilerini kullanarak, belirli bir subgraph'in gelecekte daha yüksek veya daha düşük sorgu hacmi oluşturma olasılığını tahmin etmeye çalışabilirler. -- Küratörler Graph Gezgini üzerinden erişilebilen metrikleri de anlamalıdır. Geçmiş sorgu hacmi ve subgraph geliştiricisinin kim olduğu gibi metrikler, bir subgraph'in sinyal vermeye değer olup olmadığını belirlemekte yardımcı olabilir. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Bir subgraph'ı güncellemenin maliyeti nedir? +### 3. What’s the cost of updating a Subgraph? -Kürasyon paylarınızı yeni bir subgraph sürümüne taşımak %1'lik bir kürasyon vergisine tabidir. Küratörler, bir subgraph'in en yeni sürümüne abone olmayı tercih edebilir. 
Küratör payları otomatik olarak yeni bir sürüme taşındığında, Küratörler ayrıca kürasyon vergisinin yarısını (yani %0,5) öderler. Çünkü subgraph'lerin yükseltilmesi, zincir üzerinde gerçekleşen ve dolayısıyla gaz harcamayı gerektiren bir eylemdir. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. Subgraph'ımı ne sıklıkla güncelleyebilirim? +### 4. How often can I update my Subgraph? -Subgraph'ınızı çok sık güncellememeniz önerilir. Daha fazla ayrıntı için yukarıdaki soruya bakın. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Kürasyon paylarımı satabilir miyim? diff --git a/website/src/pages/tr/resources/roles/delegating/delegating.mdx b/website/src/pages/tr/resources/roles/delegating/delegating.mdx index 3f6bf5fa3daf..17e26e005666 100644 --- a/website/src/pages/tr/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/tr/resources/roles/delegating/delegating.mdx @@ -2,54 +2,54 @@ title: Delegasyon --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +Hemen delege etmeye başlamak için, [The Graph üzerinde delege et](https://thegraph.com/explorer/delegate?chain=arbitrum-one) bağlantısına göz atın. ## Genel Bakış -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +Delegatörler, GRT'lerini Endeksleyicilere delege ederek GRT kazanır, bu da ağ güvenliğine ve işlevselliğine yardımcı olur. -## Benefits of Delegating +## Delege Etmenin Avantajları -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers.
+- Endeksleyicilere destek vererek ağın güvenliğini ve ölçeklenebilirliğini güçlendirin. +- Endeksleyiciler tarafından üretilen ödüllerin bir kısmını kazanın. -## How Does Delegation Work? +## Delegasyon Nasıl Çalışır? -Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. +Delegatörler, GRT'lerini delege etmeyi seçtikleri Endeksleyici(ler)den GRT ödülleri kazanırlar. -An Indexer's ability to process queries and earn rewards depends on three key factors: +Bir Endeksleyicinin sorguları işleme kabiliyeti ve ödül kazanma durumu üç temel faktöre bağlıdır: -1. The Indexer's Self-Stake (GRT staked by the Indexer). -2. The total GRT delegated to them by Delegators. -3. The price the Indexer sets for queries. +1. Endeksleyicinin Kendisine Ait İstif (Endeksleyici tarafından istiflenen GRT). +2. Delegatörler tarafından ona delege edilen toplam GRT miktarı. +3. Endeksleyicinin sorgular için belirlediği fiyat. -The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer. +Bir Endeksleyiciye istiflenen ve delege edilen GRT miktarı ne kadar fazla olursa, o kadar fazla sorgu hizmeti sunabilir ve bu da hem Delegatör hem de Endeksleyici için daha yüksek potansiyel ödüller anlamına gelir. -### What is Delegation Capacity? +### Delegasyon Kapasitesi Nedir? -Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. +Delegasyon kapasitesi, Endeksleyicinin Delegatörlerden kabul edebileceği maksimum GRT miktarını ifade eder. Bu miktar Endeksleyicinin kendi istifine bağlıdır. -The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. +The Graph Ağı'nda delegasyon oranı 16'dır. Yani bir Endeksleyici kendi istifinin 16 katına kadar delege edilmiş GRT kabul edebilir. 
-For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +Örneğin, bir Endeksleyicinin kendi istifi 1M GRT ise, delegasyon kapasitesi 16M'dir. -### Why Does Delegation Capacity Matter? +### Delegasyon Kapasitesi Neden Önemlidir? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +Bir Endeksleyici delegasyon kapasitesini aşarsa tüm Delegatörlerin ödülleri seyrelir. Çünkü fazla delege edilmiş GRT, protokol içinde etkili bir şekilde kullanılamaz. -This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. +Bu sebeple, Delegatörlerin bir Endeksleyiciyi seçmeden önce Endeksleyicinin mevcut delegasyon kapasitesini değerlendirmesi önemlidir. -Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +Endeksleyiciler, kendi istiflerini artırarak delegasyon kapasitelerini artırabilirler, böylece delege edilmiş token'lar için limiti yükseltirler. -## Delegation on The Graph +## The Graph'ta Delegasyon -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> Lütfen bu rehberin MetaMask kurulumu gibi adımları kapsamadığını unutmayın. Ethereum topluluğu tarafından hazırlanmış, [cüzdanlar hakkında kapsamlı bir kaynağa ulaşmak için buraya tıklayın](https://ethereum.org/en/wallets/). -There are two sections in this guide: +Bu rehberde iki bölüm bulunmaktadır: - The Graph Ağı'nda token delege etmenin riskleri - Bir Delegatör olarak beklenen getirilerin nasıl hesaplandığı @@ -58,7 +58,7 @@ There are two sections in this guide: Aşağıda, protokolde Delegatör olmanın başlıca riskleri listelenmiştir. 
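The delegation-capacity rule in the paragraph above (a delegation ratio of 16, so up to 16× the Indexer's Self-Stake) can be sketched numerically. This is a minimal illustration: the ratio and the 1M GRT figure come from the docs themselves, while the function names are hypothetical and not part of any Graph API.

```python
# Sketch of the delegation-capacity rule: an Indexer can accept delegated
# GRT up to 16 times their Self-Stake. The ratio comes from the docs above;
# the function names are illustrative only.

DELEGATION_RATIO = 16

def delegation_capacity(self_stake_grt: float) -> float:
    """Maximum delegated GRT an Indexer can put to productive use."""
    return self_stake_grt * DELEGATION_RATIO

def is_over_delegated(self_stake_grt: float, delegated_grt: float) -> bool:
    """Delegation beyond capacity dilutes rewards for all Delegators."""
    return delegated_grt > delegation_capacity(self_stake_grt)

print(delegation_capacity(1_000_000))            # 1M Self-Stake -> 16M capacity
print(is_over_delegated(1_000_000, 20_000_000))  # True: the excess GRT sits idle
```

This is why the docs advise checking an Indexer's current capacity before delegating: GRT delegated past the 16× limit earns nothing extra until the Indexer raises their Self-Stake.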
-### The Delegation Tax +### Delegasyon Vergisi Delegatörler kötü davranışları nedeniyle cezalandırılmaz (slashing uygulanmaz), ancak ağın bütünlüğüne zarar verebilecek kötü kararları caydırmak için bir vergi bulunur. @@ -68,19 +68,19 @@ Bir Delegatör olarak aşağıdakileri anlamak önemlidir: - Önlem olarak, bir Endeksleyiciye delege ederken potansiyel getirilerinizi hesaplamalısınız. Örneğin, delegasyon işlemi için ödediğiniz %0,5 vergiyi geri kazanmanızın kaç gün süreceğini hesaplayabilirsiniz. -### The Undelegation Period +### Delegasyon Geri Çekme Dönemi -When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. +Bir Delegatör, delegasyonunu geri çekmeyi tercih ettiğinde, token'ları 28 günlük bir geri çekme dönemine tabi olur. -This means they cannot transfer their tokens or earn any rewards for 28 days. +Bu, 28 gün boyunca token'larını transfer edemeyecekleri veya herhangi bir ödül kazanamayacakları anlamına gelir. -After the undelegation period, GRT will return to your crypto wallet. +Delegasyonu geri çekme döneminin ardından GRT kripto cüzdanınıza geri dönecektir. ### Bu neden önemli? -If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. +Güvenilir olmayan ya da iyi iş çıkarmayan bir Endeksleyici seçerseniz delegasyonunuzu geri çekmek isteyeceksiniz. Bu durum, ödül kazanma fırsatlarını kaçıracağınız anlamına gelir. -As a result, it’s recommended that you choose an Indexer wisely. +Bundan dolayı, Endeksleyici seçimini dikkatlice yapmanız önerilir. ![Delegation unbonding. 
Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) @@ -96,25 +96,25 @@ Güvenilir bir Endeksleyici nasıl seçebileceğinizi anlamak için Delegasyon P - **Sorgu Ücreti Kesintisi** - Endeksleme Ödülü Kesintisi'ne benzer ancak Endeksleyicinin topladığı, sorgu ücretlerinden elde edilen kazançlara uygulanır. -- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. +- Hangi Endeksleyicilerin en iyi sosyal ve teknik itibara sahip olduğunu belirlemek için [The Graph Discord sunucusu](https://discord.gg/graphprotocol)'nu incelemeniz şiddetle tavsiye edilir. -- Many Indexers are active in Discord and will be happy to answer your questions. +- Çoğu Endeksleyici Discord'da aktiftir ve sorularınızı yanıtlamaktan memnuniyet duyacaktır. ## Delegatörlerin Beklenen Getirisini Hesaplama -> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +> Delegasyonunuzun yıllık getirisini (ROI) [buradan](https://thegraph.com/explorer/delegate?chain=arbitrum-one) hesaplayın. -A Delegator must consider a variety of factors to determine a return: +Bir Delegatör, kazancı belirlemek için çeşitli faktörleri hesaba katmalıdır: -An Indexer's ability to use the delegated GRT available to them impacts their rewards. +Bir Endeksleyicinin kendisine delege edilen GRT'yi kullanabilme başarısı, ödüllerini etkiler. -If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. +Eğer bir Endeksleyici elindeki bütün GRT'yi tahsis etmezse, hem kendileri hem de Delegatörleri için potansiyel kazançları en üst düzeye çıkarmayı kaçırabilirler. -Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window. 
However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. +Endeksleyiciler, 1 ila 28 günlük süre içinde herhangi bir zamanda tahsisi kapatabilir ve ödülleri toplayabilir. Ancak, ödüller zamanında toplanmazsa, ödüllerin bir kısmı talep edilmemiş olsa bile toplam ödül miktarı daha düşük görünebilir. ### Sorgu ücreti kesintisi ve endeksleme ücreti kesintisini dikkate almak -You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. +Sorgu Ücreti ve Endeksleme Ücreti kesintilerini belirlemekte şeffaf olan bir Endeksleyici seçmelisiniz. Formül şu şekildedir: diff --git a/website/src/pages/tr/resources/subgraph-studio-faq.mdx b/website/src/pages/tr/resources/subgraph-studio-faq.mdx index dc5b2fb87f0a..051a8ea8ed57 100644 --- a/website/src/pages/tr/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/tr/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgrap Studio Hakkında SSS ## 1. Subgraph Stüdyo Nedir? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. API Anahtarını Nasıl Oluşturabilirim? @@ -12,20 +12,20 @@ Bir API oluşturmak için Subgraph Studio'ya gidin ve cüzdanınızı bağlayın ## 3. Birden Çok API Anahtarı Oluşturabilir miyim? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +Evet! Farklı projelerde kullanmak için birden fazla API anahtarı oluşturabilirsiniz. Daha fazla bilgi için [buraya](https://thegraph.com/studio/apikeys/) göz atın. ## 4. API Anahtarı için Domain'i Nasıl Kısıtlarım? Bir API Anahtarı oluşturduktan sonra, Güvenlik bölümünde belirli bir API Anahtarını sorgulayabilecek alanları tanımlayabilirsiniz. -## 5. 
Subgraph'ımı Başka Birine Devredebilir miyim? +## 5. Can I transfer my Subgraph to another owner? -Evet, Arbitrum One'da yayımlanmış subgraph'ler yeni bir cüzdana veya bir Multisig'e aktarılabilir. Bunu, subgraph'in ayrıntılar sayfasında 'Yayımla' düğmesinin yanındaki üç noktaya tıklayıp 'Sahipliği devret' seçeneğini seçerek yapabilirsiniz. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Subgraph'i devrettikten sonra onu Studio'da artık göremeyeceğinizi veya düzenleyemeyeceğinizi unutmayın. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Kullanmak İstediğim Subgraph'ın Geliştiricisi Değilsem, bu Subgraphlar için Sorgu URL'lerini Nasıl Bulabilirim? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Unutmayın, bir API anahtarı oluşturarak ağda yayımlanmış herhangi bir subgraph'i sorgulayabilirsiniz; bu durum, kendi subgraph'inizi oluşturmuş olsanız bile geçerlidir. Bu yeni API anahtarı üzerinden yapılan sorgular, ağdaki diğer sorgular gibi ücretlidir. 
+Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. Queries made via this new API key are paid queries, just like any other on the network. diff --git a/website/src/pages/tr/resources/tokenomics.mdx b/website/src/pages/tr/resources/tokenomics.mdx index ff09d144619c..80fa43ff88fa 100644 --- a/website/src/pages/tr/resources/tokenomics.mdx +++ b/website/src/pages/tr/resources/tokenomics.mdx @@ -1,103 +1,103 @@ --- -title: Tokenomics of The Graph Network +title: Graph Ağı'nın Token Ekonomisi sidebarTitle: Tokenomics -description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. +description: Graph Ağı, güçlü token ekonomisi ile teşvik edilmektedir. İşte The Graph'in yerel fayda token'ı GRT'nin çalışma şekli. --- ## Genel Bakış -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Ayrıntılar -The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network.
+The Graph'in modeli B2B2C modeline benzer, ancak model, GRT ödülleri karşılığında son kullanıcılara veri sağlamak için katılımcıların işbirliği içinde olduğu merkeziyetsiz bir ağ tarafından yönlendirilir. GRT, The Graph'in fayda token'ıdır. Ağ içindeki veri sağlayıcılar ve tüketiciler arasındaki etkileşimi koordine ve teşvik eder. -The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/subgraphs/billing/). +The Graph, blokzinciri verilerini daha erişilebilir hale getirmede hayati bir rol oynar ve bu verilerin paylaşımı için bir pazar yeri sağlar. The Graph'in ihtiyacın kadar öde modelini daha fazla öğrenmek için [ücretsiz ve büyüme planlarına](/subgraphs/billing/) göz atın. -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +- Mainnet'teki GRT Token Adresi: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +- Arbitrum One Üzerindeki GRT Token Adresi: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## The Roles of Network Participants +## Ağ Katılımcılarının Rolleri -There are four primary network participants: +Dört birincil ağ katılımcısı vardır: -1. Delegators - Delegate GRT to Indexers & secure the network +1. Delegatörler - Endeksleyicilere GRT delege et & ağı güvence altına al -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs -4. Indexers - Backbone of blockchain data +4. 
Endeksleyiciler - Blockchain verilerinin omurgaları -Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). +Fisherman'ler ve Arabulucular da diğer katkılarıyla ağın başarısı için ayrılmaz bir parça olup, diğer ana katılımcı rollerinin çalışmalarını destekler. Ağ rolleri hakkında daha fazla bilgi için, [bu makaleyi okuyun](https://thegraph.com/blog/the-graph-grt-token-economics/). -![Tokenomics diagram](/img/updated-tokenomics-image.png) +![Token ekonomisi diyagramı](/img/updated-tokenomics-image.png) -## Delegators (Passively earn GRT) +## Delegatörler (Pasif olarak GRT kazanırlar) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. +Örneğin, bir delegatör, %10 teklif veren bir endeksleyiciye 15.000 GRT istif ederse delegatör yılda ~1.500 GRT ödül alacaktır. -There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. 
If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. +Bir delegatör ağ üzerinde GRT istif ettiğinde %0,5'lik bir delegasyon vergisi alınır ve bu tutar yakılır. Eğer bir delegatör istif edilen GRT'sini geri çekmek isterse, 28 dönemlik çözülme süresini beklemek zorundadır. Her dönem 6.646 blok vardır. Bu da 28 dönemin yaklaşık 26 gün olduğu anlamına gelir. -If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. +Bunu okuyorsanız, [ağ katılımcıları sayfasına](https://thegraph.com/explorer/participants/indexers) giderek ve seçtiğiniz bir Endeksleyiciye GRT delege ederek hemen bir Delegatör olabilirsiniz. -## Curators (Earn GRT) +## Küratörler (GRT Kazanırlar) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. 
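The unbonding arithmetic above can be checked with a short sketch. The epoch length (6,646 blocks), the 28-epoch unbonding period, and the 0.5% burned delegation tax all come from the text; the ~12-second block time is an assumption used here to recover the stated ≈26-day figure.

```python
# Back-of-the-envelope check of the figures in the docs. Epoch length and
# the 28-epoch unbonding period are from the text; the 12-second block time
# is an assumption (not stated in the docs).

BLOCKS_PER_EPOCH = 6_646
UNBONDING_EPOCHS = 28
ASSUMED_BLOCK_TIME_S = 12  # assumption

def unbonding_days() -> float:
    total_blocks = BLOCKS_PER_EPOCH * UNBONDING_EPOCHS
    return total_blocks * ASSUMED_BLOCK_TIME_S / 86_400  # seconds per day

def delegation_tax_burned(amount_grt: float) -> float:
    """0.5% of every delegation is burned at delegation time."""
    return amount_grt * 0.005

print(round(unbonding_days()))        # ~26 days, matching the docs
print(delegation_tax_burned(15_000))  # 75.0 GRT burned on a 15k delegation
```

The tax helper also shows why the docs suggest estimating how many days of rewards it takes to earn back the 0.5% paid on a given delegation.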
+Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. -## Developers +## Geliştiriciler -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Subgraph oluşturma +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
-### Mevcut bir subgraph'ı sorgulama +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. -Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. +Subgraph'ler [GraphQL kullanılarak sorgulanır](/subgraphs/querying/introduction/), ve sorgu ücretleri [Subgraph Studio](https://thegraph.com/studio/) içinde GRT ile ödenir. Sorgu ücretleri, protokole katkıları doğrultusunda ağ katılımcılarına dağıtılır. -1% of the query fees paid to the network are burned. +Ağa ödenen sorgu ücretlerinin %1'i yakılır. -## Indexers (Earn GRT) +## Endeksleyiciler (GRT Kazanırlar) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. -Indexers can earn GRT rewards in two ways: +Endeksleyiciler, iki şekilde GRT ödülü kazanabilir: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. 
**Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. -In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. +Bir endeksleme düğümü çalıştırabilmek için Endeksleyiciler ağda 100.000 veya daha fazla GRT'yi kendilerine istiflemelidir. Endeksleyiciler, hizmet verdikleri sorgu miktarına orantılı miktarda GRT'yi kendilerine istiflemeye teşvik edilir. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. -The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. +Bir Endeksleyicinin aldığı ödül miktarı, Endeksleyicinin kendi istifleme miktarına, kabul edilen delegeye, hizmet kalitesine ve birçok başka faktöre bağlı olarak değişiklik gösterebilir. -## Token Supply: Burning & Issuance +## Token Arzı: Yakma & İhraç -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. 
These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. -![Total burned GRT](/img/total-burned-grt.jpeg) +![Toplam yakılan GRT](/img/total-burned-grt.jpeg) -In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. +Bu düzenli olarak gerçekleşen yakma işlemlerine ek olarak GRT token’ının, Endeksleyiciler tarafından yapılan kötü niyetli veya sorumsuz davranışları cezalandırmak için geliştirilmiş bir kesinti mekanizması da vardır. Bir Endeksleyici kesinti cezası alırsa, ilgili döneme ait endeksleme ödüllerinin %50’si yakılır (diğer yarısı fisherman'a gider) ve kendi istiflediği GRT’nin %2,5’i kesilir; bu miktarın yarısı yakılır. Bu mekanizma, Endeksleyicilerin ağın çıkarları doğrultusunda hareket etmelerini ve ağın güvenliği ile istikrarına katkıda bulunmalarını teşvik eder.
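The slashing arithmetic described above can be sketched as follows. The percentages (50% of the epoch's indexing rewards burned with the other half going to the fisherman; 2.5% of self-stake slashed, half of which is burned) come from the text; the example amounts are made up for illustration.

```python
# Sketch of the slashing penalty described in the docs. Percentages are
# from the text; the example amounts below are hypothetical.

def slash(epoch_rewards_grt: float, self_stake_grt: float) -> dict:
    rewards_burned = epoch_rewards_grt * 0.50   # half of epoch rewards burned
    fisherman_share = epoch_rewards_grt * 0.50  # other half goes to the fisherman
    stake_slashed = self_stake_grt * 0.025      # 2.5% of self-stake slashed
    stake_burned = stake_slashed / 2            # half of the slashed stake burned
    return {
        "rewards_burned": rewards_burned,
        "fisherman_share": fisherman_share,
        "stake_slashed": stake_slashed,
        "stake_burned": stake_burned,
    }

# An Indexer with the 100,000 GRT minimum self-stake and 1,000 GRT
# of epoch rewards:
print(slash(1_000, 100_000))
# {'rewards_burned': 500.0, 'fisherman_share': 500.0, 'stake_slashed': 2500.0, 'stake_burned': 1250.0}
```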
-## Improving the Protocol +## Protokolün Geliştirilmesi -The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). +The Graph Ağı sürekli gelişmektedir ve protokolün ekonomik tasarımında iyileştirmeler yapılmaktadır. Böylece tüm ağ katılımcıları için en iyi deneyim sağlanır. The Graph Konseyi protokol değişikliklerini denetler ve topluluk üyelerinin katılımı teşvik edilir. Protokol iyileştirmelerine [The Graph Forumu](https://forum.thegraph.com/) üzerinden dahil olun. diff --git a/website/src/pages/tr/sps/introduction.mdx b/website/src/pages/tr/sps/introduction.mdx index 5df89c8cdb4b..4df56659c9f8 100644 --- a/website/src/pages/tr/sps/introduction.mdx +++ b/website/src/pages/tr/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Giriş --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Genel Bakış -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Ayrıntılar There are two methods of enabling this technology: -1. 
**Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
### Ek Kaynaklar @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/tr/sps/sps-faq.mdx b/website/src/pages/tr/sps/sps-faq.mdx index 1f627d451a98..30401c2c76bd 100644 --- a/website/src/pages/tr/sps/sps-faq.mdx +++ b/website/src/pages/tr/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## Substreams destekli subgraphlar nelerdir? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## Substreams destekli subgraphlar'ın normal subgraphlar'dan farkı nedir? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## Substreams destekli subgraphlar kullanmanın avantajları nelerdir? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph.
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## Substreams'in faydaları nelerdir? @@ -35,7 +35,7 @@ Substreams kullanmanın birçok faydası vardır, bunlar şunlardır: - Yüksek performanslı indeksleme: Büyük ölçekli paralel işlemler sayesinde sıradan işlemlere göre onlarca kat daha hızlı indeksleme sağlar (BigQuery gibi). -- Her yere veri gönderme: Verilerinizi PostgreSQL, MongoDB, Kafka, subgraphlar, düz dosyalar, Google Sheets gibi herhangi bir yere gönderebilirsiniz. +- Sink anywhere: Sink your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programlanabilir: Kod kullanarak çıkarma işlemlerini özelleştirmek, dönüşüm zamanında toplamalar yapmak ve çıktınızı birden çok hedef için modelleyebilirsiniz. @@ -63,17 +63,17 @@ Firehose kullanmanın birçok faydası vardır, bunlar şunlardır: - Düz dosyalardan yararlanma: Blok zinciri verileri düz dosyalara çıkarılır, en ucuz ve en optimize hesaplama kaynağı kullanılır. -## Geliştiriciler, Substreams destekli subgraphlar ve Substreams hakkında daha fazla bilgiye nereden erişebilir geliştiriciler?
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. [En son sürüm Substreams Codegen aracı](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6), hiç kod yazmadan bir Substreams projesi başlatmanıza olanak tanır. ## Rust modüllerinin Substreams içindeki rolü nedir? -Rust modülleri, Subgraphs'teki AssemblyScript eşleştiricilerinin karşılığıdır. WASM'ye benzer şekilde derlenirler, ancak programlama modelleri paralel yürütme için olanak sağlar. Rust modülleri, ham blok zinciri verilerine uygulamak istediğiniz dönüşümleri ve birleştirmeleri tanımlar. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst Substream kullanırken, kompozisyon dönüşüm katmanında gerçekleşir ve önbelleğe alınmış modüllerin tekrar kullanılmasına olanak sağlar. -Örnek olarak, Alice bir merkeziyetsiz borsa fiyat modülü oluşturabilir, Bob ilgisini çeken bazı tokenler için bir hacim aggregator inşa edebilir ve Lisa dört bireysel merkeziyetsiz borsa fiyat modülünü bir araya getirerek bir fiyat oracle'ı oluşturabilir. 
Tek bir Substreams talebi, tüm bu bireylerin modüllerini bir araya getirir, birleştirir ve çok daha sofistike bir veri akışı sunar. Bu akış daha sonra bir subgraph'ı doldurmak ve tüketiciler tarafından sorgulanmak için kullanılabilir. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## Bir Substreams destekli Subgraph nasıl oluşturulur ve dağıtılır? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Substreams ve Substreams destekli subgraphlar ile ilgili örnekleri nerede bulubilirim? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -Substreams ve Substreams destekli subgraphlar ile ilgili örnekleri bulmak için [bu Github deposunu](https://github.com/pinax-network/awesome-substreams) ziyaret edebilirsiniz. +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## Substreams ve Substreams destekli subgraphlar, Graph Ağı için ne anlam ifade etmektedir? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? Bu entegrasyon, topluluk modüllerinden yararlanarak son derece yüksek performanslı indeksleme ve daha fazla birleştirme yapma avantajları sunar.
diff --git a/website/src/pages/tr/sps/triggers.mdx b/website/src/pages/tr/sps/triggers.mdx index ac56ac8755c9..648b624258e3 100644 --- a/website/src/pages/tr/sps/triggers.mdx +++ b/website/src/pages/tr/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Genel Bakış -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. 
Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Ek Kaynaklar diff --git a/website/src/pages/tr/sps/tutorial.mdx b/website/src/pages/tr/sps/tutorial.mdx index fd7c1acde4fe..1d5d17bc712f 100644 --- a/website/src/pages/tr/sps/tutorial.mdx +++ b/website/src/pages/tr/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Başlayalım @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. 
Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID to Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Sonuç -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial diff --git a/website/src/pages/tr/subgraphs/_meta-titles.json b/website/src/pages/tr/subgraphs/_meta-titles.json index 3fd405eed29a..1bfcfe87a948 100644 --- a/website/src/pages/tr/subgraphs/_meta-titles.json +++ b/website/src/pages/tr/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", + "querying": "Sorgulama", + "developing": "Geliştirme", "guides": "How-to Guides", - "best-practices": "Best Practices" + "best-practices": "En İyi Uygulamalar" } diff --git a/website/src/pages/tr/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/tr/subgraphs/best-practices/avoid-eth-calls.mdx index 5bea3a40a54e..b5bd9893b941 100644 --- a/website/src/pages/tr/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/tr/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Örnek Uygulamalar 4 - eth_calls Kullanımından Kaçınarak Endeksleme Hızını Artırma -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## Özet -`eth_calls`, bir subgraph'ten bir Ethereum düğümüne yapılan çağrılardır. Bu çağrıların veri döndürmesi ciddi miktarda zaman alır ve endekslemeyi yavaşlatır. Mümkünse akıllı sözleşmelerinizi ihtiyacınız olan tüm verileri yayacak şekilde tasarlayın. Böylece eth_calls kullanmanız gerekmez. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## `eth_calls` Kullanımından Kaçınmanın Önemi -Subgraph'ler, akıllı sözleşmelerden yayılan olay verilerini endekslemek için optimize edilmiştir. Subgraph'ler bir `eth_call` üzerinden gelen verileri de endeksleyebilir. Ancak, `eth_calls`'ın akıllı sözleşmelere harici çağrılar gerektirmesi nedeniyle, bu durum endekslemeyi önemli ölçüde yavaşlatabilir. 
Bu çağrıların yanıt verme süresi, subgraph'ten ziyade sorgulanan Ethereum düğümünün bağlantısına ve yanıt hızına bağlıdır. Subgraph'lerimizde `eth_calls`ı en aza indirerek veya ortadan kaldırarak, endeksleme hızımızı önemli ölçüde artırabiliriz. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing, as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### Bir `eth_call` Nasıl Görünür? -`eth_calls`, gerekli veriler olaylar aracılığıyla sağlanmadığında durumlarda genellikle gereklidir. Örneğin, bir subgraph'in ERC20 token'larının belirli bir havuza ait olup olmadığını belirlemesi gerektiğini, ancak sözleşmenin yalnızca temel bir `Transfer` olayı yaydığını ve ihtiyacımız olan verileri içeren bir olay yaymadığını varsayalım: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -Bu kod işlevsel olacaktır; ancak subgraph'imizin endekslenmesini yavaşlattığı için ideal değildir. +This is functional; however, it is not ideal, as it slows down our Subgraph’s indexing.
## `eth_calls`'ı Ortadan Kaldırma @@ -54,7 +54,7 @@ Bu kod işlevsel olacaktır; ancak subgraph'imizin endekslenmesini yavaşlattı event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -Bu güncellemeyle, subgraph harici çağrılara ihtiyaç duymadan gerekli verileri doğrudan endeksleyebilir: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,22 +96,22 @@ Sarıyla vurgulanan bölüm çağrı deklarasyonudur. İki nokta üst üste önc İşleyici, bu `eth_call` sonucuna bir önceki bölümde olduğu gibi sözleşmeye bağlanarak ve çağrıyı yaparak erişir. graph-node, deklare edilen `eth_calls` sonuçlarını bellekte önbelleğe alır. İşleyiciden yapılan çağrı, sonuçları gerçek bir RPC çağrısı yapıp almak yerine, önbellekten alır. -Not: Deklare edilen `eth_calls`, yalnızca specVersion >= 1.2.0 olan subgraph'lerde kullanılabilir. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Sonuç -Subgraph'lerinizde `eth_calls`'ı en aza indirerek endeksleme performansını önemli ölçüde artırabilirsiniz. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. ## Subgraph Örnek Uygulamalar 1-6 -1. [Subgraph Budama ile Sorgu Hızını İyileştirin](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [@derivedFrom Kullanarak Endeksleme ve Sorgu Yanıt Hızını Artırın](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Değişmez Varlıklar ve Bytes ID'ler Kullanarak Endeksleme ve Sorgu Performansını Artırın](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Endeksleme Hızını `eth_calls`'den Kaçınarak İyileştirin](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Zaman Serileri ve Bütünleştirme ile Basitleştirin ve Optimize Edin](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Hızlı Düzeltme Dağıtımı için Aşılama Kullanın](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/tr/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/tr/subgraphs/best-practices/derivedfrom.mdx index 9c2356837602..f9e1997ebbd8 100644 --- a/website/src/pages/tr/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/tr/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Örnek Uygulamalar 2 - @derivedFrom Kullanarak Endeksleme ve Sorgu Performansını İyileştirin -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## Özet -Şemanızdaki diziler, binlerce girişin ötesine geçtiğinde subgraph performansını ciddi şekilde yavaşlatabilir. Mümkünse `@derivedFrom` yönergesi kullanılmalıdır. Bu yaklaşım; büyük dizilerin oluşmasını önler, işleyicileri basitleştirir ve bireysel varlıkların boyutunu küçülterek endeksleme hızını ve sorgu performansını önemli ölçüde artırır. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. 
If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## `@derivedFrom` Yönergesi Nasıl Kullanılır @@ -15,7 +15,7 @@ sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom`, verimli bir şekilde birden çoka ilişkiler oluşturur. Böylece bir varlığın, ilgili alan temelinde birden fazla ilişkili varlıklarla dinamik olarak ilişkilendirilmesini sağlar. Bu yaklaşım, ilişkinin her iki tarafının da yinelenen verileri saklama gerekliliğini ortadan kaldırarak subgraph'i daha verimli hale getirir. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### `@derivedFrom`'ın Örnek Kullanımı @@ -60,30 +60,30 @@ type Comment @entity { Sadece `@derivedFrom` yönergesini ekleyerek bu şema, ilişkinin “Post” tarafında değil, yalnızca “Comments” tarafında “Comments” verilerini depolamış olur. Diziler bireysel satırlara yayıldığı için önemli ölçüde genişleyebilir. Bu durum, büyüme sınırsız olduğunda büyük boyutlara yol açabilir. -Bu yalnızca subgraph'imizi daha verimli hale getirmekle kalmayacak, aynı zamanda şu üç özelliği de kullanmamıza olanak tanıyacaktır: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. `Post`'u sorgulayarak tüm yorumlarını görebiliriz. 2. Herhangi bir `Comment`'te tersine arama yapabilir ve hangi gönderiden geldiğini sorgulayabiliriz. -3. 
[Türetilmiş Alan Yükleyicileri](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) kullanarak, sanal ilişkilerden gelen verilere doğrudan erişim ve manipülasyon yapma yeteneğini subgraph eşlemelerinde etkinleştirebiliriz. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Sonuç -`@derivedFrom` direktifini subgraph'lerde dinamik olarak büyüyen dizileri etkili bir şekilde yönetmek için kullanın. Bu direktif endeksleme verimliliğini ve veri alımını artırır. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. Büyük dizilerden kaçınma stratejilerinin daha ayrıntılı bir açıklaması için Kevin Jones'un blog yazısına göz atın: [Subgraph Geliştiriminde Örnek Uygulamalar: Büyük Dize Kümelerinden Kaçınmak](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). ## Subgraph Örnek Uygulamalar 1-6 -1. [Subgraph Budama ile Sorgu Hızını İyileştirin](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [@derivedFrom Kullanarak Endeksleme ve Sorgu Yanıt Hızını Artırın](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Değişmez Varlıklar ve Bytes ID'ler Kullanarak Endeksleme ve Sorgu Performansını Artırın](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Endeksleme Hızını `eth_calls`'den Kaçınarak İyileştirin](/subgraphs/best-practices/avoid-eth-calls/) +4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Zaman Serileri ve Bütünleştirme ile Basitleştirin ve Optimize Edin](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Hızlı Düzeltme Dağıtımı için Aşılama Kullanın](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/tr/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/tr/subgraphs/best-practices/grafting-hotfix.mdx index 6b2be11b98f4..1b661af00a2f 100644 --- a/website/src/pages/tr/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/tr/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Örnek Uygulama 6 - Acil Güncelleme Dağıtımı için Aşılama Kullanın -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## Özet -Aşılama, mevcut endekslenmiş verileri yeniden kullanarak yeni subgraph'ler oluşturmanıza ve dağıtmanıza olanak tanıyan güçlü bir özelliktir. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Genel Bakış -Bu özellik, kritik sorunlar için hızlı bir şekilde düzeltmelerin dağıtılmasını sağlar ve tüm subgraph'i baştan endeksleme ihtiyacını ortadan kaldırır. Aşılama, tarihsel verileri koruyarak kesinti sürelerini en aza indirir ve veri hizmetlerinde süreklilik sağlar. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Acil Güncellemelerde Aşılamanın Avantajları 1. 
**Hızlı Dağıtım** - - **Kesinti Süresini En Aza İndirme**: Bir subgraph kritik bir hata ile karşılaştığında ve endekslemeyi durdurduğunda, aşılama sayesinde yeniden endekslemeyi beklemeden hemen bir düzeltme dağıtabilirsiniz. - - **Hızlıca Kurtarma**: Yeni subgraph, son endekslenmiş bloktan devam eder ve veri hizmetlerinin kesintisiz olmasını sağlar. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Veri Koruma** - - **Tarihsel Verileri Yeniden Kullan**: Aşılama, temel subgraph'ten mevcut verileri kopyalar, böylece değerli tarihsel kayıtları kaybetmezsiniz. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Tutarlılık**: Tutarlı tarihsel verilere bağımlı uygulamalar için veri sürekliliğini sağlar. 3. **Verimlilik** @@ -31,38 +31,38 @@ Bu özellik, kritik sorunlar için hızlı bir şekilde düzeltmelerin dağıtı 1. **Aşılama Olmadan Başlangıç Dağıtımı** - - **Sıfırdan Başlamak**: Subgraph'inizin ilk halini her zaman aşılama olmadan dağıtarak, stabil ve beklendiği gibi çalışmasını sağlayın. - - **Detaylı Test**: Gelecekte acil güncelleme yapmayı en aza indirmek için subgraph'in performansını doğrulayın. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Aşılama ile Acil Güncelleme Yapmak** - **Sorunu Belirleme**: Kritik bir hata oluştuğunda, son başarılı endekslenmiş olayın blok numarasını belirleyin. - - **Yeni Bir Subgraph Oluşturma**: Acil güncellemeyi içeren yeni bir subgraph geliştirin. 
- - **Aşılamayı Yapılandırma**: Dağıtılamamış subgraph'ten belirlenen blok numarasına kadar olan verileri kopyalamak için aşılama kullanın. - - **Hızlı Dağıtım**: Hizmeti en kısa sürede yeniden başlatmak için aşılanmış subgraph'i ağda yayımlayın. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Acil Güncelleme Sonrasındaki Aksiyonlar** - - **Performansı Takip Etme**: Aşılanmış subgraph'in doğru şekilde endekslendiğinden ve acil güncellemenin sorunu çözdüğünden emin olun. - - **Aşılamadan Yeniden Yayımlama**: Stabil olduktan sonra, uzun vadede sürdürülebilirlik için, yeni bir subgraph versiyonunu aşılamadan dağıtın. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Not: Aşılamaya süresiz olarak güvenmek önerilmez. Bu durum gelecekteki güncellemeleri ve bakımı karmaşık hale getirebilir. - - **Referansları Güncelleyin**: Bütün hizmetleri ve uygulamaları yeni, aşılanmamış subgraph'i kullanacak şekilde yönlendirin. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Önemli Hususlar** - **Bloku Dikkatli Seçimek**: Veri kaybını önlemek için aşılama blok numarasını dikkatli seçin. - **İpucu**: Doğru işlenmiş son olayın blok numarasını kullanın. - - **Dağıtım ID'sini Kullanın**: Subgraph ID'si yerine temel subgraph'in Dağıtım ID'sine referans verdiğinizden emin olun. - - **Not**: Dağıtım Kimliği, belirli bir subgraph dağıtımı için benzersiz bir tanımlayıcıdır. - - **Özellik Deklarasyonu**: Subgraph manifestosunda özellikler altında aşılamayı deklare etmeyi unutmayın. 
+ - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Örnek: Aşılama ile Bir Acil Güncelleme Dağıtmak -Bir akıllı sözleşmeyi takip eden ve kritik bir hata nedeniyle endekslemeyi durdurmuş bir subgraph'e sahip olduğunuzu varsayalım. Bu durumda acil güncelleme dağıtmak için aşılamayı nasıl kullanabileceğiniz aşağıda açıklanmıştır. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Hata Veren Subgraph Manifestosu (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Bir akıllı sözleşmeyi takip eden ve kritik bir hata nedeniyle endekslemeyi d startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Bir akıllı sözleşmeyi takip eden ve kritik bir hata nedeniyle endekslemeyi d 2. 
**Yeni Aşılanmış Subgraph Manifestosu (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -100,10 +100,10 @@ Bir akıllı sözleşmeyi takip eden ve kritik bir hata nedeniyle endekslemeyi d source: address: '0xNewContractAddress' abi: Lock - startBlock: 6000001 # # Son endekslenmiş bloktan sonraki blok + startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Bir akıllı sözleşmeyi takip eden ve kritik bir hata nedeniyle endekslemeyi d features: - grafting graft: - base: QmBaseDeploymentID # Başarısız subgraph'in Dağıtım Kimliği - block: 6000000 # Başarıyla endekslenmiş son blok + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph + block: 6000000 # Last successfully indexed block ``` **Açıklama:** -- **Veri Kaynağı Güncellemesi**: Yeni subgraph, akıllı sözleşmenin düzeltilmiş bir versiyonu olabilecek 0xNewContractAddress adresine işaret etmektedir. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Başlangıç Bloğu**: Hatanın tekrar işlenmesini önlemek için başarıyla endekslenmiş son bloktan bir blok sonraya ayarlayın. - **Aşılama Yapılandırması**: - - **base**: Başarısız olan subgraph'in Dağıtım Kimliği. + - **base**: Deployment ID of the failed Subgraph. - **block**: Aşılama işleminin başlaması gereken blok numarası. 3. **Dağıtım Adımları** @@ -135,10 +135,10 @@ Bir akıllı sözleşmeyi takip eden ve kritik bir hata nedeniyle endekslemeyi d - **Manifestoyu Ayarlayın**: Yukarıda gösterildiği gibi, `subgraph.yaml` dosyasını aşılama yapılandırmalarıyla güncelleyin. - **Subgraph'i Dağıtın**: - Graph CLI ile kimlik doğrulaması yapın. - - `graph deploy` komutunu kullanarak yeni subgraph'i dağıtın. + - Deploy the new Subgraph using `graph deploy`. 4. 
**Dağıtım Sonrası** - - **Endekslemeyi Doğrulama**: Subgraph'in aşılanma noktasından itibaren doğru endekslendiğinden emin olun. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Veriyi Takip Etme**: Yeni verilerin yakalandığından ve acil güncellemenin etkili olduğundan emin olun. - **Yeniden Yayımlama İçin Planlama**: Uzun süreli istikrar için aşılama yapılmamış sürümün dağıtımını planlayın. @@ -146,9 +146,9 @@ Bir akıllı sözleşmeyi takip eden ve kritik bir hata nedeniyle endekslemeyi d Aşılama, acil güncellemeleri hızlı bir şekilde dağıtmayı sağlayan güçlü bir araçtır. Fakat veri bütünlüğünü korumak ve ideal performansı sağlamak için aşılanma kullanımından kaçınılması gereken belirli durumlar vardır. -- **Uyumsuz Şema Değişiklikleri**: Acil güncelleme mevcut alanların türünü değiştirmeyi veya şemanızdan alanları kaldırmayı gerektiriyorsa, bu durumda aşılama uygun değildir. Aşılama, yeni subgraph'in şemasının temel subgraph'in şemasıyla uyumlu olmasını bekler. Uyumsuz değişiklikler, mevcut verilerin yeni şemayla uyumlu olmaması nedeniyle veri tutarsızlıklarına ve hatalara neden olabilir. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. - **Önemli Eşlem Mantığı Revizyonları**: Acil güncelleme olayların işlenme şeklinin değiştirilmesi veya işleyici fonksiyonlarının değiştirilmesi gibi eşlem mantığınızda önemli değişiklikleri içeriyorsa, aşılama doğru çalışmayabilir. Buradaki yeni mantık, eski mantık altında işlenmiş verilerle uyumlu olmayabilir. Bu da hatalı verilere veya başarısız endekslemelere yol açabilir. 
-- **The Graph Ağına Dağıtımlar**: Aşılama, The Graph'in merkeziyetsiz ağı (ana ağ) için tasarlanmış subgraph'ler için önerilmez. Aşılama endekslemeyi karmaşıklaştırabilir ve tüm endeksleyiciler tarafından tamamen desteklenmeyebilir. Bu yüzden beklenmedik davranışlara veya artan maliyetlere neden olabilir. Ana ağ dağıtımları için, tam uyumluluk ve güvenilirliği sağlamak amacıyla subgraph'i en baştan, tekrar endekslemek daha güvenlidir. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Yönetimi @@ -157,31 +157,31 @@ Aşılama, acil güncellemeleri hızlı bir şekilde dağıtmayı sağlayan gü ## Sonuç -Aşılama, subgraph geliştirme sürecinde acil düzeltmeleri dağıtmak için etkili bir stratejidir. Bu strateji, aşağıdakileri yapmanızı sağlar: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - Yeniden endeksleme yapmadan kritik hatalardan **Hızla Kurtulun**. - Uygulamalar ve kullanıcılar için sürekliliği koruyarak **Tarihsel Verileri Koruyun**. - Kritik düzeltmeler sırasında kesinti sürelerini en aza indirerek **Hizmet Erişilebilirliğini Sağlayın**. -Ancak, aşılamayı tedbirli bir şekilde kullanmak ve riskleri azaltmak için örnek uygulamaları takip etmek önemlidir. Subgraph'inizi acil düzeltmeyle stabilize ettikten sonra, uzun vadede çalışmasını sağlamak için aşılama kullanılmayan bir sürüm dağıtmayı planlayın. +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
## Ek Kaynaklar - **[Aşılama Dokümantasyonu](/subgraphs/cookbook/grafting/)**: Aşılama ile Bir Sözleşmeyi Değiştirin ve Geçmişini Koruyun - **[Dağıtım Kimliklerini Anlamak](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Dağıtım Kimliği ile Subgraph Kimliği arasındaki farkı öğrenin. -Subgraph geliştirme iş akışınıza aşılamayı dahil ederek, sorunlara hızla yanıt verme yeteneğinizi artırabilir ve veri hizmetlerinizin sağlam ve güvenilir kalmasını sağlayabilirsiniz. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Örnek Uygulamalar 1-6 -1. [Subgraph Budama ile Sorgu Hızını İyileştirin](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [@derivedFrom Kullanarak Endeksleme ve Sorgu Yanıt Hızını Artırın](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Değişmez Varlıklar ve Bytes ID'ler Kullanarak Endeksleme ve Sorgu Performansını Artırın](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Endeksleme Hızını `eth_calls`'den Kaçınarak İyileştirin](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Zaman Serileri ve Bütünleştirme ile Basitleştirin ve Optimize Edin](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Hızlı Düzeltme Dağıtımı için Aşılama Kullanın](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/tr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/tr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 36eb9a1d99bf..3b15e3f63346 100644 --- a/website/src/pages/tr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/tr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Örnek Uygulama 3 - Değişmez Varlıklar ve Byte'ları Kimlik Olarak Kullanarak Endeksleme ve Sorgu Performansını İyileştirin -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## Özet @@ -50,12 +50,12 @@ ID'ler için String ve Int8 gibi başka türler kullanmak mümkün olsa da, tüm ### ID Olarak Bytes Kullanmama Nedenleri 1. Eğer varlık ID'leri otomatik artırılan sayısal ID'ler veya okunabilir dizeler gibi insan tarafından okunabilir olmalıysa, ID için Bytes kullanılmamalıdır. -2. Bir subgraph'in verilerini Bytes'ı ID olarak kullanmayan başka bir veri modeliyle entegre ediyorsanız, ID için Bayt kullanılmamalıdır. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Endeksleme ve sorgulama gibi performans iyileştirmeleri istenmiyorsa. ### ID Olarak Bytes'ı Başka Bir Özellikle Birleştirme -Birçok subgraph'te, bir olayın iki özelliğini tek bir ID'de birleştirmek için dizi birleştirme kullanmak yaygın bir uygulamadır: örneğin, `event.transaction.hash.toHex() + "-" + event.logIndex.toString()` gibi. Ancak, bu bir dizi döndürdüğü için, subgraph endeksleme ve sorgulama performansını önemli ölçüde engeller. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. 
However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Bunun yerine, olay özelliklerini birleştirmek için `concatI32()` metodunu kullanmalıyız. Bu strateji, çok daha iyi çalışan bir `Bytes` ID ile sonuçlanır. @@ -172,20 +172,20 @@ Sorgu Yanıtı: ## Sonuç -Hem Değişmez Varlıklar hem de ID olarak, Bytes kullanmanın subgraph verimliliğini önemli ölçüde artırdığı gösterilmiştir. Özellikle, testlerde sorgu performansında %28'e kadar artış ve endeksleme hızlarında %48'e kadar hızlanma göze çarpmaktadır. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Değişmez Varlıkları ve ID olarak Bytes kullanmak hakkında daha fazla bilgiyi Edge & Node'da Yazılım Mühendisi olan David Lutterkort'un bu blog yazısında bulabilirsiniz: [İki Basit Subgraph Performans İyileştirmesi](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). ## Subgraph Örnek Uygulamalar 1-6 -1. [Subgraph Budama ile Sorgu Hızını İyileştirin](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [@derivedFrom Kullanarak Endeksleme ve Sorgu Yanıt Hızını Artırın](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Değişmez Varlıklar ve Bytes ID'ler Kullanarak Endeksleme ve Sorgu Performansını Artırın](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Endeksleme Hızını `eth_calls`'den Kaçınarak İyileştirin](/subgraphs/best-practices/avoid-eth-calls/) +4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Zaman Serileri ve Bütünleştirme ile Basitleştirin ve Optimize Edin](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Hızlı Düzeltme Dağıtımı için Aşılama Kullanın](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/tr/subgraphs/best-practices/pruning.mdx b/website/src/pages/tr/subgraphs/best-practices/pruning.mdx index 690803e92533..3c002a7fdb97 100644 --- a/website/src/pages/tr/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/tr/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Örnek Uygulama 1 - Subgraph Budama ile Sorgu Hızını Artırın -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## Özet -[Budama](/developing/creating-a-subgraph/#prune), bir subgraph'in veritabanından arşivlenmiş varlıkları istenilen bir bloka kadar kaldırır. Bir subgraph'in veritabanından kullanılmayan varlıkların kaldırılması, subgraph'in sorgu performansını genellikle kayda değer ölçüde artırır. `indexerHints` kullanmak, bir subgraph'i budamayı kolaylaştırır. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## `indexerHints` ile Bir Subgraph'i Nasıl Budayabilirsiniz @@ -13,14 +13,14 @@ Manifestoya `indexerHints` adlı bir bölüm ekleyin. `indexerHints` üç `prune` (budama) seçeneğine sahiptir: -- `prune: auto`: Endeksleyici tarafından belirlenen asgari gerekli geçmişi koruyarak sorgu performansını optimize eder. 
Bu genellikle önerilen ayardır ve `graph-cli` >= 0.66.0 tarafından oluşturulan tüm subgraph'ler için varsayılandır. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <Number of Blocks to Retain>`: Korunacak olan geçmiş blokların sayısı için özel bir limit belirler. - `prune: never`: Geçmiş verilerin budanması yoktur; tüm geçmişi korur. `indexerHints` bölümü yoksa `prune: never` varsayılandır. [Zaman Yolculuğu Sorguları](/subgraphs/querying/graphql-api/#time-travel-queries) isteniyorsa `prune: never` seçilmelidir. -`subgraph.yaml` dosyamızı güncelleyerek subgraph'lerimize `indexerHints` ekleyebiliriz: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,18 +39,18 @@ dataSources: ## Sonuç -`indexerHints` kullanarak budama, subgraph geliştirmesi için örnek uygulamadır ve sorgu performansında önemli iyileştirmeler sunar. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Örnek Uygulamalar 1-6 -1. [Subgraph Budama ile Sorgu Hızını İyileştirin](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [@derivedFrom Kullanarak Endeksleme ve Sorgu Yanıt Hızını Artırın](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Değişmez Varlıklar ve Bytes ID'ler Kullanarak Endeksleme ve Sorgu Performansını Artırın](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4.
[Endeksleme Hızını `eth_calls`'den Kaçınarak İyileştirin](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Zaman Serileri ve Bütünleştirme ile Basitleştirin ve Optimize Edin](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Hızlı Düzeltme Dağıtımı için Aşılama Kullanın](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/tr/subgraphs/best-practices/timeseries.mdx b/website/src/pages/tr/subgraphs/best-practices/timeseries.mdx index 78994cb6f089..eb75c2ddf46b 100644 --- a/website/src/pages/tr/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/tr/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Örnek Uygulama 5 - Zaman serileri ve Toplulaştırma ile Basitleştirip Optimize Edin -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Zaman Serileri ve Toplulaştırmalar --- ## Özet -Subgraph'lerdeki yeni zaman serisi ve toplulaştırma özelliğini kullanmak, hem endeksleme hızını hem de sorgu performansını önemli ölçüde artırabilir. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Genel Bakış @@ -36,6 +36,10 @@ Zaman serisi ve toplulaştırmalar, toplulaştırma hesaplamalarını veritaban ## Zaman Serisi ve Toplulaştırmaları Nasıl Uygulanır +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Zaman Serisi Varlıklarının Tanımlanması Bir zaman serisi varlığı, zaman içinde toplanan ham veri noktalarını temsil eder. `@entity(timeseries: true)` notasyonu ile tanımlanır. 
Ana gereksinimler: @@ -51,7 +55,7 @@ Bir zaman serisi varlığı, zaman içinde toplanan ham veri noktalarını temsi type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Bir toplulaştırma varlığı, bir zaman serisi kaynağından toplulaştırılm type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -Bu örnekte Stats, Data'dan fiyat alanını saatlik ve günlük aralıklar üzerinden toplulaştırarak toplamı hesaplar. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Toplulaştırılmış Verileri Sorgulama @@ -172,24 +176,24 @@ Desteklenen operatörler ve fonksiyonlar arasında temel aritmetik (+, -, \_, /) ### Sonuç -Zaman serileri ve toplulaştırmaların subgraph'lerde uygulanması, zamana dayalı veri ile çalışan projeler için örnek bir uygulamadır. Bu yaklaşım: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Performansı Artırır: Veri işleme yükünü azaltarak endeksleme ve sorgulamayı hızlandırır. - Geliştirmeyi Basitleştirir: Eşlemelerde manuel toplulaştırma mantığı ihtiyacını ortadan kaldırır. - Verimli Ölçeklenir: Hız veya yanıt verme süresinden ödün vermeden büyük hacimli verileri işler. -Bu modeli benimseyerek, geliştiriciler daha verimli ve ölçeklenebilir subgraph'ler oluşturabilir ve son kullanıcılara daha hızlı ve güvenilir veri erişimi sağlayabilirler. Zaman serileri ve toplulaştırmaların uygulanması hakkında daha fazla bilgi için [Zaman Serileri ve Toplulaştırmalar](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) belgesini inceleyin. Bu özelliği subgraph'lerinizde denemeyi de düşünebilirsiniz. 
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Örnek Uygulamalar 1-6 -1. [Subgraph Budama ile Sorgu Hızını İyileştirin](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [@derivedFrom Kullanarak Endeksleme ve Sorgu Yanıt Hızını Artırın](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Değişmez Varlıklar ve Bytes ID'ler Kullanarak Endeksleme ve Sorgu Performansını Artırın](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Endeksleme Hızını `eth_calls`'den Kaçınarak İyileştirin](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Zaman Serileri ve Bütünleştirme ile Basitleştirin ve Optimize Edin](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Hızlı Düzeltme Dağıtımı için Aşılama Kullanın](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/tr/subgraphs/billing.mdx b/website/src/pages/tr/subgraphs/billing.mdx index a86c1adbb755..aed4d87af0ac 100644 --- a/website/src/pages/tr/subgraphs/billing.mdx +++ b/website/src/pages/tr/subgraphs/billing.mdx @@ -2,20 +2,22 @@ title: Faturalandırma --- -## Querying Plans +## Sorgulama Planları -The Graph Ağı'nda subgraph'leri sorgulamak için kullanabileceğiniz iki plan bulunmaktadır. +The Graph Ağı üzerinde Subgraph'leri sorgularken kullanılabilecek iki ödeme planı vardır. -- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. +- **Ücretsiz Plan**: Ücretsiz Plan, Subgraph Studio test ortamına tam erişim ve aylık 100.000 ücretsiz sorgu hakkı içerir. Bu plan, dapp’lerini ölçeklendirmeden önce The Graph’i denemek isteyen meraklı kişiler, hackathon katılımcıları ve yan proje sahipleri için tasarlanmıştır. -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **Büyüme Planı**: Büyüme Planı, Ücretsiz Plan’daki her şeyi içerir; ancak aylık 100.000 sorgudan sonraki tüm sorgular için GRT ya da kredi kartı ile ödeme yapılması gerekir. Growth Plan, çeşitli kullanım senaryolarında faaliyet gösteren dapp’lerini hayata geçirmiş ekiplerin ihtiyaçlarını karşılayabilecek esnekliktedir. + +Learn more about pricing [here](https://thegraph.com/studio-pricing/). ## Kredi kartı ile sorgu ödemeleri - Kredi/banka kartları ile faturalandırmayı ayarlamak için, kullanıcıların Subgraph Studio'ya (https://thegraph.com/studio/) erişmeleri gerekir - 1. 
Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). + 1. Subgraph Studio Faturalandırma sayfasına gitmek için  [buraya tıklayın](https://thegraph.com/studio/subgraphs/billing/). 2. Sayfanın sağ üst köşesindeki "Cüzdanı Bağla" düğmesine tıklayın. Cüzdan seçim sayfasına yönlendirileceksiniz. Cüzdanınızı seçin ve "Bağlan" a tıklayın. 3. Ücretsiz Plan'dan yükseltme yapıyorsanız "Planı Yükselt" seçeneğini, daha önce faturalandırma bakiyenize GRT eklediyseniz "Planı Yönet" seçeneğini seçin. Sonrasında, sorgu sayısını tahmin ederek bir fiyat tahmini alabilirsiniz, ancak bu zorunlu bir adım değildir. 4. Kredi kartı ödemesini seçmek için, ödeme yöntemi olarak "Kredi kartı" seçeneğini seçin ve kredi kartı bilgilerinizi doldurun. Daha önce Stripe kullananlar, bilgilerini otomatik doldurmak için Link özelliğini kullanabilirler. @@ -45,17 +47,17 @@ Sorgu ödemelerini yapmak için Arbitrum üzerinde GRT'ye ihtiyacınız var. İ - Alternatif olarak, GRT'yi doğrudan Arbitrum üzerinde merkeziyetsiz bir borsa aracılığıyla edinebilirsiniz. -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt). +> Bu bölüm, cüzdanınızda halihazırda GRT bulunduğu ve Arbitrum ağında olduğunuz varsayılarak yazılmıştır. Eğer GRT’niz yoksa, [buradan](#getting-grt) nasıl GRT edinebileceğinizi öğrenebilirsiniz. GRT'yi köprüledikten sonra faturalandırma bakiyenize ekleyebilirsiniz. ### Bir cüzdan kullanarak GRT ekleme -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Subgraph Studio Faturalandırma sayfasına gitmek için  [buraya tıklayın](https://thegraph.com/studio/subgraphs/billing/). 2. Sayfanın sağ üst köşesindeki "Cüzdanı Bağla" düğmesine tıklayın. Cüzdan seçim sayfasına yönlendirileceksiniz. Cüzdanınızı seçin ve "Bağlan" a tıklayın. 3. Sağ üst köşedeki 'Yönet' düğmesine tıklayın. 
İlk kez kullanıyorsanız 'Büyüme Planına Yükselt' seçeneğini göreceksiniz. Daha önce işlem yaptıysanız 'Cüzdandan Yatır' seçeneğine tıklayın. 4. Kaydırıcıyı kullanarak aylık olarak yapmayı beklediğiniz sorgu sayısını tahmin edin. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Kullanabileceğiniz sorgu sayısı hakkında öneriler için **Sıkça Sorulan Sorular** sayfamıza göz atın. 5. "Kriptopara" seçeneğini seçin. Şu anda The Graph Ağı'nda kabul edilen tek kriptopara GRT'dir. 6. Peşin ödeme yapmak istediğiniz ay sayısını seçin. - Peşin ödeme yapmak, gelecekte kullanım zorunluluğu getirmez. Yalnızca kullandığınız kadar ücretlendirilirsiniz ve bakiyenizi istediğiniz zaman çekebilirsiniz. @@ -68,7 +70,7 @@ GRT'yi köprüledikten sonra faturalandırma bakiyenize ekleyebilirsiniz. ### Bir cüzdan kullanarak GRT çekme -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Subgraph Studio Faturalandırma sayfasına gitmek için  [buraya tıklayın](https://thegraph.com/studio/subgraphs/billing/). 2. Sayfanın sağ üst köşesindeki "Cüzdanı Bağla" düğmesine tıklayın. Cüzdanınızı seçin ve "Bağlan" düğmesine tıklayın. 3. Sayfanın sağ üst köşesindeki "Yönet" düğmesine tıklayın. "GRT Çek" seçeneğini seçin. Bir yan panel açılacaktır. 4. Çekmek istediğiniz GRT miktarını girin. @@ -77,11 +79,11 @@ GRT'yi köprüledikten sonra faturalandırma bakiyenize ekleyebilirsiniz. ### Multisig cüzdanı kullanarak GRT ekleme -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. +1. 
Subgraph Studio Faturalandırma sayfasına gitmek için [buraya tıklayın](https://thegraph.com/studio/subgraphs/billing/). +2. Sayfanın sağ üst köşesindeki "Cüzdanı Bağla" düğmesine tıklayın. Cüzdanınızı seçin ve "Bağlan" düğmesine tıklayın. Eğer [Gnosis-Safe](https://gnosis-safe.io/) kullanıyorsanız, multisig cüzdanınızı ve imza cüzdanınızı da bağlayabileceksiniz. Ardından ilgili mesajı imzalayın. Bu işlem gaz ücreti gerektirmeyecektir. 3. Sağ üst köşedeki 'Yönet' düğmesine tıklayın. İlk kez kullanıyorsanız 'Büyüme Planına Yükselt' seçeneğini göreceksiniz. Daha önce işlem yaptıysanız 'Cüzdandan Yatır' seçeneğine tıklayın. 4. Kaydırıcıyı kullanarak aylık olarak yapmayı beklediğiniz sorgu sayısını tahmin edin. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Kullanabileceğiniz sorgu sayısı hakkında öneriler için **Sıkça Sorulan Sorular** sayfamıza göz atın. 5. "Kriptopara" seçeneğini seçin. Şu anda The Graph Ağı'nda kabul edilen tek kriptopara GRT'dir. 6. Peşin ödeme yapmak istediğiniz ay sayısını seçin. - Peşin ödeme yapmak, gelecekte kullanım zorunluluğu getirmez. Yalnızca kullandığınız kadar ücretlendirilirsiniz ve bakiyenizi istediğiniz zaman çekebilirsiniz. @@ -99,7 +101,7 @@ Bu bölüm, sorgu ücretlerini ödemek için nasıl GRT edinebileceğinizi anlat Bu kılavuz, Coinbase üzerinden GRT satın alma işlemini adım adım açıklayacaktır. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. [Coinbase](https://www.coinbase.com/)'e gidin ve bir hesap oluşturun. 2. Bir hesap oluşturduktan sonra, kimliğinizi KYC (Müşterini Tanı) olarak bilinen bir süreçle doğrulamanız gerekecek. KYC, tüm merkezi veya emanetçi kripto borsaları için standart bir prosedürdür. 3. Kimliğinizi doğruladıktan sonra GRT satın alabilirsiniz. Bunu sayfanın sağ üst köşesindeki "Al/Sat" düğmesine tıklayarak yapabilirsiniz. 4. Satın almak istediğiniz para birimini seçin. GRT'yi seçin. 
@@ -107,19 +109,19 @@ Bu kılavuz, Coinbase üzerinden GRT satın alma işlemini adım adım açıklay 6. Satın almak istediğiniz GRT miktarını seçin. 7. Satın alımınızı gözden geçirin. Satın alımınızı gözden geçirin ve "GRT Satın Al" düğmesine tıklayın. 8. Satın alımınızı onaylayın. Satın alımınızı onaylayın, böylece başarılı bir şekilde GRT satın almış olacaksınız. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). +9. GRT'yi hesabınızdan [MetaMask](https://metamask.io/) gibi bir cüzdana transfer edebilirsiniz. - GRT'yi cüzdanınıza transfer etmek için, sayfanın sağ üst köşesindeki "Hesaplar" düğmesine tıklayın. - GRT hesabının yanındaki "Gönder" düğmesine tıklayın. - Göndermek istediğiniz GRT miktarını ve göndermek istediğiniz cüzdan adresini girin. - "Devam" düğmesine tıklayın ve işleminizi onaylayın. -Lütfen unutmayın, yüksek tutarda satın alım durumunda Coinbase, tam tutarı bir cüzdana transfer etmeden önce 7-10 gün beklemenizi isteyebilir. -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Coinbase'de GRT edinmekle alakalı daha fazla bilgiyi [buradan](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency) öğrenebilirsiniz. ### Binance Bu kılavuz, Binance üzerinden GRT satın alma işlemini adım adım açıklayacaktır. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. [Binance](https://www.binance.com/en)'e gidin ve bir hesap oluşturun. 2. Bir hesap oluşturduktan sonra, kimliğinizi KYC (Müşterini Tanı) olarak bilinen bir süreçle doğrulamanız gerekecek. KYC, tüm merkezi veya emanetçi kripto borsaları için standart bir prosedürdür. 3. Kimliğinizi doğruladıktan sonra GRT satın alabilirsiniz. Bunu, ana sayfa banner'ındaki "Şimdi Satın Al" düğmesine tıklayarak yapabilirsiniz. 4. 
Satın almak istediğiniz para birimini seçebileceğiniz bir sayfaya yönlendirileceksiniz. GRT'yi seçin. @@ -127,27 +129,27 @@ Bu kılavuz, Binance üzerinden GRT satın alma işlemini adım adım açıklaya 6. Satın almak istediğiniz GRT miktarını seçin. 7. Satın alımınızı gözden geçirin ve "GRT Satın Al" düğmesine tıklayın. 8. Satın alımınızı onaylayın, ardından GRT'nizi Binance Spot Cüzdanınızda görüntüleyebilirsiniz. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. +9. GRT'yi hesabınızdan [MetaMask](https://metamask.io/) gibi bir cüzdana çekebilirsiniz. + - [Cüzdanınıza GRT çekmek](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) için, cüzdan adresinizi çekim beyaz listesine ekleyin. - "Cüzdan" düğmesine tıklayın, ardından "Çekim" seçeneğine tıklayın ve GRT'yi seçin. - Göndermek istediğiniz GRT miktarını ve beyaz listeye eklenmiş cüzdan adresini girin. - "Devam" düğmesine tıklayın ve işleminizi onaylayın. -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Binance'de GRT edinmekle alakalı daha fazla bilgiyi [buradan](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582) öğrenebilirsiniz. ### Uniswap Bu kılavuz, Uniswap üzerinden GRT satın alma işlemini adım adım açıklayacaktır. -1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. +1. [Uniswap](https://app.uniswap.org/swap?chain=arbitrum)'a gidin ve cüzdanınızı bağlayın. 2. Baz para birimi olarak kullanılacak token'ı seçin. ETH'yi seçin. 3. 
Satın almak istediğiniz token'ı seçin. GRT'yi seçin. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) + - Doğru token'ı takas ettiğinizden emin olun. Arbitrum One üzerindeki GRT akıllı sözleşme adresi budur: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) 4. Takas etmek istediğiniz ETH miktarını girin. 5. "Swap" butonuna tıklayın. 6. İşlemi cüzdanınızda onaylayın ve işlemin tamamlanmasını bekleyin. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Uniswap'ta GRT edinmekle alakalı daha fazla bilgiyi [buradan](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-) öğrenebilirsiniz. ## Ether Edinme @@ -157,7 +159,7 @@ Bu bölüm, işlem ücretlerini veya gas maliyetlerini ödemek için nasıl Ethe Bu kılavuz, Coinbase üzerinden ETH satın alma işlemini adım adım açıklayacaktır. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. [Coinbase](https://www.coinbase.com/)'e gidin ve bir hesap oluşturun. 2. Bir hesap oluşturduktan sonra, kimliğinizi KYC (Müşterini Tanı) olarak bilinen bir süreçle doğrulamanız gerekecek. Bu, tüm merkezi veya emanetçi kripto borsaları için standart bir prosedürdür. 3. Kimliğinizi doğruladıktan sonra, sayfanın sağ üst köşesindeki "Al/Sat" düğmesine tıklayarak ETH satın alın. 4. Satın almak istediğiniz para birimini seçin. ETH seçin. @@ -165,20 +167,20 @@ Bu kılavuz, Coinbase üzerinden ETH satın alma işlemini adım adım açıklay 6. Satın almak istediğiniz ETH miktarını girin. 7. Satın alımınızı gözden geçirin ve "ETH Satın Al" düğmesine tıklayın. 8. Satın alımınızı onaylayın, böylece ETH'yi başarıyla satın almış olacaksınız. -9. 
You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/). +9. ETH'yi Coinbase hesabınızdan [MetaMask](https://metamask.io/) gibi bir cüzdana transfer edebilirsiniz. - ETH'yi cüzdanınıza transfer etmek için, sayfanın sağ üst köşesindeki "Hesaplar" düğmesine tıklayın. - ETH hesabının yanındaki "Gönder" düğmesine tıklayın. - Göndermek istediğiniz ETH miktarını ve göndermek istediğiniz cüzdan adresini girin. - Gönderdiğiniz adresin Arbitrum One üzerindeki Ethereum cüzdan adresiniz olduğundan emin olun. - "Devam" düğmesine tıklayın ve işleminizi onaylayın. -You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Coinbase'de ETH edinmekle alakalı daha fazla bilgiyi [buradan](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency) öğrenebilirsiniz. ### Binance Bu, Binance'de ETH satın almak için adım adım bir rehberdir. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. [Binance](https://www.binance.com/en)'e gidin ve bir hesap oluşturun. 2. Bir hesap oluşturduktan sonra, kimliğinizi KYC (Müşterini Tanı) olarak bilinen bir süreçle doğrulamanız gerekecek. Bu, tüm merkezi veya emanetçi kripto borsaları için standart bir prosedürdür. 3. Kimliğinizi doğruladıktan sonra, ana sayfa afişindeki "Şimdi Satın Al" düğmesine tıklayarak ETH satın alın. 4. Satın almak istediğiniz para birimini seçin. ETH seçin. @@ -186,14 +188,14 @@ Bu, Binance'de ETH satın almak için adım adım bir rehberdir. 6. Satın almak istediğiniz ETH miktarını girin. 7. Satın alımınızı gözden geçirin ve "ETH Satın Al" düğmesine tıklayın. 8. Satın alımınızı onaylayın ve ETH'nizi Binance Spot Cüzdanınızda görüceksiniz. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). +9. 
ETH'yi hesabınızdan [MetaMask](https://metamask.io/) gibi bir cüzdana çekebilirsiniz. - ETH'yi cüzdanınıza çekmek için cüzdan adresinizi çekim beyaz listesine ekleyin. Daha fazla bilgi için buraya tıklayın. - "Cüzdan" düğmesine tıklayın, para çekme seçeneğine tıklayın ve ETH'yi seçin. - Göndermek istediğiniz ETH miktarını ve göndermek istediğiniz güvenilir adresler listesindeki cüzdan adresini girin. - Gönderdiğiniz adresin Arbitrum One üzerindeki Ethereum cüzdan adresiniz olduğundan emin olun. - "Devam" düğmesine tıklayın ve işleminizi onaylayın. -You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Binance'de ETH edinmekle alakalı daha fazla bilgiyi [buradan](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582) öğrenebilirsiniz. ## Faturalandırma Hakkında SSS @@ -203,11 +205,11 @@ Kaç sorguya ihtiyacınız olacağını önceden bilmeniz gerekmez. Yalnızca ku Sorgu sayısını fazla tahmin etmenizi öneririz, böylece bakiyenizi sık sık doldurmak zorunda kalmazsınız. Küçük ve orta ölçekli uygulamalar için iyi bir başlangıç tahmini, aylık 1M-2M sorgu ile başlamak ve ilk haftalarda kullanımı yakından izlemektir. Daha büyük uygulamalar için iyi bir tahmin, sitenizin günlük ziyaret sayısını, en aktif sayfanızın açılışta yaptığı sorgu sayısı ile çarpmaktır. -Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage. +Elbette, hem yeni hem de mevcut kullanıcılar, beklenen kullanım hakkında daha fazla bilgi almak için Edge & Node'un İş Geliştirme (BD) ekibiyle iletişime geçebilirler. ### Faturalandırma bakiyemden GRT çekebilir miyim? -Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. 
The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). +Evet, faturalandırma bakiyenizde sorgularda kullanılmamış olan GRT'yi her zaman çekebilirsiniz. Faturalandırma sözleşmesi yalnızca GRT'yi Ethereum ana ağından Arbitrum ağına bağlamak için tasarlanmıştır. GRT'nizi Arbitrum'dan tekrar Ethereum ana ağına aktarmak isterseniz, [Arbitrum Köprüsü](https://bridge.arbitrum.io/?l2ChainId=42161)'nü kullanmanız gerekir. ### Faturalandırma bakiyem tükendiğinde ne olur? Bir uyarı alacak mıyım? diff --git a/website/src/pages/tr/subgraphs/developing/_meta-titles.json b/website/src/pages/tr/subgraphs/developing/_meta-titles.json index 01a91b09ed77..2cf379969da3 100644 --- a/website/src/pages/tr/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/tr/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { - "creating": "Creating", - "deploying": "Deploying", - "publishing": "Publishing", - "managing": "Managing" + "creating": "Oluşturma", + "deploying": "Dağıtma", + "publishing": "Yayımlama", + "managing": "Yönetme" } diff --git a/website/src/pages/tr/subgraphs/developing/creating/advanced.mdx b/website/src/pages/tr/subgraphs/developing/creating/advanced.mdx index 980b0069c3e9..76cad13dfa04 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/tr/subgraphs/developing/creating/advanced.mdx @@ -4,20 +4,20 @@ title: Advanced Subgraph Features ## Genel Bakış -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | | [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` | | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | -| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | +| [Aşılama](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Zaman Serileri ve Toplulaştırmalar @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. 
-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Örnek Şema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Ölümcül Olmayan Hatalar -Halihazırda senkronize edilmiş subgraphlarda indeksleme hataları varsayılan olarak subgraph başarısız olmasına ve senkronizasyonun durmasına neden olur. Hatalara rağmen senkronizasyonun devam etmesi için subgraphlar, hata tetikleyen işleyicinin yapılan değişikliklerini yok sayarak yapılandırılabilir. Bu, subgraph yazarlarının subgraphlarını düzeltmeleri için zaman kazandırırken, sorguların en son blokta sunulmaya devam etmesini sağlar, ancak hata nedeniyle sonuçlar tutarsız olabilir. Bazı hatalar hala her zaman ölümcül olacaktır. Ölümcül olmaması için hatanın belirlenmiş olması gerekmektedir. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Ölümcül olmayan hataların etkinleştirilmesi, subgraph manifestinde aşağıdaki özellik bayrağının ayarlanmasını gerektirir: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave Dosya Veri Kaynakları -Dosya veri kaynakları, indeksleme sırasında zincir dışı verilere sağlam ve genişletilebilir bir şekilde erişmek için yeni bir subgraph fonksiyonudur. Dosya veri kaynakları IPFS'den ve Arweave'den dosya getirmeyi desteklemektedir. 
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > Bu aynı zamanda zincir dışı verilerinin belirlenebilir indekslenmesi için zemin hazırlar ve keyfi HTTP kaynaklı verilerin tanıtılma potansiyelini de beraberinde getirir. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//Bu örnek kod, bir Crypto coven subgraph'ı içindir. Yukarıdaki ipfs hash'ı, tüm kripto NFT'leri için token üst verilerine sahip bir dizindir. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //Bu, tek bir Crypto coven NFT için üst verilere giden bir yol oluşturur. Dizini "/" + dosya adı + ".json" ile birleştirir. + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" token.ipfsURI = tokenIpfsHash @@ -317,23 +317,23 @@ Bu, Graph Düğümü'nün yapılandırılmış IPFS veya Arweave uç noktasını This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. 
-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Tebrikler, dosya veri kaynaklarını kullanıyorsunuz! -#### Subgraph'ınızı dağıtma +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Sınırlamalar -Dosya veri kaynağı işleyicileri ve varlıkları yürütüldüklerinde belirleyici olmaları ve zincir tabanlı veri kaynaklarının bozulmasını önlemeleri için, diğer subgraph varlıklarından izole edilir,. Açıkça şunlardır: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Dosya Veri Kaynakları tarafından oluşturulan varlıklar değiştirilemez ve güncellenemez - Dosya Veri Kaynağı işleyicileri, diğer dosya veri kaynaklarından varlıklara erişemez - Dosya Veri Kaynaklarıyla ilişkili varlıklara zincir tabanlı işleyicilerden erişilemez -> Bu kısıtlama çoğu kullanım durumu için sorun oluşturmamalıdır, ancak bazı durumlarda karmaşıklıklığa sebep olabilir. Dosya tabanlı verilerinizi bir subgraph'ta modellemekte zorluk yaşarsanız, lütfen Discord üzerinden bizimle iletişime geçin! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Ek olarak, zincir üstü bir veri kaynağı veya başka bir dosya veri kaynağı olsun, bir dosya veri kaynağından veri kaynakları oluşturmak mümkün değildir. Bu kısıtlama gelecekte kaldırılabilir. 
@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Konu filtreleri veya endekslenmiş argüman filtreleri olarak da bilinen bu özellik, subgraph'lerin endekslenmiş argümanlarının değerlerine göre blok zinciri olaylarını hassas bir şekilde filtrelemesine olanak tanır. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- Bu filtreler, blokzincirindeki büyük olay akışından ilgilenilen belirli olayları izole etmeye yardımcı olarak, subgraph'lerin yalnızca alakalı verilere odaklanmasını ve böylece daha verimli çalışmasını sağlar. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- Bu özellik, belirli adresleri ve bunların blokzincirindeki çeşitli akıllı sözleşmelerle olan etkileşimlerini izleyen kişisel subgraph'ler oluşturmak için faydalıdır. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### Konu Filtreleri Nasıl Çalışır -Bir akıllı sözleşme olay yaydığında, endekslenmiş olarak işaretlenen tüm argümanlar bir subgraph'in manifestosunda filtre olarak kullanılabilir. Bu durum, subgraph'in yalnızca ilgili endekslenmiş argümanlara uyan olayları dinleyip diğerlerini görmezden gelmesini sağlar. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. 
- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ Bu örnekte: #### Subgraph'lerde Yapılandırma -Konu filtreleri, subgraph manifestosunda doğrudan olay işleyici yapılandırması içinde tanımlanır. Yapılandırma örneği: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ Bu konfigürasyonda: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Örnek 2: İki veya Daha Fazla Adres Arasında Her İki Yönde Gerçekleşen İşlemleri Takip Etme @@ -452,17 +452,17 @@ Bu konfigürasyonda: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- Subgraph, birden fazla adres arasında her iki yönde gerçekleşen işlemleri endeksleyerek tüm adresleri içeren etkileşimlerin kapsamlı bir şekilde izlenmesini sağlar. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Deklare edilmiş eth_call > Not: Bu, henüz stabil bir Graph Düğümü sürümünde mevcut olmayan deneysel bir özelliktir. Yalnızca Subgraph Studio'da veya sağlayıcılığını kendiniz yaptığınız düğümünüzde kullanabilirsiniz. 
-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. Bu özellik: -- Ethereum blokzincirinden veri getirme performansını önemli ölçüde artırır. Bunu birden fazla çağrı için toplam süreyi azaltarak ve subgraph'in genel verimliliğini optimize ederek yapar. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Daha hızlı veri çekmeye olanak tanıyarak, daha hızlı sorgu yanıtları alınmasını ve daha iyi bir kullanıcı deneyimi sağlar. - Birden fazla Ethereum çağrısından veri toplaması gereken uygulamalar için bekleme sürelerini azaltarak veri çekme sürecini daha verimli hale getirir. @@ -474,7 +474,7 @@ Bu özellik: #### Scenario without Declarative `eth_calls` -Bir kullanıcının işlemleri, bakiyesi ve token varlıkları hakkında veri almak için üç Ethereum çağrısı yapması gereken bir subgraph'iniz olduğunu düşünün. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Geleneksel olarak, bu çağrılar ardışık olarak yapılabilir: @@ -498,15 +498,15 @@ Toplam süre = max (3, 2, 4) = 4 saniye #### Nasıl Çalışır -1. Bildirimsel Tanım: Subgraph manifestosunda, Ethereum çağrılarını paralel olarak çalıştırılabilecek şekilde tanımlarsınız. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Paralel Çalıştırma Motoru: Graph Düğümü'nün yürütme motoru bu bildirimleri tanır ve çağrıları aynı anda çalıştırır. -3. 
Sonuçların Birleştirilmesi: Tüm çağrılar tamamlandığında, sonuçlar birleştirilir ve sonraki işlemler için subgraph tarafından kullanılır. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Subgraph Manifestosunda Örnek Yapılandırma Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Yukarıdaki örnek için detaylar: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. -`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block.
This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Graftlama, temel verileri indekslemek yerine kopyaladığından, subgraph'ı istenen bloğa getirmek sıfırdan indekslemeye nazaran çok daha hızlıdır, ancak ilk veri kopyası çok büyük subgraphlar için yine birkaç saat sürebilir. Graftlanmış subgraph başlatılırken, Graph Düğümü halihazırda kopyalanmış olan varlık türleri hakkında bilgileri kaydedecektir. 
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -Graftlanan subgraph, temel subgraphla tamamen aynı olmayan, ancak onunla uyumlu olan bir GraphQL şeması kullanabilir. Kendi başına geçerli bir subgraph şeması olmalıdır, ancak şu şekillerde temel subgraph şemasından sapabilir: +Graftlanan subgraph, temel subgraphla tamamen aynı olmayan, ancak onunla uyumlu olan bir GraphQL şemasını kullanabilir. Kendi başına geçerli bir subgraph şeması olmalıdır, ancak şu şekillerde temel subgraph şemasından sapabilir: - Varlık türlerini ekler veya kaldırır - Varlık türlerinden öznitelikleri kaldırır @@ -560,4 +560,4 @@ Graftlanan subgraph, temel subgraphla tamamen aynı olmayan, ancak onunla uyumlu - Arayüzleri ekler veya kaldırır - Arayüzün hangi varlık türleri için uygulandığını değiştirir -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/tr/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/tr/subgraphs/developing/creating/assemblyscript-mappings.mdx index d3182334749c..7199a149244d 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/tr/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. 
Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ Eğer aynı ID'ye sahip yeni bir varlıkta bir alan için değer atanmamışsa, ## Kod Oluşturma -Akıllı sözleşmeler, olaylar ve varlıklarla çalışmayı kolay ve tip güvenli hale getirmek amacıyla Graph CLI, subgraph'ın GraphQL şemasından ve veri kaynaklarında bulunan sözleşme ABI'lerinden AssemblyScript türleri oluşturabilir. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. Bununla yapılır @@ -80,7 +80,7 @@ Bununla yapılır graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. 
In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. 
-Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/tr/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/tr/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/tr/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/tr/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/tr/subgraphs/developing/creating/graph-ts/api.mdx index e90c754f6c34..1b8f899e6161 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/tr/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API'si --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). 
+> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Subgraph eşlemeleri yazarken kullanılabilecek yerleşik API'leri öğrenin. Hazır olarak sunulan iki tür API mevcuttur: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - [Graph TypeScript kütüphanesi](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Subgraph dosyalarından `graph codegen` tarafından üretilen kod +- Code generated from Subgraph files by `graph codegen` [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) ile uyumlu olduğu sürece diğer kütüphaneleri de bağımlılık olarak ekleyebilirsiniz. @@ -27,7 +27,7 @@ Dil eşlemeleri AssemblyScript ile yazıldığından, [AssemblyScript wiki'sinde ### Sürümler -Subgraph manifestosundaki `apiVersion`, bir subgraph için Graph Düğümü tarafından çalıştırılan eşleme (mapping) API'sinin sürümünü belirtir. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Sürüm | Sürüm Notları | | :-: | --- | @@ -37,7 +37,7 @@ Subgraph manifestosundaki `apiVersion`, bir subgraph için Graph Düğümü tara | 0.0.6 | Ethereum Transaction nesnesine `nonce` alanı eklendi
Ethereum Block nesnesine `baseFeePerGas` eklendi | | 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | | 0.0.4 | Ethereum SmartContractCall nesnesine `functionSignature` alanı eklendi | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.3 | Ethereum Call nesnesine `from` alanı eklendi
`ethereum.call.address`, `ethereum.call.to` olarak yeniden adlandırıldı | | 0.0.2 | Ethereum Transaction nesnesine `input` alanı eklendi | ### Dahili Türler @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' `store` API'si, varlıkları Graph Düğümü deposundan yüklemeye, depoya kaydetmeye ve depodan kaldırmaya olanak tanır. -Depoya yazılan varlıklar, subgraph'in GraphQL şemasında tanımlanan `@entity` türleriyle bire bir eşleşir. Bu varlıklarla çalışmayı kolaylaştırmak için [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) tarafından sağlanan `graph codegen` komutu varlık sınıfları oluşturur. Varlık sınıfları, şemadaki alanlar için özellik alıcıları ve ayarlayıcılarının yanı sıra bu varlıkları yüklemek ve kaydetmek için metotlar içeren, yerleşik `Entity` türünün alt sınıflarıdır. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Unsurların Oluşturulması @@ -282,8 +282,8 @@ Varlık henüz depoda mevcut olmayabileceğinden, `load` yöntemi `Transfer | nu The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- Eğer işlem mevcut değilse subgraph sırf varlığın mevcut olmadığını öğrenmek için veritabanına başvurmak zorunda kalacaktır. Ancak, subgraph yazarı varlığın aynı blokta oluşturulmuş olması gerektiğini zaten biliyorsa, `loadInBlock` kullanmak bu veritabanı sorgusunu ortadan kaldırır. 
-- Bazı subgraph'lerde bu başarısız aramalar endeksleme süresine önemli ölçüde etki edebilir. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // veya ID nasıl oluşturulurmuşsa @@ -380,11 +380,11 @@ Ethereum API'si, akıllı sözleşmelere, genel durum değişkenlerine, sözleş #### Ethereum Türleri İçin Destek -Varlıklarda olduğu gibi `graph codegen`, bir subgraph'te kullanılan tüm akıllı sözleşmeler ve olaylar için sınıflar oluşturur. Bunun için, sözleşme ABI'lerinin subgraph manifestosundaki veri kaynağının bir parçası olması gerekir. ABI dosyaları genelde `abis/` klasöründe saklanır. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -Oluşturulan sınıflarla sayesinde Ethereum türleri ile [yerleşik türler](#built-in-types) arasındaki dönüşümler arka planda gerçekleşir, böylece subgraph yazarlarının bu dönüşümlerle ilgilenmesine gerek kalmaz. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -Aşağıdaki örnek bunu açıklar. Aşağıdaki gibi bir subgraph şeması verildiğinde +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Akıllı Sözleşme Durumuna Erişim -`graph codegen` tarafından oluşturulan kod, subgraph'te kullanılan akıllı sözleşmeler için sınıflar da içerir. 
Bu sınıflar, mevcut blokta sözleşmenin genel durum değişkenlerine erişmek ve sözleşme fonksiyonlarını çağırmak için kullanılabilir. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. Yaygın bir model, bir olayın kaynaklandığı sözleşmeye erişmektir. Bu, aşağıdaki kodla elde edilir: @@ -506,7 +506,7 @@ Burada `Transfer`, varlık türüyle adlandırma çakışmasını önlemek için Ethereum üzerindeki `ERC20Contract` sözleşmesi `symbol` adında herkese açık ve salt okunur bir fonksiyona sahip olduğu sürece, `.symbol()` ile çağrılabilir. Genel durum değişkenleri için otomatik olarak aynı ada sahip bir metot oluşturulur. -Subgraph parçası olan diğer tüm sözleşmelerde oluşturulan koddan içe aktarılabilir ve geçerli bir adrese bağlanabilir. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Geri Dönen Çağrıları Yönetme @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // false döndürür import { log } from '@graphprotocol/graph-ts' ``` -`log` API'si, subgraph'lerin bilgileri Graph Düğümü standart çıktısına ve Graph Gezgini'ne kaydetmesine olanak tanır. Mesajlar farklı günlük seviyeleri kullanılarak kaydedilebilir. Verilen argümanlardan günlük mesajlarını oluşturmak için temel bir biçimlendirme dizesi sentaksı sunulmaktadır. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from the arguments. `log` API'si aşağıdaki fonksiyonları içerir: @@ -590,7 +590,7 @@ import { log } from '@graphprotocol/graph-ts' - `log.info(fmt: string, args: Array): void` – bir bilgilendirme mesajı kaydeder. - `log.warning(fmt: string, args: Array): void` – bir uyarı mesajı kaydeder.
- `log.error(fmt: string, args: Array): void` – bir hata mesajı kaydeder. -- `log.critical(fmt: string, args: Array): void` – kritik bir mesaj kaydeder **ve** subgraph'i sonlandırır. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. `log` API'si bir format dizesi ve bir dize değerleri dizisini alır. Daha sonra, dizideki dize değerlerini format dizesindeki yer tutucuların yerine koyar. İlk `{}` yer tutucusu dizideki ilk değerle, ikinci `{}` yer tutucusu ikinci değerle ve bu şekilde devam ederek değiştirilir. @@ -722,7 +722,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) Şu anda desteklenen tek bayrak `ipfs.map`'e iletilmesi gereken `json` bayrağıdır. `json` bayrağı ile IPFS dosyası, her satırda bir JSON değeri olacak şekilde bir dizi JSON değerinden oluşmalıdır. `ipfs.map` çağrısı, dosyadaki her satırı okur, bir `JSONValue` olarak ayrıştırır (deserialize eder) ve her biri için geri çağırma (callback) fonksiyonunu çağırır. Geri çağırma fonksiyonu daha sonra `JSONValue`dan gelen verileri depolamak için varlık operasyonlarını kullanabilir. Varlık değişiklikleri yalnızca `ipfs.map`'i çağıran işleyici başarıyla tamamlandığında depolanır; bu sırada değişiklikler bellekte tutulur ve bu nedenle `ipfs.map`'in işleyebileceği dosya boyutu sınırlıdır. -Başarılı olduğunda `ipfs.map`, `void` döndürür. Geri çağırma fonksiyonunun herhangi bir çağrısı bir hataya neden olursa, `ipfs.map`'i çağıran işleyici durdurulur ve subgraph başarısız olarak işaretlenir. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. 
### Kripto(Crypto) API'si @@ -837,7 +837,7 @@ Temel `Entity` sınıfı ve alt sınıf olan `DataSourceContext` sınıfı, alan ### Manifest'teki DataSourceContext -`dataSources` içindeki `context` bölümü, subgraph eşlemeleriniz içinde erişilebilen anahtar-değer çiftlerini tanımlamanıza olanak tanır. Kullanılabilir türler şunlardır: `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List` ve `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. İşte `context` bölümünde çeşitli türlerin kullanımını gösteren bir YAML örneği: @@ -888,4 +888,4 @@ dataSources: - `List`: Elemanlardan oluşan bir liste belirtir. Her elemanın türü ve verisi belirtilmelidir. - `BigInt`: Büyük bir tamsayı değeri belirtir. Büyük boyutu nedeniyle tırnak içinde yazılması gerekir. -Bu bağlama daha sonra subgraph eşleştirme dosyalarınızdan erişilebilir ve böylece daha dinamik ve yapılandırılabilir subgraphlar elde edebilirsiniz. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/tr/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/tr/subgraphs/developing/creating/graph-ts/common-issues.mdx index 681a0a3c6b31..ef24d83f4b94 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/tr/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Genel AssemblyScript Sorunları --- -Subgraph geliştirme sırasında karşılaşılması muhtemel bazı [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) sorunları bulunmaktadır. Bu sorunlar, hata ayıklama zorluğuna göre değişiklik gösterse de bunların farkında olmak faydalı olabilir. 
Aşağıda, bu sorunların kapsamlı olmayan bir listesi verilmiştir: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Kapsam [Closure fonksiyonlarına] aktarılmaz (https://www.assemblyscript.org/status.html#on-closures) kalıtılmaz, yani closure fonksiyonlarının dışında tanımlanan değişkenler bu fonksiyonlar içinde kullanılamaz. Daha fazla açıklama için [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s) videosuna bakabilirsiniz. diff --git a/website/src/pages/tr/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/tr/subgraphs/developing/creating/install-the-cli.mdx index 5b0633a6c1bf..08c282f651d6 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/tr/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Graph CLI'ı Yükleyin --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). 
It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Genel Bakış -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Buradan Başlayın @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. 
## Subgraph Oluştur ### Mevcut Bir Sözleşmeden -Aşağıdaki komut, mevcut bir sözleşmenin tüm olaylarını endeksleyen bir subgraph oluşturur: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - İsteğe bağlı argümanlar eksikse, komut sizi bir etkileşimli forma yönlendirir. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. ### Örnek Bir Subgraph'ten -Aşağıdaki komut, örnek bir subgraph'ten yeni bir proje ilklendirir: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. 
They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is ABI dosya(lar)ı sözleşme(ler) inizle uygun olmalıdır. ABI dosyalarını edinmek için birkaç yol vardır: - Kendi projenizi oluşturuyorsanız, muhtemelen en güncel ABI'lerinize erişiminiz olacaktır. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Sürümleri - -| Sürüm | Sürüm Notları | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | İşleyicilerin işlem makbuzlarına erişim desteği eklendi. | -| 0.0.4 | Subgraph özelliklerini yönetme desteği eklendi. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/tr/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/tr/subgraphs/developing/creating/ql-schema.mdx index 3aedce85e696..1d13ea1eba9f 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/tr/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Genel Bakış -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Varlıkları tanımlamadan önce, verilerinizin nasıl yapılandırıldığını ve nasıl bağlantılı olduğunu düşünmek önemlidir. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - Varlıkları, olaylar veya fonksiyonlar yerine “veri içeren nesneler” olarak düşünmek faydalı olabilir. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -Birden çoğa ilişkileriiçin, ilişki her zaman 'birden' tarafında depolanmalı ve her zaman 'çoğa' tarafında türetilmelidir. İlişkinin 'çoğa' tarafında bir dizi varlık depolamak yerine bu şekilde saklanması, subgraph indeksleme ve sorgulaması adına önemli ölçüde daha iyi performans sağlayacaktır. Genel olarak, varlık dizilerini depolamaktan mümkün olduğunca sakınılması gerekmektedir. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
#### Örnek @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Çoktan çoğa ilişkileri depolamanın daha ayrıntılı bu yolu, subgraph için depolanan veri miktarının azalmasına ve bu sonucunda genellikle indekslenmesi ve sorgulanması önemli ölçüde daha hızlı olan bir subgraph sağlayacaktır. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Şemaya notlar/yorumlar ekleme @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Desteklenen diller diff --git a/website/src/pages/tr/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/tr/subgraphs/developing/creating/starting-your-subgraph.mdx index c10f6facbb0d..bfa4920f6360 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/tr/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Genel Bakış -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| Sürüm | Sürüm Notları | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | İşleyicilerin işlem makbuzlarına erişim desteği eklendi. | +| 0.0.4 | Subgraph özelliklerini yönetme desteği eklendi. | diff --git a/website/src/pages/tr/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/tr/subgraphs/developing/creating/subgraph-manifest.mdx index 88693e796ef6..f6be12c03edb 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/tr/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Genel Bakış -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -Tek bir subgraph: +A single Subgraph can: - Birden fazla akıllı sözleşmeden veri endeksleyebilir (fakat birden fazla ağdan endeksleyemez). @@ -24,12 +24,12 @@ Tek bir subgraph: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
+> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). Manifest için güncellenmesi gereken önemli girdiler şunlardır: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. 
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. 
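As a sketch of the multi-contract pattern just described, indexing a second contract is simply another entry in the `dataSources` array. The contract names, addresses, event signatures, and handler names below are illustrative placeholders, not part of this guide's example:

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000001' # placeholder address
      abi: Gravity
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Gravatar
      abis:
        - name: Gravity
          file: ./abis/Gravity.json
      eventHandlers:
        - event: NewGravatar(uint256,address,string,string)
          handler: handleNewGravatar
      file: ./src/mapping.ts
  # Second data source: a hypothetical additional contract on the same network
  - kind: ethereum/contract
    name: OtherContract
    network: mainnet # must match the first data source's network
    source:
      address: '0x0000000000000000000000000000000000000002' # placeholder address
      abi: OtherContract
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Gravatar
      abis:
        - name: OtherContract
          file: ./abis/OtherContract.json
      eventHandlers:
        - event: SomeEvent(address,uint256) # hypothetical event
          handler: handleSomeEvent
      file: ./src/mapping.ts
```

Both entries must target the same network, since a single Subgraph cannot index across networks.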
## Olay İşleyicileri -Bir subgraph'in olay işleyicileri, blokzincir üzerindeki akıllı sözleşmeler tarafından yayılan belirli olaylara tepki verir, ve subgraph'in manifesto dosyasında tanımlanan işleyicileri tetikler. Bu, subgraph'lerin tanımlanmış mantığa göre olay verilerini işlemesini ve depolamasını sağlar. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Olay İşleyici Tanımlama -Bir olay işleyici, subgraph'in YAML yapılandırmasında bir veri kaynağı içinde tanımlanır. Hangi olayların dinleneceğini ve bu olaylar algılandığında hangi fonksiyonun çalıştırılacağını belirtir. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -144,16 +144,16 @@ dataSources: handler: handleApproval - event: Transfer(address,address,uint256) handler: handleTransfer - topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Opsiyonel konu filtresi, sadece verilen konuyu içeren olayları filtreler. + topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic. ``` ## Çağrı İşleyicileri -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Çağrı işleyicileri yalnızca iki durumdan birinde tetiklenir: belirtilen işlevin sözleşme tarafından değil, başka bir hesap tarafından çağrılması durumunda veya Solidity'de harici olarak işaretlenip aynı sözleşmenin başka bir işlevinin bir parçası olarak çağrılması durumunda yalnızca tetiklenir. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Bir Çağrı İşleyici Tanımlama @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Eşleştirme fonksiyonu -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Blok İşleyicileri -Bir subgraph, sözleşme olaylarına veya işlev çağrılarına abone olmanın yanı sıra, zincire yeni bloklar eklendikçe verilerini güncellemek isteyebilir. Bu işlemi gerçekleştirmek için a subgraph, her blok sonrasında veya önceden tanımlanmış bir filtreye uygun bloklardan sonra bir işlev çalıştırabilir. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. 
### Desteklenen Filtreler @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. Bir blok işleyicisi için filtre olmaması, işleyicinin her blok için çağrılacağı anlamına gelir. Bir veri kaynağı, her filtre türü için yalnızca bir blok işleyicisi içerebilir. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filtresi @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Once filtresi ile tanımlanan işleyici, diğer tüm işleyiciler çalışmadan önce yalnızca bir kez çağrılacaktır. Bu yapılandırma, subgraph'ın işleyiciyi indekslemenin başlangıcında belirli görevleri yerine getirmesine olanak sağlayan bir başlatma işleyicisi olarak kullanmasına yarar. 
+The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Eşleştirme fonksiyonu -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Başlangıç Blokları -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Endeksleyici İpuçları -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Budama -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> Subgraph'lerde "geçmiş" terimi, bu bağlamda, değiştirilebilir varlıkların eski durumlarına dair verilerin saklanmasıyla ilgilidir. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. Verilen bir bloktaki geçmiş, şu durumlar için gereklidir: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Subgraph'i verilen bloka geri sarmak +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block Eğer verilen bloktaki tarihsel veri budanmışsa yukarıdaki özellikler kullanılamayacaktır. 
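A time travel query of the kind listed above can be sketched as follows. This assumes the Gravatar schema used in this guide's examples (a `gravatars` collection with `id` and `displayName` fields); the block number is arbitrary:

```graphql
{
  gravatars(first: 5, block: { number: 8000000 }) {
    id
    displayName
  }
}
```

If history up to a later block has been pruned, the entity states as of block 8000000 are no longer available and a query like this cannot be served, which is why `prune: never` (or a sufficiently large block count) matters for time travel use cases.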
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: Belirli bir miktarda tarihsel veri saklamak için: @@ -532,3 +532,18 @@ Varlık durumlarının tam geçmişini korumak için: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Sürüm | Sürüm Notları | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). 
| +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | İşleyicilerin işlem makbuzlarına erişim desteği eklendi. | +| 0.0.4 | Subgraph özelliklerini yönetme desteği eklendi. | diff --git a/website/src/pages/tr/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/tr/subgraphs/developing/creating/unit-testing-framework.mdx index fe203de9b520..05268d06e3d3 100644 --- a/website/src/pages/tr/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/tr/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Birim Testi Framework'ü --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. 
## Buradan Başlayın @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI seçenekleri @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. 
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Öğretici videolar -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Test yapısı -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -163,7 +163,7 @@ _**IMPORTANT: The test structure described below depens on `matchstick-as` versi **_Notes:_** -- _Describes are not mandatory. You can still use test() the old way, outside of the describe() blocks_ +- _Açıklamalar zorunlu değildir. Hala test() fonksiyonunu describe() bloklarının dışında kullanabilirsiniz_ Örnek: @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im İşte başardın - ilk testimizi oluşturduk! 👏 -Şimdi testlerimizi çalıştırmak için subgraph kök klasörünüzde şunu çalıştırmanız yeterlidir: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. 
The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Kapsamı -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. 
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Ek Kaynaklar -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Geribildirim diff --git a/website/src/pages/tr/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/tr/subgraphs/developing/deploying/multiple-networks.mdx index 2241675eac10..d401f6ad16b2 100644 --- a/website/src/pages/tr/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/tr/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Bir Subgraph'i Birden Fazla Ağda Dağıtma +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## Subgraph'i Birden Fazla Ağda Dağıtma +## Deploying the Subgraph to multiple networks -Bazı durumlarda, tüm kodu tekrarlamak zorunda olmadan aynı subgraph'i birden fazla ağda yayına almak isteyebilirsiniz. Bunu yapmaktaki temel zorluk, sözleşme kodu tamamen aynı olsa dahi, farklı ağlardaki sözleşme adreslerinin farklı olmasıdır. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. 
The main challenge that comes with this is that the contract addresses on these networks are different. ### `graph-cli` Kullanarak @@ -20,7 +21,7 @@ Seçenekler: --network-file Ağ yapılandırma dosya yolu (varsayılan: "./networks.json") ``` -`--network` seçeneğini, geliştirme sırasında subgraph'inizi kolayca güncellemek amacıyla, bir `json` standart dosyası kullanarak bir ağ yapılandırması belirlemek için kullanabilirsiniz. (Varsayılan olarak `networks.json` dosyasını kullanır.) +You can use the `--network` option to specify a network configuration from a standard `json` file (defaults to `networks.json`) to easily update your Subgraph during development. > Not: Artık, `init` komutu, sağlanan bilgilere dayanarak otomatik olarak bir `networks.json` dosyası oluşturmaktadır. Daha sonra mevcut ağları güncelleyebilir veya yeni ağlar ekleyebilirsiniz. @@ -54,7 +55,7 @@ Eğer bir `networks.json` dosyanız yoksa, aşağıdaki yapı ile manuel olarak > Not: Yapılandırma dosyasında `templates` (şablonlar, eğer varsa) kısmını doldurmanıza gerek yoktur, yalnızca `dataSources` (veri kaynaklarını) belirtmelisiniz. Eğer `subgraph.yaml` dosyasında `templates` kısmı tanımlanmışsa, bunların ağı `--network` seçeneği ile belirtilen ağa otomatik olarak güncellenecektir. -Şimdi, subgraph'inizi `mainnet` ve `sepolia` ağlarında dağıtmak istediğinizi varsayalım ve `subgraph.yaml` dosyanız aşağıdaki gibi olsun\`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -`build` komutu, `subgraph.yaml` dosyanızı `sepolia` yapılandırmasıyla güncelleyip ardından subgraph'i yeniden derleyecektir. `subgraph.yaml` dosyanız artık şöyle görünmelidir: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. 
Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config Daha eski `graph-cli` sürümlerini kullanarak kontrat adresleri gibi unsurları parametrize etmenin bir yolu, bunların bir kısmını [Mustache](https://mustache.github.io/) veya [Handlebar](https://handlebarsjs.com/) gibi bir şablonlama sistemiyle oluşturmaktır. -Bu yaklaşımı açıklamak için, bir subgraph'in mainnet ve Sepolia ağlarına farklı sözleşme adresleri ile dağıtılması gerektiğini varsayalım. Her ağ için adresleri sağlayan iki yapılandırma dosyası tanımlayabilirsiniz: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ Her iki ağ için de bir manifesto oluşturmak amacıyla, `package.json` dosyas } ``` -Bu subgraph'i mainnet veya Sepolia üzerinde yayına almak için artık aşağıdaki iki komuttan birini çalıştırabilirsiniz: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ Bunun çalışan bir örneğini [burada](https://github.com/graphprotocol/exampl **Not**: Bu yaklaşım, sözleşme adresleri ve ağ adlarının ötesinde daha fazla değişiklik yapmanın, veya şablonlardan mapping ya da ABI'ler oluşturmanın gerekli olduğu, daha karmaşık durumlara da uygulanabilir. -Bu işlem, subgraph'inizin geride kalıp kalmadığını kontrol etmek için `chainHeadBlock` değerini subgraph'inizdeki `latestBlock` ile karşılaştırmanızı sağlar. `synced`, subgraph'in zincire daha önce hiç yetişip yetişmediğini belirtir. `health` ise şu anda hata olmadığında `healthy` ve bir hata nedeniyle subgraph'in ilerlemesi durduğunda `failed` değerlerini alabilir. Bu durumda, hataya dair ayrıntılar için `fatalError` alanını kontrol edebilirsiniz. 
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio Subgraph Arşivleme Politikası +## Subgraph Studio Subgraph archive policy -Studio’daki bir subgraph sürümü yalnızca aşağıdaki kriterleri karşılaması durumunda arşivlenir: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - Sürüm ağa yayımlanmamıştır (veya yayım askıda kalmıştır) - Sürüm, 45 gün veya daha uzun bir süre önce oluşturulmuştur -- Subgraph son 30 gündür sorgulanmamıştır +- The Subgraph hasn't been queried in 30 days -Ek olarak, yeni bir sürüm yayına alındığında, eğer subgraph yayımlanmadıysa, subgraph’in N-2 sürümü arşivlenir. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Bu politika kapsamında etkilenen her subgraph, ilgili sürümü geri getirme seçeneğine sahiptir. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Subgraph durumunu kontrol etme +## Checking Subgraph health -Bir subgraph'in başarıyla senkronize olması, sonsuza kadar sorunsuz çalışmaya devam edeceğine dair iyi bir işarettir. Ancak, ağdaki yeni tetikleyiciler subgraph'inizin test edilmemiş bir hata durumuna düşmesine neden olabilir, veya performans sorunları ya da düğüm operatörlerindeki sorunlar nedeniyle subgraph geride kalmaya başlayabilir. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever.
However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Düğümü, subgraph'inizin durumunu kontrol etmek için sorgu yapabileceğiniz bir GraphQL uç noktası sunar. Sağlayıcı hizmetinde bu uç nokta `https://api.thegraph.com/index-node/graphql` adresinde bulunmaktadır. Yerel bir düğümde ise varsayılan olarak `8030/graphql` portunda erişilebilir. Bu uç noktanın tam şemasına [buradan](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql) ulaşabilirsiniz. İşte bir subgraph'in güncel sürümünün durumunu kontrol eden örnek bir sorgu: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Düğümü, subgraph'inizin durumunu kontrol etmek için sorgu yapabilece } ``` -Bu işlem, subgraph'inizin geride kalıp kalmadığını kontrol etmek için `chainHeadBlock` değerini subgraph'inizdeki `latestBlock` ile karşılaştırmanızı sağlar. `synced`, subgraph'in zincire daha önce hiç yetişip yetişmediğini belirtir. `health` ise şu anda hata olmadığında `healthy` ve bir hata nedeniyle subgraph'in ilerlemesi durduğunda `failed` değerlerini alabilir. Bu durumda, hataya dair ayrıntılar için `fatalError` alanını kontrol edebilirsiniz. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain.
`health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/tr/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/tr/subgraphs/developing/deploying/using-subgraph-studio.mdx index d7aaee820f01..a4e8ca41d951 100644 --- a/website/src/pages/tr/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/tr/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Subgraph'inizi Subgraph Studio'da dağıtma adımlarını öğrenin. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio'ya Genel Bakış [Subgraph Studio](https://thegraph.com/studio/)'da aşağıdakileri yapabilirsiniz: -- Oluşturmuş olduğunuz sugraph'lerin listesini görüntülemek -- Belirli bir subgraph'i yönetmek, subgraph'in detaylarını görmek ve durumunu görüntülemek -- Belirli subgraph'ler için API anahtarlarınızı oluşturmak ve yönetmek +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - API anahtarlarınızı belirli alanlara sınırlamak ve yalnızca belirli Endeksleyicilerin bu anahtarlarla sorgulama yapmasına izin vermek -- Subgraph'inizi oluşturmak -- Subgraph'inizi The Graph CLI'yi kullanarak dağıtmak -- Subgraph'inizi playground ortamında test etmek -- Geliştirme sorgu URL'sini kullanarak subgraph’inizi hazırlama ortamına entegre etmek -- Subgraph'inizi The Graph Ağında yayımlamak +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Faturalarınızı yönetmek ## The Graph CLI'yi Yükleme @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. [Subgraph Studio](https://thegraph.com/studio/)'yu açın. 2. Giriş yapmak için cüzdanınızı bağlayın. - Cüzdan bağlamak için MetaMask, Conbase Wallet, WalletConnect ya da Safe kullanabilirsiniz. -3. Giriş yaptıktan sonra, benzersiz yayına alma anahtarınız subgraph ayrıntıları sayfasında görünecektir. - - Dağıtma anahtarınız subgraph'lerinizi yayımlamanızı veya API anahtarlarınızı ve faturanızı yönetmenizi sağlar. Dağıtma anahtarınız benzersizdir; ancak anahtarınızın ele geçirildiğini düşünüyorsanız bu anahtarı yeniden yaratabilirsiniz. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. 
It is unique but can be regenerated if you think it has been compromised. -> Önemli not: Subgraph'leri sorgulamak için bir API anahtarına sahip olmanız gerekmektedir +> Important: You need an API key to query Subgraphs ### Subgraph Stüdyo'da Subgraph Nasıl Oluşturulur @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### The Graph Ağı ile Subgraph Uyumluluğu -Subgraph'lerin Graph Ağı Endeksleyicileri tarafından desteklenebilmesi için şu gereklilikleri karşılaması gerekir: - -- Index a [supported network](/supported-networks/) -- Aşağıdaki özelliklerden hiçbirini kullanmamalı: - - ipfs.cat & ipfs.map - - Ölümcül Olmayan Hatalar - - Aşılama +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Subgraph'inizi İlklendirme -Subgraph’iniz Subgraph Studio’da oluşturulduktan sonra, aşağıdaki komutla CLI üzerinden subgraph kodunu ilklendirebilirsiniz: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -`` değerini Subgraph Studio’daki subgraph ayrıntı sayfanızda bulabilirsiniz; aşağıdaki resme bakın: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -`graph init` komutunu çalıştırdıktan sonra sorgulamak istediğiniz kontrat adresini, ağı ve ABI’yi girmeniz istenecektir. Bu komut, yerel makinenizde subgraph’inizle çalışmaya başlamanız için bazı temel kodları içeren yeni bir klasör oluşturacaktır. Sonrasında subgraph'inizi işlevselliğini test ederek nihayetlendirebilirsiniz. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. 
This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. ## Graph Auth -Subgraph’inizi Subgraph Studio’da yayına alabilmek için önce CLI üzerinden hesabınıza giriş yapmanız gerekmektedir. Bunun için, subgraph ayrıntıları sayfanızda bulabileceğiniz yayına alma anahtarınıza ihtiyacınız olacak. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. CLI üzerinden kimlik doğrulaması yapmak için aşağıdaki komutu kullanın: @@ -91,11 +85,11 @@ graph auth ## Bir Subgraph’i Dağıtma -Hazır olduğunuzda subgraph’inizi Subgraph Studio’da dağıtabilirsiniz. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> CLI ile bir subgraph dağıtmak, onu Studio’ya iletir; burada subgraph'i test edip meta verilerini güncelleyebilirsiniz. Bu işlem, subgraph’inizi merkeziyetsiz ağda yayımlamaz. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Subgraph’inizi dağıtmak için aşağıdaki CLI komutunu kullanın: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ Bu komutu çalıştırdıktan sonra CLI sizden bir sürüm etiketi isteyecektir. ## Subgraph’inizi Test Etme -Yayına aldıktan sonra, subgraph’inizi (Subgraph Studio’da veya sorgu URL’si ile kendi uygulamanızda) test edebilir, yeni bir sürüm yayına alabilir, meta verileri güncelleyebilir ve hazır olduğunuzda [Graph Gezgini](https://thegraph.com/explorer)'nde yayımlayabilirsiniz. 
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Subgraph Studio’da günlükleri kontrol ederek subgraph’inizle ilgili hataları görebilirsiniz. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. ## Subgraph’inizi Yayımlama -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## CLI ile Subgraph’inizi Sürümleme -Subgraph’inizi güncellemek isterseniz, aşağıdaki adımları izleyebilirsiniz: +If you want to update your Subgraph, you can do the following: - CLI kullanarak Studio’da yeni bir sürüm dağıtabilirsiniz (bu sürüm yalnızca özel olarak kalacaktır). - Memnun kaldığınızda, yeni dağıtımınızı [Graph Gezgini](https://thegraph.com/explorer)'nde yayımlayabilirsiniz. -- Bu işlem, küratörlerin sinyal vermeye başlayabileceği ve Endeksleyicilerin endeksleyebileceği, subgraph'inizin yeni bir sürümünü oluşturur. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. 
You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Subgraph Sürümlerinin Otomatik Arşivlenmesi -Subgraph Studio’da yeni bir subgraph sürümü yayına aldığınızda, önceki sürüm arşivlenecektir. Arşivlenen sürümler endekslenmez/senkronize edilmez ve bu nedenle sorgulanamaz. Subgraph’inizin arşivlenen bir sürümünü Subgraph Studio'da arşivden çıkarabilirsiniz. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Not: Studio’da yayına alınan ancak yayımlanmamış subgraph'lerin önceki sürümlerinin otomatik olarak arşivlenecektir. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. 
![Subgraph Studio - Arşivden Çıkarma](/img/Unarchive.png) diff --git a/website/src/pages/tr/subgraphs/developing/developer-faq.mdx b/website/src/pages/tr/subgraphs/developing/developer-faq.mdx index d464a0058dfb..ecb2f24b39e2 100644 --- a/website/src/pages/tr/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/tr/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ Bu sayfa, The Graph üzerinde geliştirme yapan geliştiricilerin sunduğu en ya ## Subgraph ile İlgili Sorular -### 1. Subgraph nedir? +### 1. What is a Subgraph? -Bir subgraph, blokzinciri verilerine dayalı olarak oluşturulmuş özel yapım bir API’dir. Subgraph'ler, GraphQL sorgu dili kullanılarak sorgulanır ve The Graph CLI kullanılarak bir Graph Düğümü'nde yayına alınır. Dağıtılıp The Graph’in merkeziyetsiz ağına yayımlandığında, Endeksleyiciler subgraph'leri işler ve sorgu yapmaları için kullanıcıların erişimine sunar. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. Subgraph oluşturmanın ilk adımı nedir? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Akıllı sözleşmelerim olay içermiyorsa yine de subgraph oluşturabilir miyim? +### 3. Can I still create a Subgraph if my smart contracts don't have events? 
-Akıllı sözleşmelerinizi, sorgulamak istediğiniz verilerle ilişkili olaylara sahip olacak şekilde yapılandırmanız şiddetle önerilir. Subgraph içindeki olay işleyicileri sözleşme olayları tarafından tetiklenir ve kullanışlı verilere erişmenin en hızlı yolu bu işleyicileri kullanmaktır. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -Eğer çalıştığınız sözleşmeler olay içermiyorsa, subgraph’inizin endekslenmesini çağrı ve blok işleyicileri kullanarak tetikleyebilirsiniz. Ancak bu tavsiye edilmeyen bir yöntemdir ve performansı önemli ölçüde yavaşlatacaktır. +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Subgraph'ımla ilişkili GitHub hesabını değiştirebilir miyim? +### 4. Can I change the GitHub account associated with my Subgraph? -Hayır. Bir subgraph oluşturulduktan sonra, ilişkili GitHub hesabı değiştirilemez. Bu nedenle, subgraph oluşturmadan önce bunu dikkatlice düşünmelisiniz. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. Mainnet'teki bir subgraph nasıl güncellenir? +### 5. How do I update a Subgraph on mainnet? -CLI'yi kullanarak Subgraph Studio’ya yeni bir subgraph sürümü dağıtabilirsiniz. Bu işlem subgraph’inizi gizli olarak tutar, ancak memnun kaldığınızda Graph Gezgini’nde yayımlayabilirsiniz. Bu, Küratörlerin sinyal vermeye başlayabileceği yeni bir subgraph sürümü oluşturur. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer.
This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Bir subgraph’i yeniden dağıtmadan başka bir hesaba veya uç noktaya kopyalayabilir miyim? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Subgraph’i yeniden dağıtmanız gerekir ancak subgraph ID'si (IPFS hash’i) değişmezse, senkronizasyona baştan başlamanıza gerek kalmaz. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. Subgraph eşlemelerinden sözleşme fonksiyonunu nasıl çağırabilir veya bir genel durum değişkenine nasıl erişebilirim? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Eşleyiciler AssemblyScript ile yazıldığından dolayı şu anda mümkün değil. @@ -45,15 +45,15 @@ Bunun alternatif bir çözümü, verileri varlıklarda ham halde depolayıp, JS ### 9. Birden fazla sözleşmeyi dinlerken, olayları dinlenecek sözleşmelerin sırasını seçmek mümkün müdür? -Bir subgraph içindeki olaylar, birden fazla sözleşme üzerinde olup olmamaya bakmaksızın her zaman bloklarda göründükleri sırayla işlenir. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. Şablonlar veri kaynaklarından ne açıdan farklıdır? -Şablonlar, subgraph’iniz endeksleme yaparken veri kaynaklarını hızlıca oluşturmanızı sağlar. Sözleşmeniz, kullanıcılar etkileşime girdikçe yeni sözleşmeler yaratabilir. Bu sözleşmelerin yapısını (ABI, olaylar vb.) 
önceden bildiğinizden, onları nasıl endekslemek istediğinizi bir şablonda tanımlayabilirsiniz. Yeni sözleşmeler oluşturulduğunda, subgraph’iniz sözleşme adresini tespit ederek dinamik bir veri kaynağı oluşturacaktır. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Subgraph'imi silebilir miyim? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. 
## Ağ ile İlgili Sorular @@ -110,11 +110,11 @@ Evet. Sepolia, blok işleyicileri, çağrı işleyicileri ve olay işleyicilerin Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. Endeksleme performansını artırmak için ipuçları var mı? Subgraph'imin senkronize edilmesi çok uzun zaman alıyor +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Subgraph üzerinde doğrudan sorgulama yaparak endekslenmiş en son blok numarasını öğrenmenin bir yolu var mı? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Var! Aşağıdaki komutu, "organization/subgraphName" kısmına subgraph'inizi yayımladığınız organizasyon adını ve subgraph'inizin adını koyarak deneyin: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. 
That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Diğer diff --git a/website/src/pages/tr/subgraphs/developing/introduction.mdx b/website/src/pages/tr/subgraphs/developing/introduction.mdx index 6a76c8957cee..a7b92c65b4df 100644 --- a/website/src/pages/tr/subgraphs/developing/introduction.mdx +++ b/website/src/pages/tr/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. -### What is GraphQL? +### GraphQL Nedir? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. 
+- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/tr/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/tr/subgraphs/developing/managing/deleting-a-subgraph.mdx index e4564fc247f2..23574d11eff3 100644 --- a/website/src/pages/tr/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/tr/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Adım Adım -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. 
Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Subgraph'e halihazırda sinyal vermiş küratörler, sinyallerini ortalama hisse fiyatından geri çekebilir. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. 
diff --git a/website/src/pages/tr/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/tr/subgraphs/developing/managing/transferring-a-subgraph.mdx index 3631cc8a2973..0707d7d2ab5a 100644 --- a/website/src/pages/tr/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/tr/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Merkeziyetsiz ağda yayımlanan subgraph’ler, NFT olarak oluşturulup subgraph’i yayımlayan adrese gönderilir. Bu NFT, The Graph Ağı’ndaki hesaplar arasında transferi kolaylaştıran standart bir ERC721 sözleşmesini temel alır. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- NFT’ye sahip olan kişi, subgraph’in kontrolünü elinde tutar. -- NFT’nin sahibi NFT’yi satmaya veya transfer etmeye karar verirse, artık bu subgraph’i ağ üzerinde düzenleyemez veya güncelleyemez. -- Subgraph kontrolünü kolayca bir multi-sig cüzdana taşıyabilirsiniz. -- Bir topluluk üyesi, bir DAO adına subgraph oluşturabilir. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. 
## Subgraph’inizi NFT Olarak Görüntüleyin -Subgraph’inizi bir NFT olarak görüntülemek için, **OpenSea** gibi bir NFT pazar yerini ziyaret edebilirsiniz: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-address ## Adım Adım -Bir subgraph’in sahipliğini transfer etmek için şu adımları izleyin: +To transfer ownership of a Subgraph, do the following: 1. Subgraph Studio’ya entegre edilmiş kullanıcı arayüzünü kullanın: ![Subgraph Sahipliği Transferi](/img/subgraph-ownership-transfer-1.png)
- [Endeksleyiciler](/indexing/overview/) tarafından endekslenmeye başlanması. @@ -17,33 +18,33 @@ Bir subgraph'i merkeziyetsiz ağda yayımladığınızda, onu şu amaçlarla kul 1. [Subgraph Studio](https://thegraph.com/studio/) paneline gidin 2. **Publish** düğmesine tıklayın -3. Subgraph'iniz artık [Graph Gezgini](https://thegraph.com/explorer/) içinde görünür olacak. +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -Mevcut bir subgraph'in yayımlanmış tüm sürümleri şunları yapabilir: +All published versions of an existing Subgraph can: - Arbitrum One'da yayımlanabilir. [The Graph Ağı'nın Arbitrum üzerindeki durumu hakkında daha fazla bilgi edinin](/archived/arbitrum/arbitrum-faq/). -- Subgraph'in yayımlandığı ağdan bağımsız olarak, [desteklenen ağlar](/supported-networks/) üzerindeki herhangi bir ağda veri endeksleyebilir. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Yayınlanan bir subgraph için üst veri güncelleme +### Updating metadata for a published Subgraph -- Merkeziyetsiz ağda subgraph'inizi yayımladıktan sonra, Subgraph Studio'da metaveriyi istediğiniz zaman güncelleyebilirsiniz. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Yaptığınız değişiklikleri kaydedip güncellemeleri yayımladığınızda, bu güncellemeler Graph Gezgini'nde görünecektir. - Dağıtımınız değişmediği için bu işlemin yeni bir sürüm oluşturmayacağını unutmamak önemlidir. ## CLI'den Yayımlama -0.73.0 sürümünden itibaren subgraph'inizi [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) ile de yayımlayabilirsiniz. +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. `graph-cli`yi açın. 2. 
Aşağıdaki komutları kullanın: `graph codegen && graph build` ardından `graph publish`. -3. Bir pencere açılır ve cüzdanınızı bağlamanıza, metaveri eklemenize ve tamamlanmış subgraph'inizi tercih ettiğiniz bir ağa dağıtmanıza olanak tanır. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) ### Dağıtımınızı özelleştirme -Aşağıdaki bayraklarla subgraph derlemenizi belirli bir IPFS düğümüne yükleyebilir ve dağıtımınızı daha fazla özelleştirebilirsiniz: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` KULLANIM @@ -61,33 +62,33 @@ BAYRAKLAR ``` -## Subgraph'inize sinyal ekleme +## Adding signal to your Subgraph -Geliştiriciler, Endeksleyicileri bir subgraph'i sorgulamaya teşvik etmek için subgraph'lerine GRT sinyali ekleyebilirler. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- Bir subgraph endeksleme ödüllerine uygun ise, "endeksleme ispatı" sağlayan Endeksleyiciler, sinyallenen GRT miktarına bağlı olarak GRT ödülü alır. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- Subgraph'inizin endeksleme ödüllerine uygunluğunu (bu, subgraph özellik kullanımına bağlıdır) [buradan](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) kontrol edebilirsiniz. +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Desteklenen spesifik ağları [buradan](/supported-networks/) inceleyebilirsiniz. -> Eğer bir subgraph ödüllere uygun değilse, bu subgraph'e sinyal eklemek ek Endeksleyicileri çekmeyecektir. 
+> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> Subgraph'iniz ödüllere uygunsa, subgraph'inizi en az 3.000 GRT ile küratörlüğünü yapmanız, ek Endeksleyicilerin subgraph'inizi endekslemesini sağlamak için önerilir. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph. -[Sunrise Yükseltmesi Endeksleyicisi](/archived/sunrise/#what-is-the-upgrade-indexer), tüm subgraph'lerin endekslenmesini sağlar. Ancak, belirli bir subgraph'e GRT sinyali eklemek, daha fazla Endeksleyiciyi bu subgraph'e çekecektir. Küratörlük yoluyla ek Endeksleyicilerin teşvik edilmesi, sorgular için hizmet kalitesini artırmayı, gecikmeyi azaltmayı ve ağ kullanılabilirliğini iyileştirmeyi amaçlar. - -Sinyal verirken, Küratörler belirli bir subgraph sürümüne sinyal vermeyi veya otomatik geçiş (auto-migrate) özelliğini kullanmayı seçebilirler. Eğer otomatik geçiş özelliğini kullanarak sinyal verirlerse, bir küratörün payları her zaman geliştirici tarafından yayımlanan en son sürüme göre güncellenir. Bunun yerine belirli bir sürüme sinyal vermeyi seçerlerse, paylar her zaman bu belirli sürümdeki haliyle kalır. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and improving network availability. Endeksleyiciler, Graph Gezgini'nde gördükleri küratörlük sinyallerine göre endeksleyecekleri subgraph'leri bulabilirler. -![Gezgin subgraph'leri](/img/explorer-subgraphs.png) +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.
+ +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio; subgraph'inizi yayımladığınız işlemde, subgraph'inizin küratörlük havuzuna GRT ekleyerek subgraph'inize sinyal eklemenize olanak tanır. +Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Kürasyon Havuzu](/img/curate-own-subgraph-tx.png) -Alternatif olarak, yayımlanmış bir subgraph'e Graph Gezgini üzerinden GRT sinyali ekleyebilirsiniz. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Gezgin'den sinyal ekleme](/img/signal-from-explorer.png) diff --git a/website/src/pages/tr/subgraphs/developing/subgraphs.mdx b/website/src/pages/tr/subgraphs/developing/subgraphs.mdx index 60c15e0e487e..2b7976396d2c 100644 --- a/website/src/pages/tr/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/tr/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraph'ler ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. 
Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). ## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Yaşam Döngüsü -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. 
[Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. 
+- Use its staging environment to index the deployed Subgraph and make it available for review. +- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. 
It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. -- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. 
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. ### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/tr/subgraphs/explorer.mdx b/website/src/pages/tr/subgraphs/explorer.mdx index 45ead0e64eea..95128bd87f9e 100644 --- a/website/src/pages/tr/subgraphs/explorer.mdx +++ b/website/src/pages/tr/subgraphs/explorer.mdx @@ -2,80 +2,80 @@ title: Graph Gezgini --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +[Graph Gezgini](https://thegraph.com/explorer) ile Subgraph'ler ve ağ verilerinin dünyasını keşfedin. ## Genel Bakış -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
+Graph Gezgini, birden fazla bölümden oluşur; bu bölümlerde [Subgraph'ler](https://thegraph.com/explorer?chain=arbitrum-one) ile etkileşime geçebilir, [delege etme](https://thegraph.com/explorer/delegate?chain=arbitrum-one) işlemi yapabilir, [katılımcılarla](https://thegraph.com/explorer/participants?chain=arbitrum-one) etkileşime geçebilir, [ağ bilgilerini](https://thegraph.com/explorer/network?chain=arbitrum-one) görüntüleyebilir ve kullanıcı profilinize erişebilirsiniz. -## Inside Explorer +## Graph Gezgini'nin İçinde -The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide). +Aşağıda, Graph Gezgini'nin tüm temel özelliklerinin bir dökümü yer almaktadır. Ek destek için [Graph Gezgini video rehberini](/subgraphs/explorer/#video-guide) izleyebilirsiniz. -### Subgraphs Page +### Subgraph Sayfası -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +Subgraph'inizi Subgraph Studio'da dağıtıp yayımladıktan sonra [Graph Gezgini](https://thegraph.com/explorer) sayfasına gidin ve gezinme çubuğundaki "[Subgraph'lar](https://thegraph.com/explorer?chain=arbitrum-one)" bağlantısına tıklayarak aşağıdaki bölümlere erişin: -- Your own finished subgraphs -- Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- Kendi tamamlanmış Subgraph'leriniz +- Başkaları tarafından yayımlanmış subgraph'ler +- İstediğiniz spesifik bir subgraph (oluşturulma tarihi, sinyal miktarı veya adına göre). 
-![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Gezgin Görüntüsü 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +Bir Subgraph'e tıkladığınızda aşağıdakileri yapabilirsiniz: -- Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Oyun alanında (playground) sorguları test edin ve ağ detaylarından faydalanarak bilgili kararlar alın. +- Kendi Subgraph'inize veya başkalarının Subgraph'lerine GRT sinyali göndererek Endeksleyicilere bunların önemini ve kalitesini bildirin. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - Bu kritik bir öneme sahiptir çünkü bir Subgraph'e sinyal vermek, onun endekslenmesini teşvik eder. Bu da, sonunda ağ üzerinde ön plana çıkıp sorgulara hizmet verebileceği anlamına gelir. 
-![Explorer Image 2](/img/Subgraph-Details.png) +![Gezgin Görüntüsü 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +Her Subgraph'in kendi sayfasında aşağıdakileri yapabilirsiniz: -- Subgraphlar üzerinde sinyal/sinyalsizlik +- Subgraph'lere sinyal verme/sinyal geri çekme - Grafikler, mevcut dağıtım kimliği ve diğer üst veri gibi daha fazla ayrıntı görüntüleme -- Subgraph'ın geçmiş yinelemelerini keşfetmek için sürümleri değiştirme -- GraphQL aracılığıyla subgraphlar'ı sorgulama -- Test alanında(playground) subgraphlar'ı test etme -- Belirli bir subgraph üzerinde indeksleme yapan İndeksleyicileri görüntüleme +- Subgraph'in önceki sürümlerini incelemek için versiyonlar arasında geçiş yapma +- Subgraph'leri GraphQL ile sorgulama +- Subgraph'leri playground ortamında test etme +- Belirli bir Subgraph üzerinde endeksleme yapan Endeksleyicileri görüntüleme - Subgraph istatistikleri (tahsisler, Küratörler, vb.) -- Subgraph'ı yayınlayan varlığı görüntüleme +- Subgraph'i yayımlayan varlığı görüntüleme -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![Gezgin Görüntüsü 3](/img/Explorer-Signal-Unsignal.png) -### Delegate Page +### Delege Etme Sayfası -On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer. +[Delege Etme sayfasında](https://thegraph.com/explorer/delegate?chain=arbitrum-one), delege etme, GRT edinme ve bir Endeksleyici seçme hakkında bilgi bulabilirsiniz. -On this page, you can see the following: +Bu sayfada aşağıdakileri görüntüleyebilirsiniz: -- Indexers who collected the most query fees -- Indexers with the highest estimated APR +- En fazla sorgu ücretini toplayan Endeksleyiciler +- Tahmini APR'si (Yıllık Yüzde Getiri Oranı) en yüksek olan Endeksleyiciler -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. 
+Ayrıca, ROI (Yatırım Getirisi) hesaplayabilir ve öne çıkan Endeksleyicileri ada, adrese ya da Subgraph’e göre arayabilirsiniz. -### Participants Page +### Katılımcı Sayfası -This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. +Bu sayfa, Endeksleyiciler, Delegatörler ve Küratörler gibi ağda yer alan tüm "katılımcıları" kapsayan genel bir görüntü sunar. #### 1. İndeksleyiciler -![Explorer Image 4](/img/Indexer-Pane.png) +![Gezgin Görüntüsü 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Endeksleyiciler, protokolün bel kemiğidir. Subgraph'ler üzerinde istifleme yapar, onları endeksler ve Subgraph'leri tüketen herkese sorgular sunarlar. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. +Endeksleyiciler tablosunda, bir Endeksleyicinin delegasyon parametrelerini, istif miktarını, her bir Subgraph'e ne kadar istifleme yaptığını ve sorgu ücretleri ile endeksleme ödüllerinden ne kadar gelir elde ettiğini görebilirsiniz. **Ayrıntılar** -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. 
-- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. +- Sorgu Ücreti Kesintisi - Endeksleyicinin, Delegatörlerle paylaşırken sorgu ücreti iadelerinden kendine ayırdığı yüzdelik oran. +- Etkin Ödül Kesintisi - Delegasyon havuzuna uygulanan endeksleme ödülü kesintisi. Negatif bir değer, Endeksleyicinin kendi ödüllerinin bir kısmını dağıttığı anlamına gelir. Pozitif bir değer ise Endeksleyicinin ödüllerin bir kısmını kendine sakladığını gösterir. +- Kalan Bekleme Süresi - Endeksleyicinin yukarıdaki delegasyon parametrelerini değiştirebilmesi için kalan süre. Bekleme süreleri, Endeksleyiciler delegasyon parametrelerini güncellediğinde kendileri tarafından ayarlanır. +- Sahip Olunan - Endeksleyicinin yatırdığı istif miktarıdır ve kötü niyetli veya hatalı davranışlar nedeniyle kesinti (slashing) uygulanabilir. +- Delege Edilen - Delegatörlerden gelen ve Endeksleyici tarafından tahsis edilebilen, ancak kesintiye uğratılamayan istif. +- Tahsis Edilen - Endeksleyicilerin, endeksledikleri Subgraph'lere aktif olarak tahsis ettikleri istif. +- Mevcut Delegasyon Kapasitesi - Endeksleyicilerin kotalarını aşmadan önce alabilecekleri delege edilmiş istif miktarı. - Maksimum Delegasyon Kapasitesi - Endekserin verimli bir şekilde kabul edebileceği en yüksek delegasyon miktarıdır. Fazla delege edilmiş pay, tahsisler veya ödül hesaplamaları için kullanılamaz. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - İndeksleyici Ödülleri - bu, İndeksleyici ve Delegatörler tarafından tüm zaman boyunca kazanılan toplam indeksleyici ödülleridir. İndeksleyici ödülleri GRT ihracı yoluyla ödenir. 
@@ -84,56 +84,56 @@ Indexers can earn both query fees and indexing rewards. Functionally, this happe - Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +Nasıl Endeksleyici olunabileceğini daha ayrıntılı öğrenmek için [resmi dokümantasyona](/indexing/overview/) veya [The Graph Academy Endeksleyici rehberlerine](https://thegraph.academy/delegators/choosing-indexers/) göz atabilirsiniz. -![Indexing details pane](/img/Indexing-Details-Pane.png) +![Endeksleme detayları paneli](/img/Indexing-Details-Pane.png) #### 2. Küratörler -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Küratörler, Subgraph'leri analiz ederek en yüksek kaliteye sahip olan Subgraph'leri belirler. Bir Küratör, potansiyel olarak yüksek kaliteli bir Subgraph bulduğunda, onun bağlanma eğrisine sinyal vererek küratörlük yapabilir. Küratörler böylece Endeksleyicilere hangi Subgraph'lerin yüksek kaliteli olduğunu ve hangilerinin endekslenmesi gerektiğini bildirir. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. 
+- Küratörler; topluluk üyeleri, veri tüketicileri veya hatta kendi Subgraph'lerine GRT token'ları yatırarak bağlanma eğrisinde sinyal veren Subgraph geliştiricileri olabilir. + - Küratörler GRT yatırarak bir Subgraph'in küratörlük paylarını yaratırlar. Bu sayede sinyal verdikleri Subgraph'in ürettiği sorgu ücretlerinden bir kısmını kazanabilirler. - The bonding curve incentivizes Curators to curate the highest quality data sources. -In the The Curator table listed below you can see: +Aşağıda listelenen Küratör tablosunda şunları görebilirsiniz: - Küratör'ün küratörlüğe başladığı tarih - Yatırılan GRT sayısı - Küratör'ün sahip olduğu hisse sayısı -![Explorer Image 6](/img/Curation-Overview.png) +![Gezgin Görüntüsü 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/). +Küratör rolü hakkında daha fazla bilgi edinmek isterseniz, [resmi dokümantasyon](/resources/roles/curating/)u veya [The Graph Academy](https://thegraph.academy/curators/)'yi ziyaret edebilirsiniz. #### 3. Delegatörler -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. +Delegatörler, The Graph Ağı'nın güvenliğinin ve merkeziyetsizliğinin korunmasında kilit bir rol oynar. Ağda, bir veya birden fazla Endeksleyiciye GRT token'larını delege ederek (yani istifleyerek, "stake" ederek) katılım sağlarlar. -- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. -- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
-- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/). +- Endeksleyicilerin Delegatörler olmadan kayda değer ödüller ve ücretler kazanma olasılığı düşüktür. Bu nedenle, Endeksleyiciler, endeksleme ödüllerinin ve sorgu ücretlerinin bir kısmını sunarak Delegatörleri kendilerine çekmeye çalışır. +- Delegatörler, geçmiş performans, endeksleme ödül oranları ve sorgu ücreti kesintileri gibi çeşitli değişkenlere dayanarak Endeksleyicileri seçerler. +- Topluluk içindeki itibar da seçim sürecinde bir faktör olabilir. Seçtiğiniz Endeksleyicilerle [The Graph’in Discord sunucusu](https://discord.gg/graphprotocol) veya [The Graph Forumu](https://forum.thegraph.com/) üzerinden bağlantı kurmanız önerilir. -![Explorer Image 7](/img/Delegation-Overview.png) +![Gezgin Görüntüsü 7](/img/Delegation-Overview.png) -In the Delegators table you can see the active Delegators in the community and important metrics: +Delegatörler tablosunda, topluluktaki aktif Delegatörleri ve önemli metrikleri görebilirsiniz: - Bir Delegatör'ün delege ettiği İndeksleyici sayısı -- A Delegator's original delegation +- Bir Delegatörün orijinal delegasyonu - Biriktirdikleri ancak protokolden çekmedikleri ödüller - Protokolden çekilen gerçekleşmiş ödüller - Şu anda protokolde bulunan sahip oldukları toplam GRT miktarı -- The date they last delegated +- Son delege ettikleri tarih -If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
+Delegatör olmak hakkında daha fazla bilgi edinmek isterseniz, [resmi dokümantasyon](/resources/roles/delegating/delegating/)u veya [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers)'yi ziyaret edebilirsiniz. -### Network Page +### Ağ Sayfası -On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +Bu sayfada, küresel TPG'leri (temel performans göstergesi, İng. KPI) görüntüleyebilir, dönem bazlı görünüme geçiş yaparak ağ metriklerini daha ayrıntılı şekilde analiz edebilirsiniz. Bu detaylar, ağın zaman içindeki performansı hakkında bir fikir edinmenizi sağlar. #### Genel Bakış -The overview section has both all the current network metrics and some cumulative metrics over time: +Genel bakış bölümü, hem mevcut ağ metriklerini hem de zamanla biriken bazı metrikleri içerir: - Mevcut toplam ağ payı - İndeksleyiciler ve Delegatörler arasındaki pay paylaşımı @@ -142,12 +142,12 @@ The overview section has both all the current network metrics and some cumulativ - Kürasyon ödülü, enflasyon oranı ve daha fazlası gibi protokol parametreleri - Mevcut dönem ödülleri ve ücretleri -A few key details to note: +Dikkat edilmesi gereken birkaç önemli detay: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. 
during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Sorgu ücretleri, tüketiciler tarafından üretilen ücretleri temsil eder**. Bu ücretler, Endeksleyiciler tarafından Subgraph'lere yönelik tahsislerin kapatılmasından ve sağlanan verilerin tüketiciler tarafından doğrulanmasından sonra, en az 7 dönemlik (aşağıya bakınız) bir sürenin ardından talep edilebilir (veya edilmeyebilir). +- **Endeksleme ödülleri, Endeksleyicilerin dönem sırasında ağ ihraçlarından talep ettiği ödül miktarını temsil eder**. Protokol ihraç miktarı sabit olmasına rağmen, ödüller yalnızca Endeksleyicilerin endeksledikleri Subgraph'lere yönelik tahsislerini kapattıklarında oluşturulur. Bu nedenle, dönem başına ödül miktarı değişkenlik gösterir (örneğin, bazı dönemlerde Endeksleyiciler, günlerce açık kalmış tahsisleri topluca kapatmış olabilir). -![Explorer Image 8](/img/Network-Stats.png) +![Gezgin Görüntüsü 8](/img/Network-Stats.png) #### Dönemler @@ -159,34 +159,34 @@ Dönemler bölümünde, aşağıdaki gibi metrikleri dönem bazında analiz edeb - Aktif dönem, İndeksleyicilerin halihazırda pay tahsis ettiği ve sorgu ücretlerini topladığı dönemdir - Uzlaşma dönemleri, bildirim kanallarının uzlaştırıldığı dönemlerdir. Bu, kullanıcıların kendilerine karşı itirazda bulunması halinde İndeksleyicilerin kesintiye maruz kalacağı anlamına gelir. - Dağıtım dönemleri, bildirim kanallarının dönemler için yerleştiği ve İndeksleyicilerin sorgu ücreti iadelerini talep edebildiği dönemlerdir. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. + - Sonlandırılmış dönemler, Endeksleyiciler tarafından talep edilecek sorgu ücreti iadelerinin kalmadığı dönemlerdir. -![Explorer Image 9](/img/Epoch-Stats.png) +![Gezgin Görüntüsü 9](/img/Epoch-Stats.png) ## Kullanıcı Profiliniz -Your personal profile is the place where you can see your network activity, regardless of your role on the network. 
Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: +Kişisel profiliniz, ağdaki rolünüz ne olursa olsun ağ etkinliğinizi görebileceğiniz yerdir. Kripto cüzdanınız kullanıcı profiliniz olarak işlev görecektir ve Kullanıcı Panosu ile aşağıdaki sekmeleri görüntüleyebileceksiniz: ### Profile Genel Bakış -In this section, you can view the following: +Bu bölümde aşağıdakileri görüntüleyebilirsiniz: -- Any of your current actions you've done. -- Your profile information, description, and website (if you added one). +- Yaptığınız mevcut tüm işlemleriniz. +- Profil bilgileriniz, tanımınız ve (varsa) eklediğiniz web sitesi. -![Explorer Image 10](/img/Profile-Overview.png) +![Gezgin Görüntüsü 10](/img/Profile-Overview.png) ### Subgraphlar Sekmesi -In the Subgraphs tab, you’ll see your published subgraphs. +Subgraph'ler sekmesinde, yayımladığınız Subgraph'leri göreceksiniz. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> Bu, test amaçlı, CLI ile dağıtılmış Subgraph'leri içermeyecektir. Subgraph'ler yalnızca merkeziyetsiz ağda yayımlandıklarında görünecektir. -![Explorer Image 11](/img/Subgraphs-Overview.png) +![Gezgin Görüntüsü 11](/img/Subgraphs-Overview.png) ### İndeksleme Sekmesi -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +Endeksleme sekmesinde, Subgraph'lere yönelik tüm aktif ve geçmiş tahsislerin yer aldığı bir tablo bulacaksınız. Ayrıca, bir Endeksleyici olarak geçmiş performansınızı görebileceğiniz ve analiz edebileceğiniz grafiklere erişebilirsiniz. Bu bölümde ayrıca net İndeksleyici ödülleriniz ve sorgu ücretlerinizle ilgili ayrıntılar da yer alacaktır. 
Aşağıdaki metrikleri göreceksiniz: @@ -197,13 +197,13 @@ Bu bölümde ayrıca net İndeksleyici ödülleriniz ve sorgu ücretlerinizle il - Ödül Kesintisi - Delegatörlerle ayrılırken İndeksleyici ödüllerinin elinizde kalacak yüzdesi - Depozito - kötü niyetli veya yanlış davranışlarınız sonucu kesilebilecek yatırılmış payınız -![Explorer Image 12](/img/Indexer-Stats.png) +![Gezgin Görüntüsü 12](/img/Indexer-Stats.png) ### Delegasyon Sekmesi -Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. +Delegatörler, Graph Ağı için önemlidir. Sağlıklı bir ödül getirisi sağlayacak bir Endeksleyiciyi seçmek için bilgilerini kullanmaları gerekir. -In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +Delegatörler sekmesinde, aktif ve geçmiş delege işlemlerinizin detaylarını ve delege ettiğiniz Endeksleyicilerin metriklerini bulabilirsiniz. Sayfanın ilk yarısında, delegasyon grafiğinizin yanı sıra yalnızca ödül grafiğini de görebilirsiniz. Sol tarafta, mevcut delegasyon metriklerinizi yansıtan APG'leri görebilirsiniz. @@ -219,20 +219,20 @@ Tablonun sağ tarafındaki düğmelerle delegasyonunuzu yönetebilirsiniz - daha Bu grafiğin yatay olarak kaydırılabilir olduğunu unutmayın, bu nedenle sağa doğru kaydırırsanız, delegasyonunuzun durumunu da görebilirsiniz (delege edilen, delegeden çıkarılan, geri çekilebilir). -![Explorer Image 13](/img/Delegation-Stats.png) +![Gezgin Görüntüsü 13](/img/Delegation-Stats.png) ### Kürasyon Sekmesi -Kürasyon sekmesinde, sinyal verdiğiniz (sonucunda sorgu ücreti almanızı sağlayan) tüm subgraphlar'ı bulacaksınız. Sinyalleme, Küratörlerin İndeksleyicilere hangi subgraphlar'ın değerli ve güvenilir olduğunu belirtmesine ve böylece indekslenmeleri gerektiğinin belirtilmesine olanak tanır. 
+Kürasyon sekmesinde, sinyal verdiğiniz (ve böylece sorgu ücretleri alabileceğiniz) tüm Subgraph'leri bulabilirsiniz. Sinyal vermek, Küratörlerin Endeksleyicilere hangi Subgraph'lerin değerli ve güvenilir olduğunu vurgulamasına olanak tanır ve böylece bu Subgraph'lerin endekslenmesi gerektiğini belirtir. Bu sekmede, aşağıdakilerin genel bir bakışını bulacaksınız: -- Sinyal ayrıntılarıyla birlikte küratörlüğünü yaptığınız tüm subgraphlar -- Subraph başına pay toplamları -- Subgraph başına sorgu ödülleri +- Üzerinde küratörlük yaptığınız tüm Subgraph'ler ve sinyal detayları +- Her bir Subgraph için pay toplamları +- Her bir Subgraph için sorgu ödülleri - Güncelleme tarih detayları -![Explorer Image 14](/img/Curation-Stats.png) +![Gezgin Görüntüsü 14](/img/Curation-Stats.png) ### Profil Ayarlarınız @@ -241,16 +241,16 @@ Kullanıcı profilinizde, kişisel profil ayrıntılarınızı yönetebileceksin - Operatörler, İndeksleyici adına protokolde tahsislerin açılması ve kapatılması gibi sınırlı işlemlerde bulunur. Operatörler tipik olarak, stake cüzdanlarından ayrı, İndeksleyicilerin kişisel olarak ayarlayabileceği ağa kontrollü bir şekilde erişime sahip diğer Ethereum adresleridir - Delegasyon parametreleri, GRT'nin siz ve Delegatörleriniz arasındaki dağılımını kontrol etmenizi sağlar. -![Explorer Image 15](/img/Profile-Settings.png) +![Gezgin Görüntüsü 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button. +Graph Gezgini, merkeziyetsiz veri dünyasına açılan asıl portalınız olarak, ağdaki rolünüz ne olursa olsun birçok işlem yapmanıza olanak tanır. Profil ayarlarınıza, adresinizin yanındaki açılır menüyü açarak ve Ayarlar düğmesine tıklayarak erişebilirsiniz.
-![Wallet details](/img/Wallet-Details.png) +![Cüzdan detayları](/img/Wallet-Details.png) ## Ek Kaynaklar -### Video Guide +### Video Kılavuzu -For a general overview of Graph Explorer, check out the video below: +Graph Gezgini hakkında genel bir bakış için aşağıdaki videoyu inceleyin: diff --git a/website/src/pages/tr/subgraphs/guides/_meta.js b/website/src/pages/tr/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/tr/subgraphs/guides/_meta.js +++ b/website/src/pages/tr/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/tr/subgraphs/guides/arweave.mdx b/website/src/pages/tr/subgraphs/guides/arweave.mdx index 08e6c4257268..9dccc056f701 100644 --- a/website/src/pages/tr/subgraphs/guides/arweave.mdx +++ b/website/src/pages/tr/subgraphs/guides/arweave.mdx @@ -1,50 +1,50 @@ --- -title: Building Subgraphs on Arweave +title: Arweave Üzerinde Subgraphlar Oluşturma --- > Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! -In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +Bu rehberde, Arweave blok zincirini indekslemek için nasıl Subgraph'ler oluşturacağınızı ve dağıtacağınızı öğreneceksiniz. -## What is Arweave? +## Arweave Nedir? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +Arweave protokolü geliştiricilere verileri kalıcı olarak depolama imkanı sağlar ve bu, Arweave ile IPFS arasındaki temel farktır. IPFS'te böyle bir özellik bulunmaz; yani IPFS'te depolanan dosyalar kalıcı değildir ve Arweave'de depolanan dosyalar değiştirilemez veya silinemez.
-Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check: +Arweave, protokolü farklı programlama dillerine entegre etmek için halihazırda çok sayıda kütüphane oluşturmuştur. Daha fazla bilgi için şurayı kontrol edebilirsiniz: - [Arwiki](https://arwiki.wiki/#/en/main) -- [Arweave Resources](https://www.arweave.org/build) +- [Arweave Kaynakları](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## Arweave Subgraphları Nedir? -The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). +The Graph, "Subgraph" adı verilen özel açık API'ler oluşturmanıza olanak tanır. Subgraph'ler, endeksleyicilere (sunucu operatörleri) bir blokzincirinde hangi verilerin endeksleneceğini ve sunucularında saklanacağını belirtmek için kullanılır. Böylece [GraphQL](https://graphql.org/) kullanarak bu verilere istediğiniz zaman sorgu yapabilirsiniz. -[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. +[Graph Düğümü](https://github.com/graphprotocol/graph-node) artık Arweave protokolündeki verileri endeksleyebiliyor. Mevcut entegrasyon yalnızca Arweave'i bir blokzinciri olarak (bloklar ve işlemler) endekslemekte olup, henüz depolanan dosyaları endekslememektedir. -## Building an Arweave Subgraph +## Bir Arweave Subgraph'ı Oluşturma -To be able to build and deploy Arweave Subgraphs, you need two packages: +Arweave Subgraphları oluşturabilmek ve dağıtabilmek için iki pakete ihtiyacınız vardır: 1. 
`@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. 2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. -## Subgraph's components +## Subgraph'ın bileşenleri There are three components of a Subgraph: -### 1. Manifest - `subgraph.yaml` +### 1. Manifesto - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +İlgilenilen veri kaynaklarını ve bunların nasıl işlenmesi gerektiğini tanımlar. Arweave yeni bir veri kaynağı türüdür. -### 2. Schema - `schema.graphql` +### 2. Şema - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +Burada, GraphQL kullanarak Subgraph'ınızı indeksledikten sonra hangi verileri sorgulayabilmek istediğinizi tanımlarsınız. Bu aslında, modelin bir istek gövdesinin yapısını tanımladığı bir API modeline benzer. The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). -### 3. AssemblyScript Mappings - `mapping.ts` +### 3. AssemblyScript Eşlemeleri - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +Bu, birisi sizin etkinliklerini gözlemlediğiniz veri kaynaklarıyla etkileşimde bulunduğunda verinin nasıl alınması ve depolanması gerektiğini belirleyen mantıktır. Veri çevrilir ve belirttiğiniz şemaya göre depolanır. 
During Subgraph development there are two key commands: @@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## Subgraph Manifest Tanımı The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: @@ -83,29 +83,29 @@ dataSources: ``` - Arweave Subgraphs introduce a new kind of data source (`arweave`) -- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Ağ, sağlayıcı Graph Düğümü üzerindeki bir ağa karşılık gelmelidir. Subgraph Studio'da, Arweave'in ana ağı `arweave-mainnet` olarak tanımlanır +- Arweave veri kaynakları, bir Arweave cüzdanının genel anahtarı olan opsiyonel bir source.owner alanı sunar -Arweave data sources support two types of handlers: +Arweave veri kaynakları iki tür işleyiciyi destekler: -- `blockHandlers` - Run on every new Arweave block. No source.owner is required. -- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` +- `blockHandlers` - Her yeni Arweave blokunda çalıştırılır. source.owner belirtilmesi gerekmez. +- `transactionHandlers` - Veri kaynağının sahibinin source.owner olduğu her işlemde çalıştırılır. Şu anda `transactionHandlers` için bir sahip (owner) gereklidir.
Kullanıcılar tüm işlemleri işlemek istiyorlarsa `source.owner` olarak boş dize "" sağlamalıdırlar -> The source.owner can be the owner's address, or their Public Key. +> source.owner, sahibin adresi veya Genel Anahtarı olabilir. +> +> İşlemler Arweave permaweb'in yapı taşlarıdır ve son kullanıcılar tarafından oluşturulan nesnelerdir. +> +> Not: [Irys (önceden Bundlr)](https://irys.xyz/) işlemleri henüz desteklenmemektedir. -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - -> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. - -## Schema Definition +## Şema Tanımı Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -## AssemblyScript Mappings +## AssemblyScript Eşlemeleri -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). +Olayları işlemek için kullanılan işleyiciler [AssemblyScript](https://www.assemblyscript.org/) ile yazılmıştır. -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +Arweave endeksleme, [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) için Arweave'e özgü veri türlerini tanıtır. ```tsx class Block { @@ -146,23 +146,23 @@ class Transaction { } ```
+Arweave Subgraph'inin eşleştirmelerini yazmak, bir Ethereum Subgraph'inin eşleştirmelerini yazmaya oldukça benzerdir. Daha fazla bilgi için [buraya](/developing/creating-a-subgraph/#writing-mappings) tıklayın. -## Deploying an Arweave Subgraph in Subgraph Studio +## Subgraph Studio'da Arweave Subgraph'i Dağıtma Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash -graph deploy --access-token +graph deploy --access-token ``` -## Querying an Arweave Subgraph +## Arweave Subgraph'ını Sorgulama The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## Örnek Subgraph'ler Here is an example Subgraph for reference: @@ -174,23 +174,23 @@ Here is an example Subgraph for reference: No, a Subgraph can only support data sources from one chain/network. -### Can I index the stored files on Arweave? +### Depolanmış dosyaları Arweave üzerinde indeksleyebilir miyim? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +Şu anda Graph, Arweave'yi yalnızca bir blok zinciri (blokları ve işlemleri) olarak indekslemektedir. ### Can I identify Bundlr bundles in my Subgraph? -This is not currently supported. +Bu şu anda desteklenmemektedir. -### How can I filter transactions to a specific account? +### İşlemleri belirli bir hesaba özel olarak nasıl filtreleyebilirim? -The source.owner can be the user's public key or account address. +source.owner kullanıcının genel anahtarı veya hesap adresi olabilir. -### What is the current encryption format? +### Mevcut şifreleme formatı nedir? Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). 
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). -The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: +Aşağıdaki `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` yardımcı fonksiyonu kullanılabilir. Bu fonksiyon, `graph-ts`'e eklenecektir: ``` const base64Alphabet = [ diff --git a/website/src/pages/tr/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/tr/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..e8fa4c3e60dc 100644 --- a/website/src/pages/tr/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/tr/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Genel Bakış -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. 
+ +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. 
-### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +ya da ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +80,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Sonuç -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease.
diff --git a/website/src/pages/tr/subgraphs/guides/enums.mdx b/website/src/pages/tr/subgraphs/guides/enums.mdx index 9f55ae07c54b..18c3021ed435 100644 --- a/website/src/pages/tr/subgraphs/guides/enums.mdx +++ b/website/src/pages/tr/subgraphs/guides/enums.mdx @@ -1,20 +1,20 @@ --- -title: Categorize NFT Marketplaces Using Enums +title: NFT Pazar Yerlerini Enums Kullanarak Kategorize Etme --- -Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. +Kodu daha temiz yapmak ve hata yapma riskini azaltmak için Enums kullanın. İşte NFT pazar yerlerinde Enums kullanımına bir örnek. -## What are Enums? +## Enum'lar Nedir? -Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. +Enum'lar veya numaralandırma türleri, bir dizi izin verilen değeri tanımlamanıza olanak tanıyan belirli bir veri türüdür. -### Example of Enums in Your Schema +### Şemanızda Enum Örnekleri If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. -You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. +Şemanızda enum tanımlayabilir ve bir kez tanımlandığında, bir varlık üzerinde bir enum alanı ayarlamak için enum değerlerinin dize (string) gösterimini kullanabilirsiniz.
-Here's what an enum definition might look like in your schema, based on the example above: +İşte yukarıdaki örneğe dayanarak, şemanızda bir enum tanımı şöyle görünebilir: ```graphql enum TokenStatus { @@ -24,109 +24,109 @@ enum TokenStatus { } ``` -This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. +Bu, şemanızda `TokenStatus` türünü kullandığınızda, bunun tanımlı değerlerden tam olarak biri olmasını beklediğiniz anlamına gelir: `OriginalOwner`, `SecondOwner` veya `ThirdOwner`. Böylece tutarlılık ve geçerlilik sağlanmış olur. -To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). +Enum'lar hakkında daha fazla bilgi edinmek için [Subgraph Oluşturma](/developing/creating-a-subgraph/#enums) ve [GraphQL dokümantasyonu](https://graphql.org/learn/schema/#enumeration-types) kaynaklarına göz atın. -## Benefits of Using Enums +## Enum Kullanmanın Faydaları -- **Clarity:** Enums provide meaningful names for values, making data easier to understand. -- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. -- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. +- **Anlaşılırlık:** Enum'lar değerlere anlamlı isimler verir, veriyi daha anlaşılır hale getirir. +- **Doğrulama:** Enum'lar katı değer tanımlamaları uygulayarak geçersiz veri girişlerini önler. +- **Bakım Kolaylığı:** Yeni kategoriler eklemek veya mevcut olanları değiştirmek gerektiğinde, enum'lar bunu odaklı bir şekilde yapmanıza olanak tanır. 
-### Without Enums +### Enum'lar Olmadan -If you choose to define the type as a string instead of using an Enum, your code might look like this: +Türü Enum kullanmak yerine bir dize olarak tanımlamayı seçerseniz, kodunuz şöyle görünebilir: ```graphql type Token @entity { id: ID! tokenId: BigInt! - owner: Bytes! # Owner of the token - tokenStatus: String! # String field to track token status + owner: Bytes! # Token Sahibi + tokenStatus: String! # Token Durumunu Takip Eden Dize Alanı timestamp: BigInt! } ``` -In this schema, `TokenStatus` is a simple string with no specific, allowed values. +Bu şemada `TokenStatus`, izin verilen belirli değerlerle sınırlandırılmamış basit bir dizedir. -#### Why is this a problem? +#### Bu neden bir sorun? -- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. -- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. +- `TokenStatus` değerleri için bir kısıtlama yoktur. Bu yüzden yanlışlıkla herhangi bir dize atanabilir. Bu, yalnızca `OriginalOwner`, `SecondOwner` veya `ThirdOwner` gibi geçerli durumların ayarlanmasını sağlamayı zorlaştırır. +- `OriginalOwner` yerine `Orgnalowner` gibi yazım hataları yaparak verilerin ve potansiyel sorguların güvenilmez hale gelmesine sebep olmak kolaydır. -### With Enums +### Enum Kullanımıyla -Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used. +Serbest formda dizeler atamak yerine, `TokenStatus` için `OriginalOwner`, `SecondOwner` veya `ThirdOwner` gibi belirli değerlerle bir enum tanımlanabilir. Bir enum kullanmak, yalnızca izin verilen değerlerin kullanılmasını sağlar.
-Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.
+Enum'lar; tür güvenliği sağlar, yazım hatası riskini en aza indirir ve tutarlı, güvenilir sonuçlar sunar.
-## Defining Enums for NFT Marketplaces
+## NFT Pazar Yerleri için Enum Tanımlama
-> Note: The following guide uses the CryptoCoven NFT smart contract.
+> Not: Aşağıdaki kılavuz CryptoCoven NFT akıllı sözleşmesini kullanmaktadır.
To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
```gql
-# Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint)
+# CryptoCoven sözleşmesinin etkileşimde bulunduğu pazar yerleri için Enum (muhtemel bir Takas/Basım)
enum Marketplace {
-  OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the marketplace
-  OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
-  SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
-  LooksRare # Represents when a CryptoCoven NFT is traded on the LookRare marketplace
-  # ...and other marketplaces
+  OpenSeaV1 # CryptoCoven NFT'sinin bu pazar yerinde takas yapılmasını temsil eder
+  OpenSeaV2 # CryptoCoven NFT'sinin OpenSeaV2 pazar yerinde takas yapılmasını temsil eder
+  SeaPort # CryptoCoven NFT'sinin SeaPort pazar yerinde takas yapılmasını temsil eder
+  LooksRare # CryptoCoven NFT'sinin LooksRare pazar yerinde takas yapılmasını temsil eder
+  # ...ve diğer pazar yerleri
}
```
-## Using Enums for NFT Marketplaces
+## NFT Pazar Yerleri için Enum Kullanımı
Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
-For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
+Örneğin, NFT satışlarını kaydederken takasta yer alan pazar yerini enum kullanarak belirleyebilirsiniz.
-### Implementing a Function for NFT Marketplaces
+### NFT Pazar Yerleri için Bir Fonksiyon İmplementasyonu
-Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+Enum'dan pazar yeri adını bir dize olarak almak için bir fonksiyonu şöyle uygulayabilirsiniz:
```ts
export function getMarketplaceName(marketplace: Marketplace): string {
-  // Using if-else statements to map the enum value to a string
+  // Enum değerini bir dizeye eşlemek için if-else ifadelerini kullanma
  if (marketplace === Marketplace.OpenSeaV1) {
-    return 'OpenSeaV1' // If the marketplace is OpenSea, return its string representation
+    return 'OpenSeaV1' // Eğer pazar yeri OpenSea ise, onun dize temsilini döndür
  } else if (marketplace === Marketplace.OpenSeaV2) {
    return 'OpenSeaV2'
  } else if (marketplace === Marketplace.SeaPort) {
-    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+    return 'SeaPort' // Eğer pazar yeri SeaPort ise, onun dize temsilini döndür
  } else if (marketplace === Marketplace.LooksRare) {
-    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
-    // ... and other market places
+    return 'LooksRare' // Eğer pazar yeri LooksRare ise, onun dize temsilini döndür
+    // ... ve diğer pazar yerleri
  }
}
```
-## Best Practices for Using Enums
+## Enum Kullanımı için En İyi Uygulamalar
-- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
-- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
-- **Documentation:** Add comments to enum to clarify their purpose and usage.
+- **Tutarlı İsimlendirme:** Okunabilirliği artırmak için enum değerlerine net, açıklayıcı isimler verin.
+- **Merkezi Yönetim:** Tutarlılık için enum'ları tek bir dosyada tutun.
Böylece enum'ların güncellenmesi kolaylaşmış olur ve onların tek bir doğru bilgi kaynağı olmasını sağlar.
+- **Dokümantasyon:** Amaçlarını ve kullanımlarını açıklamak için enum'lara yorumlar ekleyin.
-## Using Enums in Queries
+## Sorgularda Enum Kullanımı
-Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+Sorgulardaki enum'lar verilerin kalitesini artırmanıza ve sonuçları daha kolay yorumlamanıza yardımcı olur. Enum'lar; filtre ve yanıt ögeleri olarak işlev görür, tutarlılığı sağlar ve pazar yeri değerlerindeki hataları azaltır.
-**Specifics**
+**Ayrıntılar**
-- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
-- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+- **Enum ile Filtreleme:** Enum'lar net filtreler sağlar; belirli pazar yerlerini güvenle dahil etmenizi veya hariç tutmanızı mümkün kılar.
+- **Yanıtlarda Enum'lar:** Enum'lar yalnızca tanınan pazar yeri adlarının döndürülmesini garanti eder, bu da sonuçları standart ve isabetli hale getirir.
-### Sample Queries
+### Örnek Sorgular
-#### Query 1: Account With The Highest NFT Marketplace Interactions
+#### Sorgu 1: En Yüksek NFT Pazar Yeri Etkileşimine Sahip Hesap
-This query does the following:
+Bu sorgu şunları yapar:
-- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
-- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response.
+- Farklı pazar yerlerinde en yüksek benzersiz NFT etkileşimlerine sahip hesabı bulur. Bu da çapraz pazar yeri aktivitelerini analiz etmek için mükemmeldir.
+- Pazar yerleri alanı, yanıt içerisindeki pazar yeri değerlerini tutarlı ve doğrulanmış hale getiren pazar yeri enum'ını kullanır.
```gql
{
@@ -137,15 +137,15 @@ This query does the following:
    totalSpent
    uniqueMarketplacesCount
    marketplaces {
-      marketplace # This field returns the enum value representing the marketplace
+      marketplace # Bu alan, pazar yerini temsil eden enum değerini döndürür.
    }
  }
}
```
-#### Returns
+#### Dönüşler
-This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity:
+Bu yanıt, hesap detaylarını ve netlik sağlamak amacıyla enum değerlerine sahip benzersiz pazar yeri etkileşimlerinin listesini sağlar:
```gql
{
@@ -186,12 +186,12 @@ This response provides account details and a list of unique marketplace interact
}
```
-#### Query 2: Most Active Marketplace for CryptoCoven transactions
+#### Sorgu 2: CryptoCoven İşlemleri için En Aktif Pazar Yeri
-This query does the following:
+Bu sorgu şunları yapar:
-- It identifies the marketplace with the highest volume of CryptoCoven transactions.
-- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data.
+- CryptoCoven işlemlerinin en yüksek hacimli olduğu pazar yerini belirler.
+- Yalnızca geçerli pazar yeri türlerinin yanıtta görünmesini sağlamak için pazar yeri enum'ını kullanarak verilerinize güvenilirlik ve tutarlılık katar.
```gql
{
@@ -202,9 +202,9 @@ This query does the following:
}
```
-#### Result 2
+#### Sonuç 2
-The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type:
+Beklenen yanıt, pazar yerini ve ilgili işlem sayısını içerir; pazar yeri türünü belirtmek için enum kullanır:
```gql
{
@@ -219,12 +219,12 @@ The expected response includes the marketplace and the corresponding transaction
}
```
-#### Query 3: Marketplace Interactions with High Transaction Counts
+#### Sorgu 3: Yüksek İşlem Sayısına Sahip Pazar Yeri Etkileşimleri
-This query does the following:
+Bu sorgu şunları yapar:
-- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces.
-- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy.
+- "Unknown" pazar yerlerini hariç tutarak, 100'den fazla işlemi olan ilk dört pazar yerini getirir.
+- Yalnızca geçerli pazar yeri türlerinin dahil edilmesini sağlamak için filtre olarak enum'lar kullanır. Böylece doğruluk oranı artırılmış olur.
```gql
{
@@ -240,9 +240,9 @@ This query does the following:
}
```
-#### Result 3
+#### Sonuç 3
-Expected output includes the marketplaces that meet the criteria, each represented by an enum value:
+Beklenen çıktı, her biri bir enum değeri ile temsil edilen, kriterleri karşılayan pazar yerlerini içerir:
```gql
{
@@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent
}
```
-## Additional Resources
+## Ek Kaynaklar
-For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
+Ek bilgi için bu rehberin [deposuna](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums) göz atın.
diff --git a/website/src/pages/tr/subgraphs/guides/grafting.mdx b/website/src/pages/tr/subgraphs/guides/grafting.mdx index d9abe0e70d2a..15eaafc65c95 100644 --- a/website/src/pages/tr/subgraphs/guides/grafting.mdx +++ b/website/src/pages/tr/subgraphs/guides/grafting.mdx @@ -1,54 +1,54 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: Bir Sözleşmeyi Değiştirin ve Graftlama ile Geçmişini Koruyun --- In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. -## What is Grafting? +## Graftlama Nedir? Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: +Aşılanan subgraph, temel subgraphla tamamen aynı olmayan, ancak onunla uyumlu olan bir GraphQL şeması kullanabilir. 
Kendi başına geçerli bir subgraph şeması olmalıdır, ancak şu şekillerde temel subgraph şemasından sapabilir:
-- It adds or removes entity types
-- It removes attributes from entity types
-- It adds nullable attributes to entity types
-- It turns non-nullable attributes into nullable attributes
-- It adds values to enums
-- It adds or removes interfaces
-- It changes for which entity types an interface is implemented
+- Varlık türlerini ekler veya kaldırır
+- Varlık türlerinden öznitelikleri kaldırır
+- Varlık türlerine null yapılabilir öznitelikler ekler
+- Null yapılamayan öznitelikleri null yapılabilir özniteliklere dönüştürür
+- Numaralandırmalara değerler ekler
+- Arayüzleri ekler veya kaldırır
+- Arayüzün hangi varlık türleri için uygulandığını değiştirir
-For more information, you can check:
+Daha fazla bilgi için şuraya göz atabilirsiniz:
-- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+- [Graftlama](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
-## Important Note on Grafting When Upgrading to the Network
+## Ağa Yükseltme Durumunda Graftlamaya İlişkin Önemli Not
> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network
-### Why Is This Important?
+### Bu Neden Önemli?
Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.
-### Best Practices
+### En İyi Uygulamalar
**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting.
Ensure that the Subgraph is stable and functioning as expected. **Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. -By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +Bu yönergelere uyarak riskleri en aza indirebilir ve daha sorunsuz bir taşıma süreci geçirebilirsiniz. -## Building an Existing Subgraph +## Mevcut Bir Subgraph'ı Oluşturma Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: -- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) +- [Subgraph örnek deposu](https://github.com/Shiyasmohd/grafting-tutorial) > Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). -## Subgraph Manifest Definition +## Subgraph Manifest Tanımı The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: @@ -79,11 +79,11 @@ dataSources: file: ./src/lock.ts ``` -- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` -- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. 
+- `Lock` veri kaynağı, sözleşmeyi derleyip dağıttığımızda elde edeceğimiz "abi" ve sözleşme adresidir
+- Ağ, sorgulanan endekslenmiş bir ağa karşılık gelmelidir. Sepolia testnet üzerinde çalıştığımız için, ağ `sepolia`'dır
+- `mapping` bölümü, ilgili tetikleyicileri ve bu tetikleyicilere yanıt olarak çalıştırılması gereken fonksiyonları tanımlar. Bu durumda, `Withdrawal` olayını dinliyor ve olay yayıldığında `handleWithdrawal` fonksiyonunu çağırıyoruz.
-## Grafting Manifest Definition
+## Graftlama Manifest Tanımı
Grafting requires adding two new items to the original Subgraph manifest:
@@ -96,12 +96,12 @@ graft:
  block: 5956000 # block number
```
-- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
+- `features:` tüm kullanılan [özellik adlarının](/developing/creating-a-subgraph/#experimental-features) bir listesidir.
- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting
-## Deploying the Base Subgraph
+## Temel Subgraph'ı Dağıtma
1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
@@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t
}
```
-It returns something like this:
+Şuna benzer bir şey döndürür:
```
{
@@ -140,9 +140,9 @@ It returns something like this:
Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
-## Deploying the Grafting Subgraph
+## Graftlama Subgraph'ını Dağıtma
-The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
+Graftlama ile yerine geçecek olan subgraph.yaml, yeni bir sözleşme adresine sahip olacaktır. Bu, merkeziyetsiz uygulamanızı güncellediğinizde, bir sözleşmeyi yeniden dağıttığınızda vb. gerçekleşebilir.
1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio.
@@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could
}
```
-It should return the following:
+Aşağıdakileri döndürmelidir:
```
{
@@ -189,14 +189,14 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph-
Congrats! You have successfully grafted a Subgraph onto another Subgraph.
-## Additional Resources
+## Ek Kaynaklar
-If you want more experience with grafting, here are a few examples for popular contracts:
+Graftlama konusunda daha fazla deneyim kazanmak istiyorsanız, yaygın kullanılan sözleşmeler için aşağıda birkaç örnek bulunmaktadır:
- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml),
-To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results
+Daha da iyi bir Graph uzmanı olmak için, temel veri kaynaklarındaki değişikliklerle başa çıkmanın diğer yollarını öğrenmeyi değerlendirin. [Veri Kaynağı Şablonları](/developing/creating-a-subgraph/#data-source-templates) gibi alternatifler benzer sonuçlar elde edebilir
-> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)
+> Not: Bu makaledeki materyalin büyük bir kısmı, daha önce yayımlanmış olan [Arweave makalesinden](/subgraphs/cookbook/arweave/) alınmıştır
diff --git a/website/src/pages/tr/subgraphs/guides/near.mdx b/website/src/pages/tr/subgraphs/guides/near.mdx
index e78a69eb7fa2..134ad4c82262 100644
--- a/website/src/pages/tr/subgraphs/guides/near.mdx
+++ b/website/src/pages/tr/subgraphs/guides/near.mdx
@@ -1,12 +1,12 @@
---
-title: Building Subgraphs on NEAR
+title: NEAR Üzerinde Subgraphlar Oluşturma
---
This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
-## What is NEAR?
+## NEAR Nedir?
-[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. +[NEAR](https://near.org/), merkezi olmayan uygulamalar geliştirmek için kullanılan bir akıllı sözleşme platformudur. Daha fazla bilgi için [resmi dokümantasyona](https://docs.near.org/concepts/basics/protocol) bakabilirsiniz. ## What are NEAR Subgraphs? @@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- Blok işleyicileri: Bunlar her yeni blokta çalışır +- Makbuz işleyicileri: Belirli bir hesapta her mesaj yürütüldüğünde çalışır -[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): +[NEAR dokümantasyonundan](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> Makbuz, sistemdeki eyleme geçirilebilir tek nesnedir. NEAR platformunda "bir işlemin işlenmesinden" bahsettiğimizde, bu nihayetinde bir noktada "makbuzların uygulanması" anlamına gelir. -## Building a NEAR Subgraph +## NEAR Subgraph'ı Oluşturma `@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. @@ -37,7 +37,7 @@ There are three aspects of Subgraph definition: **schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. 
The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Eşlemeleri:** Olay verisini, şemanızda tanımlanan varlıklara dönüştüren [AssemblyScript kodudur](/subgraphs/developing/creating/graph-ts/api/). NEAR desteği, NEAR'a özgü veri türleri ve yeni JSON ayrıştırma işlevi sunar. During Subgraph development there are two key commands: @@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -### Subgraph Manifest Definition +### Subgraph Manifest Tanımı The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: @@ -71,9 +71,9 @@ dataSources: ``` - NEAR Subgraphs introduce a new `kind` of data source (`near`) -- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` -- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. -- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. 
The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. +- `network`, subgraph'i sunan Graph Düğümü üzerindeki bir ağa karşılık gelmelidir. Subgraph Studio'da, NEAR'ın ana ağı `near-mainnet`, ve NEAR'ın test ağı `near-testnet`'tir +- NEAR veri kaynakları, [NEAR hesabı](https://docs.near.org/concepts/protocol/account-model) ile ilişkili, insan tarafından okunabilir bir kimlik olan isteğe bağlı `source.account` alanını sunar. Bu, bir hesap veya alt hesap olabilir. +- NEAR veri kaynakları, isteğe bağlı ek `source.accounts` alanını tanıtır. Bu alan isteğe bağlı sonekler ve önekler içerir. En azından bir önek veya sonek belirtilmelidir. Bu ekler ilgili listedeki değerlerle başlayan veya biten herhangi bir hesabı eşleştirirler. Aşağıdaki örnek şunlarla eşleşecektir: `[app|good].*[morning.near|morning.testnet]`. Sadece önekler veya sonekler listesi gerekiyorsa diğer alan atlanabilir. ```yaml accounts: @@ -85,20 +85,20 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +NEAR veri kaynakları iki tür işleyiciyi destekler: -- `blockHandlers`: run on every new NEAR block. No `source.account` is required. -- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). +- `blockHandlers`: her yeni NEAR blokunda çalıştırılır. `source.account` gerekli değildir. +- `receiptHandlers`: veri kaynağının `source.account`'unun alıcı olduğu her makbuzda çalışır. Makbuz (receipt) teknik bir kavramdır, daha detaylı bilgi için NEAR dokümanlarını inceleyebilirsiniz. Bu noktada, yalnızca tam eşleşmelerin işlendiğine dikkat edin. 
([Alt hesaplar](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) bağımsız veri kaynakları olarak eklenmelidir). -### Schema Definition +### Şema Tanımı Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -### AssemblyScript Mappings +### AssemblyScript Eşlemeleri -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). +Olayları işlemek için kullanılan işleyiciler [AssemblyScript](https://www.assemblyscript.org/) ile yazılmıştır. -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +NEAR endeksleme, [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) için NEAR'a özgü veri türlerini tanıtır. ```typescript @@ -160,20 +160,20 @@ class ReceiptWithOutcome { } ``` -These types are passed to block & receipt handlers: +Bu türler blok & makbuz işleyicilerine aktarılır: -- Block handlers will receive a `Block` -- Receipt handlers will receive a `ReceiptWithOutcome` +- Blok işleyiciler bir `Block` alacaktır +- Makbuz işleyiciler bir `ReceiptWithOutcome` alacaktır Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. +Bu, yeni bir JSON ayrıştırma fonksiyonunu içerir - NEAR üzerindeki günlükler sıklıkla dizeleştirilmiş JSON olarak yayılır. 
Geliştiricilerin bu günlükleri kolayca işlemelerine olanak tanımak için [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) kapsamında yeni bir `json.fromString(...)` fonksiyonu mevcuttur. -## Deploying a NEAR Subgraph +## NEAR Subgraph'ını Dağıtma Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). -Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: +The Graph Ağı'ndaki Subgraph Studio ve yükseltme Endeksleyicisi şu anda beta olarak NEAR ana ağı ve test ağını endekslemeyi, aşağıdaki ağ isimleriyle desteklemektedir: - `near-mainnet` - `near-testnet` @@ -191,17 +191,17 @@ $ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ +graph deploy ``` -### Local Graph Node (based on default configuration) +### Yerel Graph Düğümü (varsayılan yapılandırmaya göre) ```sh -graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 +graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: @@ -216,31 +216,31 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can } ``` -### Indexing NEAR with a Local Graph Node +### NEAR'ı Yerel Graph Düğümü ile İndeksleme -Running a Graph Node that indexes NEAR has the following operational requirements: +NEAR'ı indeksleyen bir Graph Düğümü çalıştırmanın aşağıdaki operasyonel gereksinimleri vardır: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- Firehose enstrümantasyonu ile NEAR İndeksleyici Çerçevesi +- NEAR Firehose Bileşen(ler)i +- Firehose uç noktası yapılandırılmış Graph Düğümü -We will provide more information on running the above components soon. +Yukarıdaki bileşenlerin çalıştırılması hakkında yakında daha fazla bilgi vereceğiz. -## Querying a NEAR Subgraph +## NEAR Subgraph'ını Sorgulama The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## Örnek Subgraph'ler Here are some example Subgraphs for reference: -[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) +[NEAR Blokları](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) -[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) +[NEAR Makbuzları](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) ## FAQ -### How does the beta work? +### Beta nasıl çalışır? NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! @@ -250,11 +250,11 @@ No, a Subgraph can only support data sources from one chain/network. ### Can Subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. 
We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+Şu anda yalnızca Blok ve Makbuz tetikleyicileri desteklenmektedir. Belirli bir hesaba yapılan fonksiyon çağrıları için tetikleyicileri araştırma aşamasındayız. NEAR yerel olay desteğine sahip olduğu takdirde, olay tetikleyicilerini desteklemekle de ilgileniyoruz.
-### Will receipt handlers trigger for accounts and their sub-accounts?
+### Makbuz işleyicileri hesaplar ve bunların alt hesapları için tetiklenecek mi?
-If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts:
+Bir `account` belirtildiyse, yalnızca tam hesap adı eşleştirilecektir. `accounts` alanında, hesapları ve alt hesapları eşleştirmek üzere `suffixes` (sonekler) ve `prefixes` (önekler) belirterek alt hesapları eşleştirmek mümkündür. Örneğin, aşağıdaki örnek `mintbase1.near` alt hesaplarının tümünü eşleştirecektir:
```yaml
accounts:
@@ -264,11 +264,11 @@ accounts:
### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
-This is not supported. We are evaluating whether this functionality is required for indexing.
+Bu desteklenmemektedir. Bu fonksiyonelliğin indeksleme için gerekli olup olmadığını değerlendiriyoruz.
### Can I use data source templates in my NEAR Subgraph?
-This is not currently supported. We are evaluating whether this functionality is required for indexing.
+Bu şu anda desteklenmemektedir. Bu fonksiyonelliğin indeksleme için gerekli olup olmadığını değerlendiriyoruz.
### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
@@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. In the interim, y If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. -## References +## Referanslar -- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) +- [NEAR geliştirici dokümantasyonu](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/tr/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/tr/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..b93a81626d72 100644 --- a/website/src/pages/tr/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/tr/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -1,48 +1,48 @@ --- -title: How to Secure API Keys Using Next.js Server Components +title: Next.js Sunucu Bileşenlerini Kullanarak API Anahtarları Nasıl Güvence Altına Alınır --- -## Overview +## Genel Bakış We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. -### Caveats +### Kısıtlamalar -- Next.js server components do not protect API keys from being drained using denial of service attacks.
-- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. -- Next.js server components introduce centralization risks as the server can go down. +- Next.js sunucu bileşenleri, servis dışı bırakma saldırıları ile API anahtarlarının boşaltılmasına karşı koruma sağlamaz. +- The Graph Ağ geçitleri, servis dışı bırakma saldırı tespiti ve saldırıyı hafifletme stratejilerine sahiptir. Ancak sunucu bileşenlerini kullanmak bu korumaları zayıflatabilir. +- Next.js sunucu bileşenleri, sunucunun çökmesi ihtimali dolayısıyla merkezileşme riskleri taşır. -### Why It's Needed +### Neden Gerekli -In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. +Standart bir React uygulamasında, ön yüz koduna dahil edilen API anahtarları istemci tarafında açığa çıkabilir ve güvenlik riski oluşturabilir. `.env` dosyaları yaygın olarak kullanılsa da React kodu istemci tarafında çalıştığı için anahtarları tam olarak korumazlar ve API anahtarı başlıklarda açığa çıkar. Next.js Sunucu Bileşenleri bu sorunu, hassas işlemleri sunucu tarafında yürüterek çözer. ### Using client-side rendering to query a Subgraph -![Client-side rendering](/img/api-key-client-side-rendering.png) +![İstemci tarafında işleme](/img/api-key-client-side-rendering.png) ### Prerequisites -- An API key from [Subgraph Studio](https://thegraph.com/studio) -- Basic knowledge of Next.js and React. -- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). +- [Subgraph Studio](https://thegraph.com/studio)'dan bir API anahtarı +- Temel Next.js ve React bilgisi. 
+- [Uygulama Yönlendiricisi](https://nextjs.org/docs/app) kullanan mevcut bir Next.js projesi. -## Step-by-Step Cookbook +## Adım Adım Talimatlar -### Step 1: Set Up Environment Variables +### Adım 1: Ortam Değişkenlerini Ayarlayın -1. In our Next.js project root, create a `.env.local` file. -2. Add our API key: `API_KEY=`. +1. Next.js projemizin kök dizininde `.env.local` dosyası oluşturun. +2. API anahtarımızı ekleyin: `API_KEY=`. -### Step 2: Create a Server Component +### Adım 2: Bir Sunucu Bileşeni Oluşturun -1. In our `components` directory, create a new file, `ServerComponent.js`. -2. Use the provided example code to set up the server component. +1. `components` dizinimizde `ServerComponent.js` adında yeni bir dosya oluşturun. +2. Sunucu bileşenini kurmak için sağlanan örnek kodu kullanın. -### Step 3: Implement Server-Side API Request +### Adım 3: Sunucu Tarafı API İsteğini Gerçekleştirin -In `ServerComponent.js`, add the following code: +`ServerComponent.js`'e aşağıdaki kodu ekleyin: ```javascript const API_KEY = process.env.API_KEY @@ -95,10 +95,10 @@ export default async function ServerComponent() { } ``` -### Step 4: Use the Server Component +### Adım 4: Sunucu Bileşenini Kullanın -1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. -2. Render the component: +1. Sayfa dosyamızda (örneğin, `pages/index.js`), `ServerComponent`'ı içe aktarın. +2. Bileşeni işleyin: ```javascript import ServerComponent from './components/ServerComponent' @@ -112,12 +112,12 @@ export default function Home() { } ``` -### Step 5: Run and Test Our Dapp +### Adım 5: Dapp'imizi Çalıştırın ve Test Edin -Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. +`npm run dev` komutunu kullanarak Next.js uygulamamızı başlatın. Sunucu bileşeninin API anahtarını açığa çıkarmadan veri çektiğini doğrulayın.
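The server-side request at the heart of such a component can be sketched as follows. This is a minimal illustration rather than the guide's exact `ServerComponent.js`: the gateway URL shape and query are placeholders, and only `process.env.API_KEY` (which stays on the server and never reaches the browser) is taken from the steps above.

```javascript
// Minimal sketch of a server-side Subgraph fetch (endpoint and query are placeholders).
// API_KEY is read from the server environment and is never shipped to the client.
const API_KEY = process.env.API_KEY

async function fetchSubgraphData(subgraphId, query) {
  // Illustrative URL shape; substitute your Subgraph's real query URL.
  const endpoint = `https://gateway.thegraph.com/api/${API_KEY}/subgraphs/id/${subgraphId}`
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  if (!response.ok) throw new Error(`Subgraph request failed: ${response.status}`)
  const { data, errors } = await response.json()
  if (errors) throw new Error(errors.map((e) => e.message).join('; '))
  return data
}
```

A server component can await this helper and render the result, so the key only ever appears in server-to-gateway traffic.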
-![Server-side rendering](/img/api-key-server-side-rendering.png) +![Sunucu taraflı işleme](/img/api-key-server-side-rendering.png) -### Conclusion +### Sonuç -By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. +Next.js Sunucu Bileşenlerini kullanarak API anahtarını istemci tarafından etkili bir şekilde gizledik ve uygulamamızın güvenliğini artırdık. Bu yöntem, hassas işlemlerin potansiyel istemci taraflı güvenlik açıklarından uzak bir şekilde sunucu tarafında ele alınmasını garanti eder. Son olarak, API anahtar güvenliğinizi daha da artırmak için [diğer API anahtar güvenlik önlemlerini](/subgraphs/querying/managing-api-keys/) incelemeyi unutmayın. diff --git a/website/src/pages/tr/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/tr/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..51bf15b4ecd9 --- /dev/null +++ b/website/src/pages/tr/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Giriş + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development.
Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. 
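Concretely, a dependent Subgraph declares each source Subgraph in its manifest as a data source of `kind: subgraph`, whose `source.address` is the source Subgraph's deployment ID and whose handlers are triggered by entities rather than onchain events. The fragment below is an illustrative sketch only — the names, deployment ID, and field values are placeholders; consult the graph-node v0.37.0 release notes and the example repository linked below for the exact manifest schema:

```yaml
# Hypothetical manifest fragment for a dependent (composed) Subgraph.
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # a Subgraph data source instead of an onchain one
    name: SourceBlockTime
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentId' # placeholder: the source Subgraph's deployment ID
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Block
      handlers:
        - handler: handleBlock
          entity: Block # an entity from the source Subgraph acts as the trigger
      file: ./src/mapping.ts
```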
+ +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs. + +## Başlayalım + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Ayrıntılar + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Ek Kaynaklar + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
diff --git a/website/src/pages/tr/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/tr/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..fd5c2222db9a 100644 --- a/website/src/pages/tr/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/tr/subgraphs/guides/subgraph-debug-forking.mdx @@ -1,26 +1,26 @@ --- -title: Quick and Easy Subgraph Debugging Using Forks +title: Fork Kullanarak Hızlı ve Kolay Subgraph Hata Ayıklama --- As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! -## Ok, what is it? +## Peki, nedir bu Subgraph Forklama? **Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. -## What?! How? +## Ne?! Nasıl? When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. 
-## Please, show me some code! +## Lütfen bana biraz kod göster! To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. -Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: +`Gravatar`'ları endekslemek için tanımlanan, hiçbir hata içermeyen işleyiciler şunlardır: ```tsx export function handleNewGravatar(event: NewGravatar): void { @@ -46,39 +46,39 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. -The usual way to attempt a fix is: +Bir düzeltme denemenin alışılmış yolu şudur: -1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). +1. Eşleştirme kaynağında, sorunu çözeceğine inandığınız bir değişiklik yapın (ben çözmeyeceğini bilsem de). 2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +3. Senkronize olması için bekleyin. +4. Tekrar sorunla karşılaşırsanız 1. aşamaya geri dönün, aksi takdirde: Yaşasın! -It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ +Bu gerçekten sıradan bir hata ayıklama sürecine oldukça benzemektedir, ancak süreci korkunç derecede yavaşlatan bir adım vardır: _3. Senkronize olmasını bekleyin._ Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: -0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue. +0.
**_Uygun çatal-temeli (fork-base)_** ayarlanmış yerel bir Graph Düğümü başlatın. +1. Eşleştirme kaynağında, sorunu çözeceğine inandığınız bir değişiklik yapın. 2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +3. Tekrar sorunla karşılaşırsanız 1. aşamaya geri dönün, aksi takdirde: Yaşasın! -Now, you may have 2 questions: +Şimdi, 2 sorunuz olabilir: -1. fork-base what??? -2. Forking who?! +1. fork-base da ne??? +2. Kimi forkluyoruz?! -And I answer: +Ve ben cevap veriyorum: 1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +2. Forklama kolay, ter dökmeye gerek yok: ```bash -$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 +$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! -So, here is what I do: +İşte ben şunları yapıyorum: 1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). @@ -90,12 +90,12 @@ $ cargo run -p graph-node --release -- \ --fork-base https://api.thegraph.com/subgraphs/id/
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. +2. Dikkatli bir incelemeden sonra, iki işleyicimde `Gravatar`'ları endekslerken kullanılan `id` temsillerinde bir uyumsuzluk olduğunu fark ettim. `handleNewGravatar` onu bir hex'e dönüştürürken (`event.params.id.toHex()`), `handleUpdatedGravatar` bir int32 (`event.params.id.toI32()`) kullanıyor, bu da `handleUpdatedGravatar`'ın "Gravatar not found!" hatasını vermesine neden oluyor. İkisini de `id`'yi hex'e dönüştürecek şekilde düzenledim. 3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` -4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. +4. Yerel Graph Düğümü tarafından üretilen günlükleri inceliyorum ve yaşasın! Her şey yolunda görünüyor. 5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/tr/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/tr/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..ba461201d71f 100644 --- a/website/src/pages/tr/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/tr/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,10 +1,10 @@ --- -title: Safe Subgraph Code Generator +title: Güvenli Subgraph Kod Oluşturucu --- [Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. 
It ensures that all interactions with entities in your Subgraph are completely safe and consistent. -## Why integrate with Subgraph Uncrashable? +## Neden Subgraph Uncrashable'ı entegre etmelisiniz? - **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. @@ -12,15 +12,15 @@ title: Safe Subgraph Code Generator - **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. -**Key Features** +**Ana Özellikler** - The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. +- Framework ayrıca unsur değişkenleri grupları için özel, ancak güvenli ayarlayıcı fonksiyonları oluşturmanın bir yolunu (yapılandırma dosyası aracılığıyla) içerir. Bu sayede, kullanıcının eski bir graph unsurunu yüklemesi/kullanması ve ayrıca fonksiyonun gerektirdiği bir değişkeni kaydetmeyi veya ayarlamayı unutması imkansız hale gelir. - Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. -Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. 
+Subgraph Uncrashable, Graph CLI codegen komutu kullanılarak isteğe bağlı bir bayrak olarak çalıştırılabilir. ```sh graph codegen -u [options] [] diff --git a/website/src/pages/tr/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/tr/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..12defb581449 100644 --- a/website/src/pages/tr/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/tr/subgraphs/guides/transfer-to-the-graph.mdx @@ -1,37 +1,37 @@ --- -title: Transfer to The Graph +title: The Graph'e Transfer --- Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). -## Benefits of Switching to The Graph +## The Graph'e Geçmenin Avantajları - Use the same Subgraph that your apps already use with zero-downtime migration. -- Increase reliability from a global network supported by 100+ Indexers. +- Yüzden fazla Endeksleyici tarafından desteklenip global bir ağdan gelen güvenilirliği artırabilirsiniz. - Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. -## Upgrade Your Subgraph to The Graph in 3 Easy Steps +## Subgraph'inizi The Graph'e 3 Kolay Adımda Yükseltin -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Studio Ortamınızı Kurun](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Subgraph'inizi Studio'ya Dağıtın](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [The Graph Ağı'nda Yayımlayın](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) -## 1. Set Up Your Studio Environment +## 1. 
Studio Ortamınızı Kurun -### Create a Subgraph in Subgraph Studio +### Subgraph Studio'da Bir Subgraph Oluştur -- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- [Subgraph Studio](https://thegraph.com/studio/)'ya gidin ve cüzdanınızı bağlayın. - Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". > Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. -### Install the Graph CLI⁠ +### Graph CLI'ı Yükle -You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +Graph CLI'ı kullanmak için [Node.js](https://nodejs.org/) ve tercih ettiğiniz bir paket yöneticisi (`npm` veya `pnpm`) kurulu olmalıdır. [En son](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI sürümünü kontrol edin. -On your local machine, run the following command: +Yerel makinenizde şu komutu çalıştırın: -Using [npm](https://www.npmjs.com/): +[npm](https://www.npmjs.com/) kullanarak: ```sh npm install -g @graphprotocol/graph-cli@latest @@ -43,62 +43,62 @@ Use the following command to create a Subgraph in Studio using the CLI: graph init --product subgraph-studio ``` -### Authenticate Your Subgraph +### Subgraph'iniz İçin Kimlik Doğrulaması Yapın -In The Graph CLI, use the auth command seen in Subgraph Studio: +The Graph CLI'da, Subgraph Studio'da görülen auth komutunu kullanın: ```sh -graph auth +graph auth ``` -## 2. Deploy Your Subgraph to Studio +## 2. Subgraph'inizi Studio'ya Dağıtın If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph.
-In The Graph CLI, run the following command: +The Graph CLI'de aşağıdaki komutu çalıştırın: ```sh -graph deploy --ipfs-hash +graph deploy --ipfs-hash ``` > **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). -## 3. Publish Your Subgraph to The Graph Network +## 3. Subgraph'inizi The Graph Ağı'nda Yayımlayın -![publish button](/img/publish-sub-transfer.png) +![yayımla butonu](/img/publish-sub-transfer.png) -### Query Your Subgraph +### Subgraph'inizi Sorgulayın > To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. -#### Example +#### Örnek [CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: -![Query URL](/img/cryptopunks-screenshot-transfer.png) +![Sorgu URL'si](/img/cryptopunks-screenshot-transfer.png) The query URL for this Subgraph is: ```sh -https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK +https://gateway-arbitrum.network.thegraph.com/api/`**kendi-api-anahtarınız**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK ``` -Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint. +Artık, bu uç noktaya GraphQL sorguları göndermeye başlamak için **kendi API Anahtarınızı** girmeniz yeterlidir. 
-### Getting your own API Key +### Kendi API Anahtarınızı Almak -You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page: +API Anahtarlarını Subgraph Studio'da sayfanın üst kısmındaki “API Anahtarları” menüsünden oluşturabilirsiniz: -![API keys](/img/Api-keys-screenshot.png) +![API anahtarları](/img/Api-keys-screenshot.png) -### Monitor Subgraph Status +### Subgraph Durumunu İzle Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### Ek Kaynaklar - To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). - To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/tr/subgraphs/querying/_meta-titles.json b/website/src/pages/tr/subgraphs/querying/_meta-titles.json index a30daaefc9d0..7411ddf64a7d 100644 --- a/website/src/pages/tr/subgraphs/querying/_meta-titles.json +++ b/website/src/pages/tr/subgraphs/querying/_meta-titles.json @@ -1,3 +1,3 @@ { - "graph-client": "Graph Client" + "graph-client": "Graph İstemcisi" } diff --git a/website/src/pages/tr/subgraphs/querying/best-practices.mdx b/website/src/pages/tr/subgraphs/querying/best-practices.mdx index 1f4ba885476f..a67a07de6d2c 100644 --- a/website/src/pages/tr/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/tr/subgraphs/querying/best-practices.mdx @@ -1,20 +1,20 @@ --- -title: Querying Best Practices +title: Sorgulama - Örnek Uygulamalar --- -The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. +The Graph, blokzincirlerinden veri sorgulamak için merkeziyetsiz bir yöntem sağlar. 
Veriler, bir GraphQL API'si aracılığıyla sunulur ve bu da GraphQL diliyle sorgulamayı kolaylaştırır. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Subgraph'inizi optimize etmek için gerekli temel GraphQL dili kurallarını ve örnek uygulamaları öğrenin. --- -## Querying a GraphQL API +## Bir GraphQL API'sini Sorgulama -### The Anatomy of a GraphQL Query +### Bir GraphQL Sorgusunun Anatomisi -Unlike REST API, a GraphQL API is built upon a Schema that defines which queries can be performed. +REST API'den farklı olarak, bir GraphQL API'si, hangi sorguların gerçekleştirilebileceğini tanımlayan bir Şema üzerine kuruludur. -For example, a query to get a token using the `token` query will look as follows: +Örneğin, `token` sorgusunu kullanarak bir token almak için yapılacak sorgu aşağıdaki gibi olacaktır: ```graphql query GetToken($id: ID!) { @@ -25,7 +25,7 @@ query GetToken($id: ID!) { } ``` -which will return the following predictable JSON response (_when passing the proper `$id` variable value_): +ve bu sorgu (doğru `$id` değişkeni geçirildiğinde) aşağıdaki öngörülebilir JSON yanıtını döndürecektir: ```json { @@ -36,9 +36,9 @@ which will return the following predictable JSON response (_when passing the pro } ``` -GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/). +GraphQL sorguları, [bir spesifikasyon](https://spec.graphql.org/) temelinde tanımlanmış olan GraphQL dilini kullanır.
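A query like `GetToken` above travels over plain HTTP POST, so it can be sent with a standard `fetch` call. The sketch below is illustrative — the endpoint URL is a placeholder for a real Subgraph query URL, and the response shape simply follows the GraphQL convention of `data`/`errors` fields:

```javascript
// Sending the GetToken query over HTTP POST with plain fetch.
// The endpoint below is a placeholder; substitute your Subgraph's query URL.
const endpoint = 'https://example.com/subgraphs/id/<SUBGRAPH_ID>'

const query = `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

async function getToken(id) {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // GraphQL-over-HTTP convention: a JSON body carrying "query" and "variables".
    body: JSON.stringify({ query, variables: { id } }),
  })
  const { data, errors } = await response.json()
  if (errors) throw new Error(errors.map((e) => e.message).join('; '))
  return data.token
}
```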
-The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders): +Yukarıdaki `GetToken` sorgusu, birden fazla dil bileşeninden oluşur (aşağıda `[...]` yer tutucularıyla gösterilmiştir): ```graphql query [operationName]([variableName]: [variableType]) { @@ -50,33 +50,33 @@ query [operationName]([variableName]: [variableType]) { } ``` -## Rules for Writing GraphQL Queries +## GraphQL Sorgusu Yazmanın Kuralları -- Each `queryName` must only be used once per operation. -- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`) -- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/). -- Any variable assigned to an argument must match its type. -- In a given list of variables, each of them must be unique. -- All defined variables must be used. +- Her bir `queryName`, işlem başına yalnızca bir kez kullanılmalıdır. +- Her bir `field`, bir seçim içinde yalnızca bir kez kullanılmalıdır (örneğin, `token` altında `id` alanını iki kez sorgulayamayız) +- Bazı `field`'lar veya sorgular (örneğin `tokens`), alt alan seçimi gerektiren karmaşık türler döndürür. Beklendiğinde alt alan seçimi yapmamak (ya da beklenmediğinde böyle bir seçim yapmak, örneğin `id` üzerinde) bir hata oluşturur. Bir alanın türünü öğrenmek için lütfen [Graph Gezgini](/subgraphs/explorer/) sayfasına bakın. +- Bir argümana atanan herhangi bir değişken, onun türüyle eşleşmelidir. +- Belirli bir değişken listesinde, her bir değişken özgün olmalıdır. +- Tanımlanan tüm değişkenler kullanılmalıdır. -> Note: Failing to follow these rules will result in an error from The Graph API. +> Not: Bu kurallara uyulmaması, The Graph API'sinin hata vermesi ile sonuçlanacaktır. 
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).
+Kod örnekleriyle birlikte tam kurallar listesi için [GraphQL Doğrulamaları rehberine](/resources/migration-guides/graphql-validations-migration-guide/) göz atın.
-### Sending a query to a GraphQL API
+### Bir GraphQL API'sine sorgu göndermek
-GraphQL is a language and set of conventions that transport over HTTP.
+GraphQL, HTTP üzerinden taşınan bir dil ve kurallar bütünüdür.
-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
+Bu, (yerel olarak veya `@whatwg-node/fetch` ya da `isomorphic-fetch` aracılığıyla) standart `fetch` kullanarak bir GraphQL API'sini sorgulayabileceğiniz anlamına gelir.
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
+Ancak, ["Bir Uygulamadan Sorgulama"](/subgraphs/querying/from-an-application/) bölümünde belirtildiği gibi, aşağıdaki benzersiz özellikleri destekleyen `graph-client`'ın kullanılması önerilir:
-- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query
-- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
-- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
-- Fully typed result
+- Zincirler arası Subgraph İşleme: Tek bir sorguda birden fazla Subgraph'ten veri sorgulama
+- [Otomatik Blok Takibi](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
+- [Otomatik Sayfalama](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
+- Tamamen tiplendirilmiş sonuç
-Here's how to query The Graph with `graph-client`:
+`graph-client` aracılığıyla The
Graph sorgusu nasıl yapılır: ```tsx import { execute } from '../.graphclient' @@ -100,15 +100,15 @@ async function main() { main() ``` -More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/). +Daha fazla GraphQL istemcisi alternatifi, ["Bir Uygulamadan Sorgulama"](/subgraphs/querying/from-an-application/) bölümünde ele alınmıştır. --- ## En İyi Uygulamalar -### Always write static queries +### Her zaman statik sorgular yazın -A common (bad) practice is to dynamically build query strings as follows: +Yaygın (ve kötü) bir uygulama, sorgu dizelerini aşağıdaki gibi dinamik olarak oluşturmaktır: ```tsx const id = params.id @@ -121,20 +121,20 @@ query GetToken { } ` -// Execute query... +// Sorguyu çalıştır... ``` -While the above snippet produces a valid GraphQL query, **it has many drawbacks**: +Yukarıdaki kod parçası geçerli bir GraphQL sorgusu üretse de, **birçok dezavantaja sahiptir**: -- it makes it **harder to understand** the query as a whole -- developers are **responsible for safely sanitizing the string interpolation** -- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side** -- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools) +- sorguyu bir bütün olarak anlamayı **zorlaştırır** +- geliştiriciler, **dize enterpolasyonunun güvenliğini sağlamakla sorumludur** +- değişken değerlerinin istek parametreleriyle gönderilmemesi **sunucu tarafındaki önbellekleme olasılığını ortadan kaldırır** +- bu, **araçların sorguyu statik olarak analiz etmesini engeller** (örneğin: Linter veya tür üretim araçları) -For this reason, it is recommended to always write queries as static strings: +Bu nedenle, sorguları her zaman statik dizeler olarak yazmanız önerilir: ```tsx -import { execute } from 'your-favorite-graphql-client' +import { execute } from 'dilediğiniz-graphql-istemcisi' const id = params.id 
const query = ` @@ -153,21 +153,21 @@ const result = await execute(query, { }) ``` -Doing so brings **many advantages**: +Bunu yapmak **birçok avantaj** sağlar: -- **Easy to read and maintain** queries -- The GraphQL **server handles variables sanitization** -- **Variables can be cached** at server-level -- **Queries can be statically analyzed by tools** (more on this in the following sections) +- **Okuması ve bakımı kolay** sorgular +- GraphQL sunucusu **değişkenlerin güvenli hale getirilmesini üstlenir** +- Değişkenler sunucu düzeyinde **önbelleğe alınabilir** +- **Sorgular araçlar tarafından statik olarak analiz edilebilir** (detaylar sonraki bölümlerde açıklanacaktır) -### How to include fields conditionally in static queries +### Statik sorgularda alanlar nasıl koşullu olarak dahil edilir -You might want to include the `owner` field only on a particular condition. +`owner` alanını yalnızca belirli bir koşulla dahil etmek isteyebilirsiniz. -For this, you can leverage the `@include(if:...)` directive as follows: +Bunun için, `@include(if:...)` yönergesinden aşağıdaki gibi yararlanabilirsiniz: ```tsx -import { execute } from 'your-favorite-graphql-client' +import { execute } from 'dilediğiniz-graphql-istemcisi' const id = params.id const query = ` @@ -187,41 +187,41 @@ const result = await execute(query, { }) ``` -> Note: The opposite directive is `@skip(if: ...)`. +> Not: Bunun tersi yönerge `@skip(if: ...)` şeklindedir. -### Ask for what you want +### İstediğini sor -GraphQL became famous for its "Ask for what you want" tagline. +GraphQL, “İstediğini sor” sloganıyla ün kazanmıştı. -For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. +Bu nedenle, GraphQL'de tüm kullanılabilir alanları tek tek listelemeden almanın bir yolu yoktur. - GraphQL API'leri sorgularken, her zaman sadece gerçekten kullanılacak alanları sorgulamayı düşünmelisiniz. 
-- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- Sorguların yalnızca gerçekten ihtiyaç duyduğunuz kadar varlık getirdiğinden emin olun. Varsayılan olarak, sorgular bir koleksiyondan 100 varlık getirir. Bu miktar genellikle gerekenden, örneğin kullanıcıya gösterilecek olandan, çok daha fazladır. Bu durum yalnızca sorgulardaki en üst düzey koleksiyonlar için değil, özellikle iç içe varlık koleksiyonları için de geçerlidir. -For example, in the following query: +Örneğin, aşağıdaki sorguda: ```graphql query listTokens { tokens { - # will fetch up to 100 tokens + # en fazla 100 token getirilecektir id transactions { - # will fetch up to 100 transactions + # en fazla 100 işlem getirilecektir id } } } ``` -The response could contain 100 transactions for each of the 100 tokens. +Yanıt, 100 token için 100'er işlem içerebilir. -If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field. +Uygulama yalnızca 10 işleme ihtiyaç duyuyorsa, sorguda transactions (işlemler) alanına açıkça `first: 10` değeri verilmelidir. -### Use a single query to request multiple records +### Birden fazla kaydı istemek için tek bir sorgu kullanın -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +Varsayılan olarak, Subgraph'ler tek bir kayıt için tekil bir varlık sunar. 
Birden fazla kayıt almak için çoğul varlıkları ve filtrelemeyi kullanın: `where: {id_in:[X,Y,Z]}` veya `where: {volume_gt:100000}` -Example of inefficient querying: +Verimsiz bir sorgulama örneği: ```graphql query SingleRecord { @@ -238,7 +238,7 @@ query SingleRecord { } ``` -Example of optimized querying: +Optimize edilmiş bir sorgulama örneği: ```graphql query ManyRecords { @@ -249,12 +249,12 @@ query ManyRecords { } ``` -### Combine multiple queries in a single request +### Birden fazla sorguyu tek bir istekte birleştirin -Your application might require querying multiple types of data as follows: +Uygulamanız buradaki gibi birden fazla veri türünü sorgulamanızı gerektirebilir: ```graphql -import { execute } from "your-favorite-graphql-client" +import { execute } from "dilediğiniz-graphql-istemcisi" const tokensQuery = ` query GetTokens { @@ -281,12 +281,12 @@ const [tokens, counters] = Promise.all( ) ``` -While this implementation is totally valid, it will require two round trips with the GraphQL API. +Bu uygulama tamamen geçerli olsa da, GraphQL API'si ile iki kez veri alışverişi yapılmasını gerektirir. -Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows: +Neyse ki, aynı GraphQL isteği içinde birden fazla sorgu göndermek de aşağıdaki gibi geçerlidir: ```graphql -import { execute } from "your-favorite-graphql-client" +import { execute } from "dilediğiniz-graphql-istemcisi" const query = ` query GetTokensandCounters { @@ -304,13 +304,13 @@ query GetTokensandCounters { const { result: { tokens, counters } } = execute(query) ``` -This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**. +Bu yaklaşım, ağ üzerinde harcanan süreyi azaltarak **genel performansı artırır** (API'ye yapılan bir gidiş-dönüşü ortadan kaldırır) ve **daha sade bir uygulama** sunar. 
-### Leverage GraphQL Fragments
+### GraphQL Parçalarını (Fragment) Kullanın
-A helpful feature to write GraphQL queries is GraphQL Fragment.
+GraphQL sorguları yazarken faydalı bir özellik de GraphQL Fragment'tir.
-Looking at the following query, you will notice that some fields are repeated across multiple Selection-Sets (`{ ... }`):
+Aşağıdaki sorguya baktığınızda, bazı alanların birden fazla Seçim Kümesi (`{ ... }`) içinde tekrarlandığını fark edeceksiniz:
```graphql
query {
@@ -330,12 +330,12 @@ query {
}
```
-Such repeated fields (`id`, `active`, `status`) bring many issues:
+Bu tür tekrarlanan alanlar (`id`, `active`, `status`) birçok sorunu beraberinde getirir:
-- More extensive queries become harder to read.
-- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
+- Daha kapsamlı sorguların okunması zorlaşır.
+- Sorgulara dayanarak TypeScript türleri üreten araçlar kullanıldığında (_ilgili detaylar için son bölüme bakınız_), `newDelegate` ve `oldDelegate` iki ayrı satır içi arayüz (inline interface) olarak tanımlanır.
-A refactored version of the query would be the following:
+Sorgunun yeniden düzenlenmiş hali aşağıdaki gibi olacaktır:
```graphql
query {
@@ -350,8 +350,8 @@ query {
}
}
-# we define a fragment (subtype) on Transcoder
-# to factorize repeated fields in the query
+# sorguda tekrarlanan alanları ortaklaştırmak için
+# Transcoder üzerinde bir fragment (alt tür) tanımlıyoruz
fragment DelegateItem on Transcoder {
id
active
@@ -359,15 +359,15 @@ fragment DelegateItem on Transcoder {
}
```
-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+GraphQL `fragment` kullanmak, okunabilirliği artırır (özellikle büyük sorgularda) ve daha iyi TypeScript türleri üretilmesini sağlar.
-When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
+Tür üretim aracını kullanırken, yukarıdaki sorgu doğru bir `DelegateItemFragment` türü (_sondaki "Araçlar" bölümüne bakabilirsiniz_) oluşturacaktır.
-### GraphQL Fragment do's and don'ts
+### GraphQL Fragment kullanırken yapılması ve kaçınılması gerekenler
### Fragment tabanı bir tip olmalıdır
-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+Bir Fragment, geçerli olmayan bir tür üzerinde tanımlanamaz. Kısacası, **alanları olmayan bir tür** üzerinde kullanılamaz:
```graphql
fragment MyFragment on BigInt {
@@ -375,11 +375,11 @@ query {
# ...
}
```
-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
+`BigInt`, bir **skaler** (yerel "basit" tür) olduğu için bir fragment'in temeli olarak kullanılamaz.
#### Fragment Nasıl Yayılır
-Fragments are defined on specific types and should be used accordingly in queries.
+Fragment'ler belirli türler üzerinde tanımlanır ve sorgularda buna uygun şekilde kullanılmalıdır.
Örnek:
@@ -388,7 +388,7 @@ query {
bondEvents {
id
newDelegate {
- ...VoteItem # Error! `VoteItem` cannot be spread on `Transcoder` type
+ ...VoteItem # Hata! `VoteItem` fragment'i `Transcoder` türü üzerinde kullanılamaz
}
oldDelegate {
...VoteItem
@@ -402,29 +402,29 @@ fragment VoteItem on Vote {
}
```
-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+`newDelegate` ve `oldDelegate`, `Transcoder` türündendir.
-It is not possible to spread a fragment of type `Vote` here.
+Burada `Vote` türünde bir fragment kullanılamaz.
#### Fragment'ı atomik bir veri iş birimi olarak tanımlayın
-GraphQL `Fragment`s must be defined based on their usage.
+GraphQL'de bir `Fragment`, kullanım amacına göre tanımlanmalıdır.
-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+Çoğu kullanım senaryosunda, her tür için bir fragment tanımlamak (özellikle tekrarlanan alan kullanımı veya tür üretimi durumlarında) yeterlidir.
-Here is a rule of thumb for using fragments:
+İşte fragment kullanımı için temel bir kural:
-- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- Aynı türdeki alanlar bir sorguda tekrar ediyorsa, bunları bir `Fragment` içinde gruplayın.
+- Benzer ancak farklı alanlar tekrar ediyorsa, birden fazla "fragment" oluşturun. Örneğin:
```graphql
-# base fragment (mostly used in listing)
+# temel fragment (genellikle listeleme işlemlerinde kullanılır)
fragment Voter on Vote {
id
voter
}
-# extended fragment (when querying a detailed view of a vote)
+# genişletilmiş fragment (bir vote'un detaylı görünümünü sorgularken kullanılır)
fragment VoteWithPoll on Vote {
id
voter
@@ -438,51 +438,51 @@ fragment VoteWithPoll on Vote {
---
-## The Essential Tools
+## Temel Araçlar
-### GraphQL web-based explorers
+### Web tabanlı GraphQL gezginleri
-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+Sorgularınızı uygulamanızda çalıştırarak denemek zahmetli olabilir. Bu nedenle, sorgularınızı uygulamaya eklemeden önce test etmek için [Graph Gezgini](https://thegraph.com/explorer)'ni kullanmaktan çekinmeyin. Graph Gezgini, sorgularınızı test etmeniz için önceden yapılandırılmış bir GraphQL playground'u sunar.
-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql). +Sorgularınızda hata ayıklamak/sorgularınızı test etmek için daha esnek bir yol arıyorsanız, [Altair](https://altairgraphql.dev/) ve [GraphiQL](https://graphiql-online.com/graphiql) gibi benzer web tabanlı araçlar da mevcuttur. -### GraphQL Linting +### GraphQL'de Linting -In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools. +Yukarıda belirtilen örnek uygulamalar ve sözdizim kurallarına uyum sağlamak için aşağıdaki iş akışı ve IDE (Entegre Geliştirme Ortamı) araçlarını kullanmanız şiddetle tavsiye edilir. **GraphQL ESLint** -[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort. +[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started), GraphQL örnek uygulamalarına zahmetsizce uymanıza yardımcı olur. -[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as: +["operations-recommended" yapılandırmasını kurmak](https://the-guild.dev/graphql/eslint/docs/configs), aşağıdakiler gibi temel kuralları zorunlu kılar: -- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type? -- `@graphql-eslint/no-unused variables`: should a given variable stay unused? -- and more! +- `@graphql-eslint/fields-on-correct-type`: alan doğru tür üzerinde mi kullanılmış? +- `@graphql-eslint/no-unused-variables`: verilen bir değişken kullanılmadan mı bırakılmış? +- ve daha fazlası! -This will allow you to **catch errors without even testing queries** on the playground or running them in production! 
+Bu sayede, sorguları playground’da test etmeye ya da üretim ortamında çalıştırmaya gerek kalmadan **hataları önceden yakalayabilirsiniz**! -### IDE plugins +### IDE eklentileri -**VSCode and GraphQL** +**VSCode ve GraphQL** -The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: +[GraphQL VSCode uzantısı](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql), geliştirme sürecinize aşağıdakileri sağlamak için mükemmel bir eklentidir: -- Syntax highlighting -- Autocomplete suggestions -- Validation against schema -- Snippets -- Go to definition for fragments and input types +- Sözdizimi vurgulama +- Otomatik tamamlama önerileri +- Şemaya karşı doğrulama +- Kod parçacıkları (snippets) +- Fragment'ler ve girdi türleri için tanıma gitme özelliği -If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. +`graphql-eslint` kullanıyorsanız, [ESLint VSCode uzantısı](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) kodunuzda hataları ve uyarıları satır içi şekilde doğru bir biçimde görüntülemek için mutlaka edinilmesi gereken bir araçtır. 
-**WebStorm/Intellij and GraphQL** +**WebStorm/Intellij ve GraphQL** -The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: +[JS GraphQL eklentisi](https://plugins.jetbrains.com/plugin/8097-graphql/), GraphQL ile çalışırken deneyiminizi aşağıdakileri sağlayarak önemli ölçüde iyileştirir: -- Syntax highlighting -- Autocomplete suggestions -- Validation against schema -- Snippets +- Sözdizimi vurgulama +- Otomatik tamamlama önerileri +- Şemaya karşı doğrulama +- Kod parçacıkları (snippets) -For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. +Bu konuyla ilgili daha fazla bilgi için, eklentinin tüm ana özelliklerini gösteren [WebStorm makalesine](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) göz atabilirsiniz. diff --git a/website/src/pages/tr/subgraphs/querying/distributed-systems.mdx b/website/src/pages/tr/subgraphs/querying/distributed-systems.mdx index 85337206bfd3..d73b935aa8ab 100644 --- a/website/src/pages/tr/subgraphs/querying/distributed-systems.mdx +++ b/website/src/pages/tr/subgraphs/querying/distributed-systems.mdx @@ -1,50 +1,50 @@ --- -title: Distributed Systems +title: Dağıtık Sistemler --- -The Graph is a protocol implemented as a distributed system. +The Graph, dağıtık bir sistem olarak uygulanmış bir protokoldür. -Connections fail. Requests arrive out of order. Different computers with out-of-sync clocks and states process related requests. Servers restart. Re-orgs happen between requests. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale. +Bağlantılar kesilir. İstekler sıralanmamış şekilde ulaşır. Saatleri ve durumları senkronize olmayan farklı bilgisayarlar ilişkili istekleri işler. Sunucular yeniden başlatılır. 
İstekler arasında yeniden düzenlemeler meydana gelir. Bu sorunlar tüm dağıtık sistemlerin doğasında vardır; ancak küresel ölçekte çalışan sistemlerde bu durumlar daha da şiddetlenir.
-Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org.
+Bir istemcinin, bir yeniden düzenleme (re-org) sırasında en güncel veriyi almak için bir Endeksleyici’yi yokladığı (polling) durumda yaşanabilecek aşağıdaki örneği göz önünde bulundurun.
-1. Indexer ingests block 8
-2. Request served to the client for block 8
-3. Indexer ingests block 9
-4. Indexer ingests block 10A
-5. Request served to the client for block 10A
-6. Indexer detects reorg to 10B and rolls back 10A
-7. Request served to the client for block 9
-8. Indexer ingests block 10B
-9. Indexer ingests block 11
-10. Request served to the client for block 11
+1. Endeksleyici blok 8’i alır ve işler
+2. İstemciye blok 8 için istek sunulur
+3. Endeksleyici blok 9’u alır ve işler
+4. Endeksleyici blok 10A'yı alır ve işler
+5. İstemciye blok 10A için istek sunulur
+6. Endeksleyici, 10B bloğuna yönelik bir yeniden düzenlemeyi algılar ve 10A'yı geri alır (rollback)
+7. İstemciye blok 9 için istek sunulur
+8. Endeksleyici blok 10B'yi alır ve işler
+9. Endeksleyici blok 11’i alır ve işler
+10. İstemciye blok 11 için istek sunulur
-From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time.
+Endeksleyicinin bakış açısından işler mantıklı olarak ilerlemektedir. Zaman ileriye doğru akıyor, her ne kadar bir "uncle" bloğu geri almak ve onun yerine üzerinde konsensüs sağlanan bloku yeniden yürütmek zorunda kalmış olsak da. Bu süreçte, Endeksleyici, o anda bildiği en güncel durumu kullanarak gelen istekleri karşılar.
-From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers. +Ancak istemci bakış açısından işler kaotik görünür. İstemci, yanıtların sırasıyla 8, 10, 9 ve 11 numaralı bloklar için olduğunu gözlemler. Buna "blok dalgalanması" sorunu diyoruz. Bir istemci blok dalgalanması yaşadığında, veri zaman içinde kendisiyle çelişiyormuş gibi görünebilir. Bu durum, tüm Endeksleyicilerin son blokları aynı anda alıp işlememesi ve isteklerinizin birden fazla Endeksleyiciye yönlendirilebilmesi nedeniyle daha da kötüleşir. -It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem. +Kullanıcıya tutarlı veri sunmak, istemci ve sunucunun ortak sorumluluğundadır ve beraber çalışmalarını gerektirir. Her problem için tek bir doğru program olmadığından, hedeflenen tutarlılık düzeyine göre farklı yaklaşımlar kullanılmalıdır. -Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas. +Dağıtık sistemlerin sonuçlarını değerlendirmek zor olabilir, ancak çözüm o kadar da karmaşık olmak zorunda değil! Bazı yaygın kullanım senaryolarında yolunuzu bulmanıza yardımcı olmak için API'ler ve kalıplar oluşturduk. 
Aşağıdaki örnekler bu kalıpları göstermektedir; ancak temel fikirleri gölgelememek adına, hata yönetimi ve iptal gibi üretim ortamında gerekli olan bazı ayrıntılar atlanmıştır. -## Polling for updated data +## Güncellenmiş veriler için yoklama yapma -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph, yanıtın yalnızca `$minBlock` değerine eşit veya daha büyük tek bir blok için verilmesini garanti eden `block: { number_gte: $minBlock }` API'sini sağlar. Eğer istek bir `graph-node` örneğine yapılırsa ve belirtilen minimum blok henüz senkronize edilmemişse, `graph-node` bir hata döndürür. Eğer `graph-node` minimum bloğu senkronize ettiyse, yanıtı en son blok için üretir. İstek bir Edge & Node Gateway üzerinden yapılırsa, Gateway henüz minimum bloku senkronize etmemiş olan Endeksleyicileri filtreler ve isteği Endeksleyicinin senkronize ettiği en son blok için yapar. -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: +Bir döngü içinde veri yoklaması yaparken zamanın asla geriye gitmemesini sağlamak için `number_gte` ifadesini kullanabiliriz. İşte bir örnek: ```javascript -/// Updates the protocol.paused variable to the latest -/// known value in a loop by fetching it using The Graph. +/// The Graph kullanarak protocol.paused değişkenini +/// döngü içerisinde en son bilinen değere günceller. async function updateProtocolPaused() { - // It's ok to start with minBlock at 0. 
The query will be served - // using the latest block available. Setting minBlock to 0 is the - // same as leaving out that argument. + // minBlock = 0 ile başlamak sorun değildir. Sorgu, + // mevcut en güncel blok kullanılarak yanıtlanacaktır. + // minBlock'u 0 olarak ayarlamak, bu argümanı hiç vermemekle aynıdır. let minBlock = 0 for (;;) { - // Schedule a promise that will be ready once - // the next Ethereum block will likely be available. + // Bir sonraki Ethereum blokunun muhtemelen hazır olacağı zamanda + // tetiklenecek bir promise planla. const nextBlock = new Promise((f) => { setTimeout(f, 14000) }) @@ -65,30 +65,30 @@ async function updateProtocolPaused() { const response = await graphql(query, variables) minBlock = response._meta.block.number - // TODO: Do something with the response data here instead of logging it. + // TODO: Burada loglamak yerine yanıt verisiyle bir şey yap. console.log(response.protocol.paused) - // Sleep to wait for the next block + // Bir sonraki bloku beklemek için uykuya geç await nextBlock } } ``` -## Fetching a set of related items +## İlişkili ögelerden oluşan bir küme getirme -Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time. +Bir diğer kullanım senaryosu ise büyük bir kümenin ya da daha genel olarak, birden fazla istek üzerinden ilişkili ögelerin getirilmesidir. Yoklama (polling) senaryosundan farklı olarak (orada hedeflenen tutarlılık zaman içinde ileri gitmekti), burada hedeflenen tutarlılık tek bir zaman noktasına aittir. -Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. +Burada, tüm sonuçlarımızı aynı bloka sabitlemek için `block: { hash: $blockHash }` argümanını kullanacağız. 
```javascript
-/// Gets a list of domain names from a single block using pagination
+/// Sayfalama kullanarak tek bir bloktan alan adları listesini alır
async function getDomainNames() {
-  // Set a cap on the maximum number of items to pull.
+  // Çekilecek en fazla öge sayısına bir sınır koy.
  let pages = 5
  const perPage = 1000
-  // The first query will get the first page of results and also get the block
-  // hash so that the remainder of the queries are consistent with the first.
+  // İlk sorgu, ilk sayfadaki sonuçları alacak ve aynı zamanda
+  // kalan sorguların ilk sorguyla tutarlı olabilmesi için blok hash'ini alacak.
  const listDomainsQuery = `
  query ListDomains($perPage: Int!) {
    domains(first: $perPage) {
@@ -107,9 +107,9 @@ async function getDomainNames() {
  let blockHash = data._meta.block.hash
  let query
-  // Continue fetching additional pages until either we run into the limit of
-  // 5 pages total (specified above) or we know we have reached the last page
-  // because the page has fewer entities than a full page.
+  // Ya toplamda 5 sayfalık sınıra ulaşana (yukarıda belirtildiği gibi)
+  // ya da sayfada tam sayfadan az varlık olduğu için son sayfaya
+  // ulaştığımızı anlayana kadar ek sayfaları getirmeye devam et.
  while (data.domains.length == perPage && --pages) {
    let lastID = data.domains[data.domains.length - 1].id
    query = `
@@ -122,7 +122,7 @@ async function getDomainNames() {
  data = await graphql(query, { perPage, lastID, blockHash })
-  // Accumulate domain names into the result
+  // Alan adlarını sonuca ekle
  for (domain of data.domains) {
    result.push(domain.name)
  }
@@ -131,4 +131,4 @@ async function getDomainNames() {
}
```
-Note that in case of a re-org, the client will need to retry from the first request to update the block hash to a non-uncle block.
+Bir yeniden düzenleme durumunda, istemcinin blok hash’ini bir "uncle" olmayan blokla güncellemek için ilk isteği baştan tekrar etmesi gerekeceğini unutmayın.
diff --git a/website/src/pages/tr/subgraphs/querying/from-an-application.mdx b/website/src/pages/tr/subgraphs/querying/from-an-application.mdx index 6f45ffd8a451..cd5c0717e86c 100644 --- a/website/src/pages/tr/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/tr/subgraphs/querying/from-an-application.mdx @@ -1,73 +1,74 @@ --- -title: Querying from an Application +title: Bir Uygulamadan Sorgu Yapma +sidebarTitle: Bir Uygulamadan Sorgulama --- -Learn how to query The Graph from your application. +Uygulamanızdan The Graph’e nasıl sorgu yapacağınızı öğrenin. -## Getting GraphQL Endpoints +## GraphQL Uç Noktalarını Alma -During the development process, you will receive a GraphQL API endpoint at two different stages: one for testing in Subgraph Studio, and another for making queries to The Graph Network in production. +Geliştirme süreci boyunca iki farklı aşamada birer GraphQL API'si uç noktası alırsınız: biri Subgraph Studio'da test etmek için, diğeri ise üretim ortamında The Graph Ağı'na sorgular göndermek içindir. -### Subgraph Studio Endpoint +### Subgraph Studio Uç Noktası -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +[Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/) ortamına Subgraph’inizi dağıttıktan sonra, aşağıdakine benzer bir uç nokta alırsınız: ``` https://api.studio.thegraph.com/query/// ``` -> This endpoint is intended for testing purposes **only** and is rate-limited. +> Bu uç nokta **yalnızca** test amaçlıdır ve istek sayısı sınırlandırılmıştır. 
-### The Graph Network Endpoint +### The Graph Ağı Uç Noktası -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +Subgraph’inizi ağa yayımladıktan sonra, aşağıdakine benzer bir uç nokta alırsınız: ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> Bu uç nokta, ağ üzerinde aktif kullanım için tasarlanmıştır. Çeşitli GraphQL istemci kütüphanelerini kullanarak Subgraph'e sorgu göndermenize ve uygulamanızı endekslenmiş verilerle beslemenize olanak tanır. -## Using Popular GraphQL Clients +## Popüler GraphQL İstemcilerini Kullanma -### Graph Client +### Graph İstemcisi -The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: +The Graph, kendisine ait bir GraphQL istemcisi olan `graph-client`'i sunar ve bu istemci aşağıdaki gibi benzersiz özellikleri destekler: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) -- Fully typed result +- Zincirler arası Subgraph İşleme: Tek bir sorguda birden fazla Subgraph'ten veri sorgulama +- [Otomatik Blok Takibi](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [Otomatik Sayfalama](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Tamamen tip tanımlı sonuç -> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native.
As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph. +> Not: `graph-client`, Apollo ve URQL gibi diğer popüler GraphQL istemcileriyle entegredir ve React, Angular, Node.js ile React Native gibi ortamlarla uyumludur. Bu nedenle, `graph-client` kullanmak, The Graph ile çalışırken size geliştirilmiş bir deneyim sunar. -### Fetch Data with Graph Client +### Graph Client ile Veri Çekme -Let's look at how to fetch data from a subgraph with `graph-client`: +`graph-client` ile bir Subgraph'ten nasıl veri çekebileceğimize bakalım: #### 1. Adım -Install The Graph Client CLI in your project: +Projenizde The Graph Client CLI'yi kurun: ```sh yarn add -D @graphprotocol/client-cli -# or, with NPM: +# veya, NPM ile: npm install --save-dev @graphprotocol/client-cli ``` #### 2. Adım -Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file): +Sorgunuzu bir `.graphql` dosyasında tanımlayın (veya doğrudan `.js` ya da `.ts` dosyanıza satır içi olarak ekleyin): ```graphql query ExampleQuery { - # this one is coming from compound-v2 + # burası compound-v2'den geliyor markets(first: 7) { borrowRate cash collateralFactor } - # this one is coming from uniswap-v2 + # burası uniswap-v2'den geliyor pair(id: "0x00004ee988665cdda9a1080d5792cecd16dc1220") { id token0 { @@ -86,7 +87,7 @@ query ExampleQuery { #### 3. Adım -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +Bir yapılandırma dosyası oluşturun (`.graphclientrc.yml` adında) ve The Graph tarafından sağlanan GraphQL uç noktalarına işaret edin. Örneğin: ```yaml # .graphclientrc.yml @@ -104,22 +105,22 @@ documents: - ./src/example-query.graphql ``` -#### Step 4 +#### 4. 
Adım -Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: +Tip tanımlı ve kullanıma hazır JavaScript kodu üretmek için aşağıdaki The Graph Client CLI komutunu çalıştırın: ```sh graphclient build ``` -#### Step 5 +#### 5. Adım -Update your `.ts` file to use the generated typed GraphQL documents: +Oluşturulan tip tanımlı GraphQL dokümanlarını kullanmak için `.ts` dosyanızı güncelleyin: ```tsx import React, { useEffect } from 'react' // ... -// we import types and typed-graphql document from the generated code (`..graphclient/`) +// oluşturulan koddan (`..graphclient/`) tipleri ve tip tanımlı graphql dokümanını içe aktarıyoruz import { ExampleQueryDocument, ExampleQueryQuery, execute } from '../.graphclient' function App() { @@ -152,27 +153,27 @@ function App() { export default App ``` -> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +> **Önemli Not:** `graph-client`, Apollo Client, URQL veya React Query gibi diğer GraphQL istemcileriyle tamamen entegredir; [resmi depoda örneklere ulaşabilirsiniz](https://github.com/graphprotocol/graph-client/tree/main/examples). Ancak farklı bir istemci kullanmayı tercih ederseniz, **The Graph ile sorgu yaparken temel özelliklerden olan Zincirler Arası Subgraph İşleme ve Otomatik Sayfalama** fonksiyonlarını **kullanamayacağınızı** unutmayın. ### Apollo Client -[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android.
+[Apollo Client](https://www.apollographql.com/docs/), önyüz ekosistemlerinde yaygın olarak kullanılan bir GraphQL istemcisidir. React, Angular, Vue, Ember, iOS ve Android gibi platformlarda kullanılabilir. -Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: +En ağır istemci olmasına rağmen, GraphQL üzerine gelişmiş bir kullanıcı arayüzü oluşturmak için birçok özelliğe sahiptir: -- Advanced error handling +- Gelişmiş hata yönetimi - Sayfalandırma -- Data prefetching -- Optimistic UI -- Local state management +- Verileri önceden getirme (prefetching) +- Optimist kullanıcı arayüzü (UI) +- Yerel durum yönetimi -### Fetch Data with Apollo Client +### Apollo Client ile Veri Getirme -Let's look at how to fetch data from a subgraph with Apollo client: +Apollo client ile bir Subgraph'ten nasıl veri çekebileceğimize bakalım: #### 1. Adım -Install `@apollo/client` and `graphql`: +`@apollo/client` ve `graphql`'i yükleyin: ```sh npm install @apollo/client graphql @@ -180,7 +181,7 @@ npm install @apollo/client graphql #### 2. Adım -Query the API with the following code: +Aşağıdaki kod ile API'ye sorgu gönderin: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -215,7 +216,7 @@ client #### 3. 
Adım -To use variables, you can pass in a `variables` argument to the query: +Değişken kullanmak için, sorguya bir `variables` argümanı geçebilirsiniz: ```javascript const tokensQuery = ` @@ -246,22 +247,22 @@ client }) ``` -### URQL Overview +### URQL'e Genel Bakış -[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: +[URQL](https://formidable.com/open-source/urql/), Node.js, React/Preact, Vue ve Svelte ortamlarında kullanılabilir ve bazı gelişmiş özellikler sunar: -- Flexible cache system -- Extensible design (easing adding new capabilities on top of it) -- Lightweight bundle (~5x lighter than Apollo Client) -- Support for file uploads and offline mode +- Esnek önbellek sistemi +- Genişletilebilir tasarım (üzerine yeni özellikler eklemeyi kolaylaştırır) +- Hafif paket yapısı (Apollo Client'tan yaklaşık 5 kat daha hafif) +- Dosya yükleme ve çevrimdışı mod desteği -### Fetch data with URQL +### URQL ile veri getirme -Let's look at how to fetch data from a subgraph with URQL: +URQL ile bir Subgraph'ten nasıl veri çekebileceğimize bakalım: #### 1. Adım -Install `urql` and `graphql`: +`urql` ve `graphql`'i yükleyin: ```sh npm install urql graphql @@ -269,7 +270,7 @@ npm install urql graphql #### 2. Adım -Query the API with the following code: +Aşağıdaki kod ile API'ye sorgu gönderin: ```javascript import { createClient } from 'urql' diff --git a/website/src/pages/tr/subgraphs/querying/graph-client/README.md b/website/src/pages/tr/subgraphs/querying/graph-client/README.md index 416cadc13c6f..4f73fbb4c5c7 100644 --- a/website/src/pages/tr/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/tr/subgraphs/querying/graph-client/README.md @@ -14,15 +14,15 @@ This library is intended to simplify the network aspect of data consumption for > The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client!
-| Status | Feature | Notes | -| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | +| Durum | Feature | Notlar | +| :---: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | | ✅ | Multiple indexers | based on fetch strategies | | ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | | ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | | ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | | ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | | ✅ | Integration with `@apollo/client` | | @@ -32,7 +32,7 @@ This library is intended to simplify the network aspect of data consumption for > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## Buradan Başlayın You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -42,13 +42,13 @@ To get started, make sure to install 
[The Graph Client CLI] in your project: ```sh yarn add -D @graphprotocol/client-cli -# or, with NPM: +# veya, NPM ile: npm install --save-dev @graphprotocol/client-cli ``` > The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app! -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +Bir yapılandırma dosyası oluşturun (`.graphclientrc.yml` adında) ve The Graph tarafından sağlanan GraphQL uç noktalarına işaret edin. Örneğin: ```yml # .graphclientrc.yml @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### Örnekler You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/tr/subgraphs/querying/graph-client/architecture.md b/website/src/pages/tr/subgraphs/querying/graph-client/architecture.md index 99098cd77b95..7cc463e678c9 100644 --- a/website/src/pages/tr/subgraphs/querying/graph-client/architecture.md +++ b/website/src/pages/tr/subgraphs/querying/graph-client/architecture.md @@ -1,13 +1,13 @@ -# The Graph Client Architecture +# The Graph İstemcisi Mimarisi -To address the need to support a distributed network, we plan to take several actions to ensure The Graph client provides everything app needs: +Dağıtık bir ağı destekleme ihtiyacını karşılamak için, The Graph istemcisinin uygulamaların ihtiyaç duyduğu her şeyi temin etmesini sağlayacak çeşitli adımlar atmayı planlıyoruz: -1. Compose multiple Subgraphs (on the client-side) -2. Fallback to multiple indexers/sources/hosted services -3. Automatic/Manual source picking strategy -4. Agnostic core, with the ability to run integrate with any GraphQL client +1. Birden fazla Subgraph'i (istemci tarafında) birleştirme +2. Birden fazla endeksleyici/kaynak/sağlayıcı hizmetine geri dönüş (fallback) mekanizması +3. Otomatik/Manuel kaynak seçme stratejisi +4. 
Herhangi bir GraphQL istemcisiyle entegre olabilen, bağımsız (agnostik) çekirdek yapısı -## Standalone mode +## Bağımsız (standalone) mod ```mermaid graph LR; @@ -17,7 +17,7 @@ graph LR; op-->sB[Subgraph B]; ``` -## With any GraphQL client +## Herhangi bir GraphQL istemcisiyle ```mermaid graph LR; @@ -28,11 +28,11 @@ graph LR; op-->sB[Subgraph B]; ``` -## Subgraph Composition +## Subgraph Bileşimi (Subgraph Composition) -To allow simple and efficient client-side composition, we'll use [`graphql-tools`](https://graphql-tools.com) to create a remote schema / Executor, then can be hooked into the GraphQL client. +Basit ve verimli istemci tarafı bileşimini mümkün kılmak için [`graphql-tools`](https://graphql-tools.com) kullanarak uzak bir şema / Executor oluşturacağız ve bu yapı daha sonra GraphQL istemcisine entegre edilebilecek. -API could be either raw `graphql-tools` transformers, or using [GraphQL-Mesh declarative API](https://graphql-mesh.com/docs/transforms/transforms-introduction) for composing the schema. +API, şema bileşimi için ya doğrudan `graphql-tools` dönüştürücüleri (transformers) ile ya da [GraphQL-Mesh declarative API](https://graphql-mesh.com/docs/transforms/transforms-introduction) ile kullanılabilir. ```mermaid graph LR; @@ -42,9 +42,9 @@ graph LR; m-->s3[Subgraph C GraphQL schema]; ``` -## Subgraph Execution Strategies +## Subgraph Yürütme Stratejileri -Within every Subgraph defined as source, there will be a way to define it's source(s) indexer and the querying strategy, here are a few options: +Kaynak olarak tanımlanan her bir Subgraph içerisinde, o Subgraph'in bağlı olduğu kaynak(lar)ın endeksleyicisini ve sorgulama stratejisini tanımlamak mümkündür. İşte bazı seçenekler: ```mermaid graph LR; @@ -85,9 +85,9 @@ graph LR; end ``` -> We can ship a several built-in strategies, along with a simple interfaces to allow developers to write their own. 
+> Geliştiricilerin kendi stratejilerini yazabilmeleri için basit arayüzlerle birlikte, birkaç hazır strateji sunabiliriz. -To take the concept of strategies to the extreme, we can even build a magical layer that does subscription-as-query, with any hook, and provide a smooth DX for dapps: +Strateji kavramını en uç noktaya taşımak adına, herhangi bir hook ile çalışan ve abonelik modeliyle sorgu (subscription-as-query) yapan sihirli bir katman bile oluşturabiliriz. Bu sayede dapp’ler için akıcı bir geliştirici deneyimi (DX) sunabiliriz: ```mermaid graph LR; @@ -99,5 +99,4 @@ graph LR; sc[Smart Contract]-->|change event|op; ``` -With this mechanism, developers can write and execute GraphQL `subscription`, but under the hood we'll execute a GraphQL `query` to The Graph indexers, and allow to connect any external hook/probe for re-running the operation. -This way, we can watch for changes on the Smart Contract itself, and the GraphQL client will fill the gap on the need to real-time changes from The Graph. +Bu mekanizma sayesinde geliştiriciler GraphQL `subscription`ı yazıp çalıştırabilir, ancak arka planda The Graph endeksleyicilerine bir GraphQL `query`si gönderilir ve işlemin yeniden çalıştırılmasını sağlayacak harici bir hook/probe bağlantısına izin verilir. Bu sayede, doğrudan Akıllı Sözleşme üzerindeki değişiklikler izlenebilir ve GraphQL istemcisi, The Graph'ten gerçek zamanlı değişiklik ihtiyacını karşılayacak şekilde devreye girer. diff --git a/website/src/pages/tr/subgraphs/querying/graph-client/live.md b/website/src/pages/tr/subgraphs/querying/graph-client/live.md index e6f726cb4352..ea96631cd7ad 100644 --- a/website/src/pages/tr/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/tr/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. 
-## Getting Started +## Buradan Başlayın Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/tr/subgraphs/querying/graphql-api.mdx b/website/src/pages/tr/subgraphs/querying/graphql-api.mdx index 265c755683b9..f843969ea758 100644 --- a/website/src/pages/tr/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/tr/subgraphs/querying/graphql-api.mdx @@ -1,24 +1,24 @@ --- -title: GraphQL API +title: GraphQL API'ı --- -Learn about the GraphQL Query API used in The Graph. +The Graph'te kullanılan GraphQL Sorgu API'ı hakkında bilgi edinin. -## What is GraphQL? +## GraphQL Nedir? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/), API'lar için bir sorgu dili ve bu sorguları mevcut verileriniz üzerinde çalıştıran bir çalışma zamanıdır (runtime). The Graph, Subgraph'leri sorgulamak için GraphQL kullanır. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +GraphQL’in daha kapsamlı rolünü anlamak için [geliştirme](/subgraphs/developing/introduction/) ve [bir Subgraph oluşturma](/developing/creating-a-subgraph/) bölümlerini inceleyin. -## Queries with GraphQL +## GraphQL Sorguları -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +Subgraph şemanızda, `Entities` (Varlıklar) olarak adlandırılan türleri tanımlarsınız. Her bir `Entity` (Varlık) türü için, üst düzey `Query` türü üzerinde `entity` ve `entities` alanları otomatik olarak oluşturulur. -> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
+> Not: The Graph'te, `graphql` sorgularının başına `query` (sorgu) ifadesinin eklenmesi gerekmez. ### Örnekler -Query for a single `Token` entity defined in your schema: +Şemanızda tanımlı tek bir `Token` varlığı için sorgu: ```graphql { @@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> Note: When querying for a single entity, the `id` field is required, and it must be written as a string. +> Not: Tek bir varlık sorgulanırken `id` alanı zorunludur ve bir dize olarak yazılmalıdır. -Query all `Token` entities: +Tüm `Token` varlıklarını sorgulama: ```graphql { @@ -44,10 +44,10 @@ ### Sıralama -When querying a collection, you may: +Bir koleksiyon sorgularken şunları yapabilirsiniz: -- Use the `orderBy` parameter to sort by a specific attribute. -- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. +- Belirli bir özniteliğe göre sıralama yapmak için `orderBy` parametresini kullanın. +- Sıralama yönünü belirtmek için `orderDirection` kullanın; artan sıralama için `asc`, azalan sıralama için `desc`. #### Örnek @@ -62,9 +62,9 @@ When querying a collection, you may: #### İç içe varlık sıralaması için örnek -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +Graph Düğümü [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) sürümünden itibaren, varlıklar iç içe varlıklara göre sıralanabilir. -The following example shows tokens sorted by the name of their owner: +Aşağıdaki örnek, token'ların sahiplerinin adına göre nasıl sıralandığını gösterir: ```graphql { @@ -77,18 +77,18 @@ The following example shows tokens sorted by the name of their owner: } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. 
Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> Şu anda, `@entity` ve `@derivedFrom` alanlarında bir seviye derinliğindeki `String` veya `ID` türlerine göre sıralama yapabilirsiniz. Ne yazık ki, [bir seviye derinliğindeki varlıklarda arayüzlere göre sıralama](https://github.com/graphprotocol/graph-node/pull/4058), alanı bir dizi ya da iç içe bir varlık olan ögelere göre sıralama henüz desteklenmemektedir. ### Sayfalandırma -When querying a collection, it's best to: +Bir koleksiyon sorgularken aşağıdakileri uygulamanız önerilir: -- Use the `first` parameter to paginate from the beginning of the collection. - - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. -- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. -- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. +- Koleksiyonun başından itibaren sayfalama yapmak için `first` parametresini kullanın. + - Varsayılan sıralama düzeni, oluşturulma zamanına göre **değil**, `ID` alanına göre artan alfasayısal sıradadır. +- Varlıkları atlamak ve sayfalama yapmak için `skip` parametresini kullanın. Örneğin, `first:100` ilk 100 varlığı gösterir. `first:100, skip:100` ise sonraki 100 varlığı gösterir. +- Sorgularda `skip` değerlerini kullanmaktan kaçının, çünkü genellikle düşük performans gösterirler. Çok sayıda ögeyi getirmek için, yukarıdaki örnekte gösterildiği gibi bir özniteliğe göre varlıklar arasında sayfalama yapmak en iyi yaklaşımdır. 
-#### Example using `first` +#### `first` kullanımına örnek İlk 10 tokeni sorgulayın: @@ -101,11 +101,11 @@ When querying a collection, it's best to: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +Bir koleksiyonun ortasındaki varlık gruplarını sorgulamak için, `skip` parametresi `first` parametresiyle birlikte kullanılabilir. Bu, koleksiyonun başından itibaren belirli sayıda varlığı atlayarak sorgulama yapmanızı sağlar. -#### Example using `first` and `skip` +#### `first` ve `skip` kullanımına örnek -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +Koleksiyonun başından itibaren 10 öge atlayarak 10 `Token` varlığı sorgulama: ```graphql { @@ -116,9 +116,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example using `first` and `id_ge` +#### `first` ve `id_ge` kullanımına örnek -If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: +Bir istemcinin çok sayıda varlık alması gerektiğinde, sorguları bir öznitelik temelinde oluşturmak ve bu özniteliğe göre filtrelemek daha yüksek performans sağlar. Örneğin, bir istemci aşağıdaki sorguyu kullanarak çok sayıda token alabilir: ```graphql query manyTokens($lastID: String) { @@ -129,16 +129,16 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. 
+İlk seferde `lastID = ""` ile sorgu gönderilecek ve sonraki istekler için `lastID` değeri önceki istekteki son varlığın `id` özelliğine göre ayarlanacaktır. Bu yaklaşım, artan `skip` değerlerini kullanmaktan önemli ölçüde daha iyi performans gösterir. ### Filtreleme -- You can use the `where` parameter in your queries to filter for different properties. -- You can filter on multiple values within the `where` parameter. +- `where` parametresini sorgularınızda farklı özellikleri filtrelemek için kullanabilirsiniz. +- Birden çok değeri `where` parametresi içinde filtreleyebilirsiniz. -#### Example using `where` +#### `where` kullanımına örnek -Query challenges with `failed` outcome: +`outcome` değeri `failed` olan challenge'ları sorgulama: ```graphql { @@ -152,7 +152,7 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +`_gt`, `_lte` gibi son ekleri değer karşılaştırması için kullanabilirsiniz: #### Aralık filtreleme için örnek @@ -168,9 +168,9 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Blok filtreleme için örnek -You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. +Belirtilen bir blokta veya sonrasında güncellenen varlıkları, `_change_block(number_gte: Int)` ile de filtreleyebilirsiniz. -Örneğin bu, son yoklamanızdan bu yana yalnızca değişen varlıkları almak istiyorsanız yararlı olabilir. Ya da alternatif olarak, subgraph'ınızda varlıkların nasıl değiştiğini araştırmak veya hata ayıklamak için yararlı olabilir (bir blok filtresiyle birleştirilirse, yalnızca belirli bir blokta değişen varlıkları izole edebilirsiniz). +Bu, yalnızca değişen varlıkları getirmek istiyorsanız kullanışlı olabilir. Örneğin, son yoklamanızdan bu yana değişen varlıklar için. 
Ya da, subgraph'inizde varlıkların nasıl değiştiğini araştırmak veya hata ayıklamak için faydalı olabilir (bir blok filtresiyle beraber kullanıldığında, yalnızca belirli bir blokta değişen varlıkları izole edebilirsiniz). ```graphql { @@ -184,7 +184,7 @@ You can also filter entities that were updated in or after a specified block wit #### İç içe varlık filtreleme örneği -Filtering on the basis of nested entities is possible in the fields with the `_` suffix. +İç içe varlıklara göre filtreleme, `_` son ekine sahip alanlarda mümkündür. Bu, yalnızca alt düzey varlıkları sağlanan koşulları karşılayan varlıkları getirmek istiyorsanız yararlı olabilir. @@ -202,11 +202,11 @@ Bu, yalnızca alt düzey varlıkları sağlanan koşulları karşılayan varlık #### Mantıksal operatörler -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria. +Graph Düğümü [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) sürümünden itibaren, birden fazla kritere dayalı sonuçları filtrelemek için aynı `where` argümanı içinde birden çok parametreyi `and` veya `or` operatörleriyle gruplayabilirsiniz. -##### `AND` Operator +##### `AND` Operatörü -The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +Aşağıdaki örnek, `outcome` değeri `succeeded` olan ve `number` alanı `100` veya daha büyük olan challenge'ları filtreler. ```graphql { @@ -220,7 +220,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num } ``` -> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. +> **Sözdizimsel şeker:** Yukarıdaki sorguyu, virgülle ayrılmış bir alt ifade kullanıp `and` operatörünü kaldırarak sadeleştirebilirsiniz. 
> > ```graphql > { @@ -234,9 +234,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num > } > ``` -##### `OR` Operator +##### `OR` Operatörü -The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +Aşağıdaki örnek, `outcome` değeri `succeeded` olan veya `number` alanı `100` veya daha büyük olan challenge'ları filtreler. ```graphql { @@ -250,7 +250,7 @@ The following example filters for challenges with `outcome` `succeeded` or `numb } ``` -> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries. +> **Not:** Sorguları oluştururken `or` operatörünün performans üzerindeki etkisini dikkate almak önemlidir. `or`, arama sonuçlarını genişletmek için faydalı bir araç olsa da, ciddi performans maliyetlerine yol açabilir. `or` operatörünün başlıca sorunlarından biri, sorguların yavaşlamasına neden olmasıdır. Bunun nedeni, `or` kullanıldığında veritabanının birden fazla endeksi taramak zorunda kalmasıdır; bu da zaman alıcı bir işlemdir. Bu tür sorunlardan kaçınmak için geliştiricilere mümkün olduğunca `or` yerine `and` operatörlerini kullanmaları önerilir. Bu sayede daha hassas filtreleme yapılabilir ve sorgular daha hızlı ve doğru şekilde çalışabilir. #### Tüm Filtreler @@ -279,9 +279,9 @@ _not_ends_with _not_ends_with_nocase ``` -> Please note that some suffixes are only supported for specific types. 
For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
+> Lütfen bazı son eklerin yalnızca belirli türler için desteklendiğini unutmayın. Örneğin, `Boolean` türü yalnızca `_not`, `_in` ve `_not_in` son eklerini destekler; ancak `_` son eki yalnızca nesne ve arayüz türleri için kullanılabilir.

-In addition, the following global filters are available as part of `where` argument:
+Ayrıca, `where` argümanının bir parçası olarak aşağıdaki genel filtreler kullanılabilir:

```graphql
_change_block(number_gte: Int)
@@ -289,11 +289,11 @@ _change_block(number_gte: Int)

### Zaman yolculuğu sorguları

-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Varlıklarınızın durumunu en son blok için (varsayılan) ya da geçmişteki herhangi bir blok için sorgulayabilirsiniz. Sorgunun hangi blokta yapılacağını belirtmek için, üst seviye sorgu alanlarında `block` argümanı eklenerek ilgili blok numarası veya blok hash’i kullanılabilir.

-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+Böyle bir sorgunun sonucu zamanla değişmez; yani, geçmişteki belirli bir blokta yapılan bir sorgu, ne zaman çalıştırılırsa çalıştırılsın aynı sonucu döndürür.
Ancak, zincirin en ucuna (head) çok yakın bir blokta sorgulama yapılırsa, bu blok ana zincirde **yer almıyorsa** ve zincir yeniden düzenlenirse sonuç değişebilir. Bir blok artık kesin (final) olarak kabul edilebildiğinde, o blok için yapılan sorgunun sonucu değişmeyecektir. -> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> **Not:** Mevcut uygulama, bu güvenceleri ihlal edebilecek bazı sınırlamalara hâlâ tabidir. Uygulama, verilen bir blok hash’inin ana zincirde yer almadığını her zaman tespit edemez veya henüz kesin (final) olarak kabul edilmeyen bir blok için yapılan bir blok hash sorgusunun, sorgu ile eşzamanlı olarak gerçekleşen bir zincir yeniden düzenlemesinden etkilenip etkilenmeyeceğini öngöremez. Ancak bu sınırlamalar, blok kesinleşmiş ve ana zincirde yer aldığı biliniyorsa, blok hash ile yapılan sorguların sonuçlarını etkilemez. Bu sınırlamalar hakkında ayrıntılı bilgi için [bu GitHub issue'su](https://github.com/graphprotocol/graph-node/issues/1405) incelenebilir. #### Örnek @@ -309,7 +309,7 @@ The result of such a query will not change over time, i.e., querying at a certai } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +Bu sorgu, blok numarası 8.000.000 işlendiği andan hemen sonraki halleriyle `Challenge` varlıklarını ve bunlara bağlı `Application` varlıklarını döndürecektir. 
#### Örnek @@ -325,26 +325,26 @@ This query will return `Challenge` entities, and their associated `Application` } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. +Bu sorgu, verilen hash’e sahip blok işlendiği andan hemen sonraki halleriyle `Challenge` varlıklarını ve bunlara bağlı `Application` varlıklarını döndürecektir. ### Tam Metin Arama Sorguları -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Tam metin arama sorgu alanları, Subgraph şemasına eklenebilen ve özelleştirilebilen gelişmiş bir metin arama API’si sağlar. Subgraph'inize tam metin arama eklemek için [Tam Metin Arama Alanlarını Tanımlama](/developing/creating-a-subgraph/#defining-fulltext-search-fields) bölümüne bakın. -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +Tam metin arama sorgularında, arama terimlerinin girildiği zorunlu bir `text` alanı bulunur. Bu `text` arama alanında kullanılabilecek çeşitli özel tam metin operatörleri mevcuttur. Tam metin arama operatörleri: -| Symbol | Operator | Tanım | +| Sembol | Operatör | Tanım | | --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) 
|
+| `&` | `And` | Sağlanan tüm arama terimlerini içeren varlıkları filtrelemek için birden fazla arama terimini birleştirir |
+| &#x7c; | `Or` | Or (veya) operatörüyle ayrılmış birden fazla arama terimi içeren sorgular, sağlanan terimlerden herhangi biriyle eşleşen tüm varlıkları döndürür |
+| `<->` | `Follow by` | İki kelime arasındaki mesafeyi belirtmeyi sağlar. |
+| `:*` | `Prefix` | Ön eki eşleşen kelimeleri bulmak için ön ek (prefix) arama terimini kullanın (en az 2 karakter gereklidir). |

#### Örnekler

-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+`or` operatörü kullanılarak yapılan bu sorgu, tam metin alanlarında "anarchism" veya "crumpet" kelimelerinin varyasyonlarını içeren blog varlıklarını döndürecektir.

```graphql
{
@@ -357,7 +357,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```

-The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+`follow by` operatörü, tam metin belgelerinde belirli bir mesafe ile birbirini izleyen kelimeleri belirtir. Aşağıdaki sorgu, "decentralize" kelimesinin varyasyonları ardından belirli bir mesafede "philosophy" kelimesini içeren tüm blogları döndürecektir.

```graphql
{
@@ -385,25 +385,25 @@ Daha karmaşık filtreler oluşturmak için tam metin operatörlerini birleştir

### Validasyon

-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation).
Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more. +Graph Düğümü, [graphql-js referans uygulamasını](https://github.com/graphql/graphql-js/tree/main/src/validation) temel alan [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules)'yi kullanarak aldığı GraphQL sorgularının [spesifikasyon tabanlı](https://spec.graphql.org/October2021/#sec-Validation) doğrulamasını gerçekleştirir. Bir doğrulama kuralını geçemeyen sorgular standart bir hata ile sonuçlanır. Daha fazla bilgi için [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation)'i ziyaret edin. -## Schema +## Şema -The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +Veri kaynaklarınızın şeması, yani sorgulanabilir olan entity türleri, değerler ve ilişkiler, [GraphQL Arayüz Tanım Dili (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System) kullanılarak tanımlanır. -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL şemaları genellikle `queries`, `subscriptions` ve `mutations` için kök türleri tanımlar. The Graph yalnızca `queries` desteği sunar. Subgraph’iniz için kök `Query` türü, [Subgraph manifesto](/developing/creating-a-subgraph/#components-of-a-subgraph) dosyanıza dahil edilen GraphQL şemasından otomatik olarak oluşturulur. -> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. 
+> Not: API'miz mutation (mutasyon) işlemlerini desteklemez, çünkü geliştiricilerin kendi uygulamaları üzerinden doğrudan temel blokzincirine işlem göndermeleri beklenir.

### Varlıklar

-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+Şemanızda `@entity` yönergesiyle tanımlanan tüm GraphQL türleri varlık olarak kabul edilir ve bir `ID` alanına sahip olmalıdır.

-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+> **Not:** Şu anda, şemanızdaki tüm türlerin `@entity` yönergesine sahip olması gerekir. Gelecekte, `@entity` yönergesi olmayan türler değer nesnesi olarak değerlendirilecek, ancak bu henüz desteklenmemektedir.

### Subgraph Üst Verisi

-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+Tüm Subgraph'lerde, Subgraph metadatasına erişim sağlayan otomatik olarak oluşturulmuş bir `_Meta_` nesnesi bulunur. Buna aşağıdaki şekilde sorgu yapabilirsiniz:

```graphQL
{
@@ -419,14 +419,14 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```

-Eğer bir blok belirtilirse, üst veri o blokla ilgilidir; belirtilmezse en son dizinlenen blok dikkate alınır. Eğer belirtilirse, blok subgraph başlangıç bloğundan sonra olmalıdır ve en son indekslenen bloğa eşit veya daha küçük olmalıdır.
+Bir blok belirtilirse, metadata o bloğa ait durumu yansıtır; belirtilmezse en son endekslenmiş blok kullanılır. Belirtilen blok, Subgraph'in başlangıç bloğundan sonra ve en son endekslenmiş bloğa eşit ya da ondan küçük olmalıdır.

-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+`deployment`, `subgraph.yaml` dosyasının IPFS CID’sine karşılık gelen benzersiz bir kimliktir.
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
+`block`, `_meta`'ya iletilen herhangi bir blok kısıtlamasını dikkate alarak, en son blok hakkında bilgi sağlar:

-- hash: the hash of the block
-- number: the block number
-- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks)
+- hash: bloğun hash değeri
+- number: blok numarası
+- timestamp: eğer mevcutsa, bloğun zaman damgası (bu şu anda yalnızca EVM ağlarını endeksleyen Subgraph'ler için kullanılabilir)

-`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block
+`hasIndexingErrors`, bir Subgraph'in geçmişteki bir blokta endeksleme hatalarıyla karşılaşıp karşılaşmadığını belirten bir boolean değerdir

diff --git a/website/src/pages/tr/subgraphs/querying/introduction.mdx b/website/src/pages/tr/subgraphs/querying/introduction.mdx
index 0994f3f0cb22..7429dea4c609 100644
--- a/website/src/pages/tr/subgraphs/querying/introduction.mdx
+++ b/website/src/pages/tr/subgraphs/querying/introduction.mdx
@@ -1,32 +1,32 @@
---
-title: Querying The Graph
+title: The Graph'e Sorgu Gönderme
sidebarTitle: Giriş
---

-To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer).
+Hemen sorgulamaya başlamak için [Graph Gezgini](https://thegraph.com/explorer)'ni ziyaret edin.

## Genel Bakış

-When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+Bir Subgraph The Graph Ağı'nda yayımlandığında, Graph Gezgini üzerinde ilgili Subgraph'in detay sayfasını ziyaret edebilir ve "Query" sekmesini kullanarak her bir Subgraph için dağıtılmış GraphQL API'sini keşfedebilirsiniz.
## Ayrıntılar

-Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner.
+The Graph Ağı'na yayımlanan her Subgraph'in, doğrudan sorgular yapabilmek için Graph Gezgini'nde özel bir sorgu URL'si vardır. Bu URL'yi, Subgraph detay sayfasına gidip sağ üst köşedeki "Query" butonuna tıklayarak bulabilirsiniz.

-![Query Subgraph Button](/img/query-button-screenshot.png)
+![Subgraph Sorgulama Butonu](/img/query-button-screenshot.png)

-![Query Subgraph URL](/img/query-url-screenshot.png)
+![Subgraph Sorgu URL'si](/img/query-url-screenshot.png)

-You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/).
+Bu sorgu URL'sinin benzersiz bir API anahtarı kullanması gerektiğini fark edeceksiniz. API anahtarlarınızı [Subgraph Studio](https://thegraph.com/studio) içindeki "API Anahtarları" bölümünden oluşturabilir ve yönetebilirsiniz. Subgraph Studio'nun nasıl kullanılacağı hakkında daha fazla bilgiye [buradan](/deploying/subgraph-studio/) ulaşabilirsiniz.

-Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/).
+Subgraph Studio kullanıcıları, ayda 100.000 sorgu hakkı veren Ücretsiz Plan ile başlar. Ek sorgular, kullanım bazlı fiyatlandırma sunan ve kredi kartı ya da Arbitrum üzerinde GRT ile ödenebilen Growth Plan kapsamında sunulur. Faturalandırma hakkında daha fazla bilgiye [buradan](/subgraphs/billing/) ulaşabilirsiniz.
-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities.
+> Subgraph varlıklarını nasıl sorgulayacağınıza dair tam referans için lütfen [Query API](/subgraphs/querying/graphql-api/) sayfasına bakın.
>
-> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead.
+> Not: Graph Gezgini URL’sine yapılan GET isteğinde 405 hatasıyla karşılaşırsanız, lütfen bunun yerine POST isteği kullanın.

### Ek Kaynaklar

-- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/).
-- To query from an application, click [here](/subgraphs/querying/from-an-application/).
-- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main).
+- [GraphQL sorgulamada en iyi uygulamaları](/subgraphs/querying/best-practices/) kullanın.
+- Bir uygulamadan sorgu yapmak için [buraya](/subgraphs/querying/from-an-application/) tıklayın.
+- [Sorgulama örneklerini](https://github.com/graphprotocol/query-examples/tree/main) görüntüleyin.

diff --git a/website/src/pages/tr/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/tr/subgraphs/querying/managing-api-keys.mdx
index af2cde13b073..13ec1e900544 100644
--- a/website/src/pages/tr/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/tr/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,34 @@
---
-title: Managing API keys
+title: API anahtarlarını yönetme
---

## Genel Bakış

-API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+Subgraph'leri sorgulamak için API anahtarları gereklidir. Bu anahtarlar, uygulama servisleri arasındaki bağlantıların geçerli ve yetkili olduğunu garanti eder; buna, uygulamayı kullanan son kullanıcı ve cihazın doğrulaması da dahildir.
-### Create and Manage API Keys
+### API Anahtarı Oluşturma ve Yönetme

-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs.
+[Subgraph Studio](https://thegraph.com/studio/) adresine gidin ve spesifik Subgraph'ler için API anahtarlarınızı oluşturup yönetmek üzere **API Anahtarları** sekmesine tıklayın.

-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+"API anahtarları" tablosu, mevcut API anahtarlarını listeler ve bunları yönetmenize veya silmenize olanak tanır. Her anahtarın durumunu, geçerli dönemdeki maliyetini, geçerli dönemdeki harcama limitini ve toplam sorgu sayısını görebilirsiniz.

-You can click the "three dots" menu to the right of a given API key to:
+Belirli bir API anahtarının sağındaki "üç nokta" menüsüne tıklayarak şunları yapabilirsiniz:

-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+- API anahtarını yeniden adlandır
+- API anahtarını yeniden oluştur
+- API anahtarını sil
+- Harcama limitini yönet: Bu, belirli bir API anahtarı için isteğe bağlı aylık harcama limitidir ve USD cinsindendir. Bu limit, her faturalandırma dönemi (takvim ayı) için geçerlidir.

-### API Key Details
+### API Anahtarı Detayları

-You can click on an individual API key to view the Details page:
+Bir API anahtarının üzerine tıklayarak Detaylar sayfasını görüntüleyebilirsiniz:

-1. Under the **Overview** section, you can:
-   - Edit your key name
-   - Regenerate API keys
-   - View the current usage of the API key with stats:
-     - Number of queries
-     - Amount of GRT spent
-2.
Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key +1. **Genel Bakış** bölümünde şunları yapabilirsiniz: + - Anahtar adınızı düzenleme + - API anahtarlarını yeniden oluşturma + - API anahtarının mevcut kullanımını istatistikleri ile birlikte görüntüleme: + - Sorgu sayısı + - Harcanan GRT miktarı +2. **Güvenlik** bölümünde, sahip olmak istediğiniz kontrol miktarına bağlı olarak güvenlik ayarlarını etkinleştirebilirsiniz. Spesifik olarak şunları yapabilirsiniz: + - API anahtarınızı kullanmaya yetkili alan adlarını görüntüleme ve yönetme + - API anahtarınızla sorgulanabilecek Subgraph'ler atama diff --git a/website/src/pages/tr/subgraphs/querying/python.mdx b/website/src/pages/tr/subgraphs/querying/python.mdx index dc82e0010623..fa612b11594e 100644 --- a/website/src/pages/tr/subgraphs/querying/python.mdx +++ b/website/src/pages/tr/subgraphs/querying/python.mdx @@ -1,9 +1,9 @@ --- -title: Query The Graph with Python and Subgrounds +title: Python ve Subgrounds ile The Graph'i Sorgulama sidebarTitle: Python (Subgrounds) --- -Subgrounds, [Playgrounds](https://playgrounds.network/) tarafından oluşturulmuş, subgraph sorgulamak için kullanılan sezgisel bir Python kütüphanesidir. Bu kütüphane, subgraph verilerini doğrudan bir Python veri ortamına bağlamanıza olanak tanır ve [pandas](https://pandas.pydata.org/) gibi kütüphaneleri kullanarak veri analizi yapmanıza imkan sağlar! +Subgrounds, [Playgrounds](https://playgrounds.network/) tarafından geliştirilen ve Subgraph'leri sorgulamak için kullanılan sezgisel bir Python kütüphanesidir. Bu kütüphane sayesinde Subgraph verilerini doğrudan bir Python veri ortamına bağlayabilir, [pandas](https://pandas.pydata.org/) gibi kütüphaneleri kullanarak veri analizi gerçekleştirebilirsiniz! 
Subgrounds, GraphQL sorguları oluşturmak için sayfalandırma gibi sıkıcı iş akışlarını otomatikleştiren ve kontrollü şema dönüşümleri aracılığıyla ileri düzey kullanıcıları güçlendiren basit bir Pythonic API sunar. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Kurulum tamamlandıktan sonra, aşağıdaki sorgu ile subgrounds'ı test edebilirsiniz. Aşağıdaki örnek, Aave v2 protokolü için bir subgraph çeker ve TVL'ye (Toplam Kilitli Varlık) göre sıralanan en üst 5 pazarı sorgular, adlarını ve TVL'lerini (USD cinsinden) seçer ve verileri bir pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) olarak döndürür. +Kurulum tamamlandıktan sonra, aşağıdaki sorgu ile Subgrounds'u test edebilirsiniz. Bu örnek, Aave v2 protokolüne ait bir Subgraph'i kullanarak TVL (Kilitlenen Toplam Değer) değerine göre sıralanmış ilk beş market'i sorgular; her bir market'in adını ve TVL değerini (USD cinsinden) seçer ve veriyi bir pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) olarak döndürür. ```python from subgrounds import Subgrounds sg = Subgrounds() -# Subgraph'ı yükleme +# Subgraph'i yükleme aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") @@ -34,7 +34,7 @@ latest_markets = aave_v2.Query.markets( orderDirection='desc', first=5, ) -# Sorguyu bir veri çerçevesine döndürme +# Sorguyu bir DataFrame olarak döndürme sg.query_df([ latest_markets.name, latest_markets.totalValueLockedUSD, @@ -54,4 +54,4 @@ Subgrounds'un keşfedilecek geniş bir özellik seti bulunduğundan, işe bazı - [Eşzamanlı Sorgular](https://docs.playgrounds.network/subgrounds/getting_started/async/) - Sorgularınızı paralelleştirerek nasıl geliştireceğinizi öğrenin. 
- [Veriyi CSV dosyalarına aktarma](https://docs.playgrounds.network/subgrounds/faq/exporting/)
-  - A quick article on how to seamlessly save your data as CSVs for further analysis.
+  - Verilerinizi daha ileri analizler için sorunsuz bir şekilde CSV formatında nasıl kaydedeceğinizi anlatan kısa bir makale.

diff --git a/website/src/pages/tr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/tr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 103e470e14da..60146e950eae 100644
--- a/website/src/pages/tr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/tr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -1,27 +1,27 @@
---
-title: Subgraph ID vs Deployment ID
+title: Subgraph Kimliği ve Dağıtım Kimliği Karşılaştırması
---

-A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID.
+Her bir Subgraph, bir Subgraph Kimliği (Subgraph ID) ile tanımlanır ve bu Subgraph'in her bir sürümü, bir Dağıtım Kimliği (Deployment ID) ile tanımlanır.

-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph.
+Bir Subgraph sorgulanırken her iki kimlik de kullanılabilir, ancak genellikle spesifik bir Subgraph sürümünü tam olarak tanımlayabildiği için Dağıtım Kimliği'nin kullanılması önerilir.

-Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png)
+İki kimlik arasındaki bazı temel farklar şunlardır: ![](/img/subgraph-id-vs-deployment-id.png)

-## Deployment ID
+## Dağıtım Kimliği

-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`.
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+Dağıtım Kimliği (Deployment ID), derlenmiş manifesto dosyasının IPFS hash’idir ve bilgisayardaki göreli URL’ler yerine IPFS üzerindeki diğer dosyalara referans verir. Örneğin, derlenmiş manifesto dosyasına şu bağlantı üzerinden erişilebilir: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. Dağıtım Kimliği’ni değiştirmek için, manifesto dosyasında güncelleme yapmak yeterlidir; örneğin, [Subgraph manifesto dokümantasyonu](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api)'nda açıklandığı gibi "description" alanını değiştirerek bu sağlanabilir.

-When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published.
+Bir Subgraph'in Dağıtım Kimliği kullanılarak sorgu yapıldığında, sorgulanacak Subgraph sürümü açıkça belirtilmiş olur. Belirli bir Subgraph sürümünü sorgulamak için Dağıtım Kimliği kullanmak, sorgulanan sürüm üzerinde tam kontrol sağladığı için daha gelişmiş ve sağlam bir yapı sunar. Ancak, bu yaklaşım her yeni Subgraph sürümü yayımlandığında sorgu kodunun manuel olarak güncellenmesini gerektirir.
-Example endpoint that uses Deployment ID: +Dağıtım Kimliği kullanan örnek uç nokta: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB` -## Subgraph ID +## Subgraph Kimliği -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +Subgraph Kimliği (Subgraph ID), bir Subgraph için benzersiz bir tanımlayıcıdır ve Subgraph'in tüm sürümleri boyunca sabit kalır. En güncel Subgraph sürümünü sorgulamak için Subgraph Kimliği'nin kullanılması önerilir, ancak bununla ilgili bazı dikkat edilmesi gereken noktalar vardır. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Subgraph Kimliği kullanılarak yapılan sorguların, yeni sürümün senkronize olması için zamana ihtiyaç duyması nedeniyle eski bir Subgraph sürümünden yanıt alan sorgulara neden olabileceğini unutmayın. Ayrıca, yeni sürümler şemada uyumsuz değişikliklere yol açabilir. 
-Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` +Subgraph Kimliği kullanan örnek uç nokta: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/tr/subgraphs/quick-start.mdx b/website/src/pages/tr/subgraphs/quick-start.mdx index 5841882242c5..f0687717220f 100644 --- a/website/src/pages/tr/subgraphs/quick-start.mdx +++ b/website/src/pages/tr/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Hızlı Başlangıç --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio [Subgraph Studio](https://thegraph.com/studio/)'ya gidin ve cüzdanınızı bağlayın. -Subgraph Studio, subgraph oluşturmanıza, yönetmenize, yayına almanıza ve yayımlamanıza, ayrıca API anahtarlarını oluşturmanıza ve yönetmenize olanak tanır. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -"Subgraph Oluştur" düğmesine tıklayın. Subgraph'in adını başlık formunda vermeniz önerilir: "Subgraph Adı Ağ Adı". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Graph CLI'yi yükleyin @@ -37,13 +37,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -### 3. Subgraph'inizi başlatın +### 3. 
Initialize your Subgraph -> Size ait spesifik subgraph'le ilgili komutları [Subgraph Studio](https://thegraph.com/studio/)'daki subgraph sayfasında bulabilirsiniz. +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -Aşağıdaki komut, subgraph'inizi mevcut bir akıllı sözleşmeden başlatır: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -Subgraph'inizi başlattığınızda, CLI sizden aşağıdaki bilgileri isteyecektir: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. 
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -Subgraph'ınızı başlatırken neyle karşılaşacağınıza dair bir örnek için aşağıdaki ekran görüntüsüne bakın: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -Önceki adımda `init` komutu, subgraph'inizi oluşturmak için kullanabileceğiniz bir iskelet subgraph yaratır. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -Subgraph'inizde değişiklik yaparken, ağırlıklı olarak üç dosya ile çalışacaksınız: +When making changes to the Subgraph, you will mainly work with three files: -- Manifesto (`subgraph.yaml`): Subgraph'inizin hangi veri kaynaklarını endeksleyeceğini tanımlar. -- Şema (`schema.graphql`): Subgraph'ten hangi veriyi almak istediğinizi tanımlar. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. 
- AssemblyScript Eşlemeleri (`mapping.ts`): Veri kaynaklarınızdan gelen veriyi şemada tanımlanan varlıklara dönüştürür. -Subgraph yazımı hakkında ayrıntılı bilgi için [Subgraph Oluşturma](/developing/creating-a-subgraph/) sayfasına göz atın. +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Subgraph'inizi yayına alın +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Subgraph'ınız yazıldıktan sonra aşağıdaki komutları çalıştırın: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Subgraph'inizi kimlik doğrulayıp yayına alın. Yayına alma anahtarını, Subgraph Studio'daki subgraph sayfasında bulabilirsiniz. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. 
![Yayına alma anahtarı](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Subgraph'inizi gözden geçirin +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Örnek bir sorgu çalıştırabilirsiniz. -- Subgraph'iniz hakkında bilgi kontrol etmek için kontrol panelini analiz edebilirsiniz. -- Subgraph'inizde hata olup olmadığını görmek için kontrol panelindeki kayıtları kontrol edin. Çalışan bir subgraph'in kayıtları şu şekilde görünecektir: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph kayıtları](/img/subgraph-logs-image.png) -### 7. Subgraph'inizi The Graph Ağında yayımlayın +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. 
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Subgraph Studio ile yayımlama -Subgraph'inizi yayımlamak için, kontrol panelindeki Yayımla düğmesine tıklayın. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Subgraph'inizi yayımlamak istediğiniz ağı seçin. +Select the network to which you would like to publish your Subgraph. #### CLI'den Yayımlama -Sürüm 0.73.0 itibarıyla, subgraph'inizi Graph CLI ile de yayımlayabilirsiniz. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. `graph-cli`yi açın. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. Bir pencere açılır ve cüzdanınızı bağlamanıza, metaveri eklemenize ve tamamlanmış subgraph'inizi tercih ettiğiniz bir ağa dağıtmanıza olanak tanır. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-#### Subgraph'inize sinyal ekleme +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - Bu işlem, hizmet kalitesini artırır, gecikmeyi azaltır ve subgraph'inizin ağdaki yedekliliğini ve müsaitliğini artırır. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. Endeksleme ödüllerine hak kazanan Endeksleyiciler sinyal miktarına bağlı olarak GRT ödülü alırlar. - - En az 3 Endeksleyici çekmek için en az 3.000 GRT küratörlük yapmanız önerilir. Subgraph özelliği kullanımı ve desteklenen ağlara bağlı olarak ödül hak kazanımlarının nasıl dağıtıldığını kontrol edin. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -Gas maliyetlerinden tasarruf etmek için, subgraph'inizi küratörlük işlemini, yayımlama işlemiyle aynı anda yapabilirsiniz. Bunun için şu seçeneği seçin: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Subgraph'inizi sorgulama +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). 
+For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/tr/substreams/_meta-titles.json b/website/src/pages/tr/substreams/_meta-titles.json index 6262ad528c3a..327533e4f629 100644 --- a/website/src/pages/tr/substreams/_meta-titles.json +++ b/website/src/pages/tr/substreams/_meta-titles.json @@ -1,3 +1,3 @@ { - "developing": "Developing" + "developing": "Geliştirme" } diff --git a/website/src/pages/tr/substreams/developing/dev-container.mdx b/website/src/pages/tr/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/tr/substreams/developing/dev-container.mdx +++ b/website/src/pages/tr/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. 
For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/tr/substreams/developing/sinks.mdx b/website/src/pages/tr/substreams/developing/sinks.mdx index a3feb4d27289..4d32cdf4ce1b 100644 --- a/website/src/pages/tr/substreams/developing/sinks.mdx +++ b/website/src/pages/tr/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,14 +8,14 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks > Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed. - [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. 
+- [Subgraph](./sps/introduction.mdx): Configure an API to meet your data needs and host it on The Graph Network. - [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. - [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. - [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks. @@ -26,7 +26,7 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | +| İsim | Destek | Maintainer | Source Code | | --- | --- | --- | --- | | SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | | Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | @@ -40,7 +40,7 @@ Sinks are integrations that allow you to send the extracted data to different de ### Community -| Name | Support | Maintainer | Source Code | +| İsim | Destek | Maintainer | Source Code | | --- | --- | --- | --- | | MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | | Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | diff --git a/website/src/pages/tr/substreams/developing/solana/account-changes.mdx b/website/src/pages/tr/substreams/developing/solana/account-changes.mdx index 7a62c4cef167..8519173a43af 100644 --- a/website/src/pages/tr/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/tr/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. 
-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and run `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/tr/substreams/developing/solana/transactions.mdx b/website/src/pages/tr/substreams/developing/solana/transactions.mdx index b5d6d886b271..2196dc993166 100644 --- a/website/src/pages/tr/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/tr/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink.
+To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/tr/substreams/introduction.mdx b/website/src/pages/tr/substreams/introduction.mdx index 18c80a1880f3..62348dd4532f 100644 --- a/website/src/pages/tr/substreams/introduction.mdx +++ b/website/src/pages/tr/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. 
diff --git a/website/src/pages/tr/substreams/publishing.mdx b/website/src/pages/tr/substreams/publishing.mdx index 6ae9498d97a7..b56a04388301 100644 --- a/website/src/pages/tr/substreams/publishing.mdx +++ b/website/src/pages/tr/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/tr/substreams/quick-start.mdx b/website/src/pages/tr/substreams/quick-start.mdx index d7f226438e7b..7ff97c69968b 100644 --- a/website/src/pages/tr/substreams/quick-start.mdx +++ b/website/src/pages/tr/substreams/quick-start.mdx @@ -1,5 +1,5 @@ --- -title: Substreams Quick Start +title: 'Substreams: Hızlı Başlangıç' sidebarTitle: Hızlı Başlangıç --- diff --git a/website/src/pages/tr/supported-networks.mdx b/website/src/pages/tr/supported-networks.mdx index 58d3ad6df215..baee4073b30c 100644 --- a/website/src/pages/tr/supported-networks.mdx +++ b/website/src/pages/tr/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio, örneğin JSON-RPC, Firehose ve Substreams uç noktaları gibi temel teknolojilerin istikrarlılığına ve güvenilirliğine bel bağlar. 
- Gnosis Chain'i endeksleyen subgraph'ler artık `gnosis` ağ tanımlayıcısı ile dağıtılabilir. -- Bir subgraph CLI aracılığıyla yayımlandıysa ve bir Endeksleyici tarafından algılandıysa, teknik olarak, desteklenmeden de sorgulanabilir. Yeni ağların entegrasyonunu daha da kolaylaştırmak için çalışmalar devam etmektedir. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - Merkeziyetsiz ağda hangi özelliklerin desteklendiğinin tam listesi için [bu sayfayı](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) inceleyin. ## Graph Düğümünü yerel olarak çalıştırma Tercih ettiğiniz ağ The Graph'ın merkeziyetsiz ağında desteklenmiyorsa, kendi [Graph Düğümünüzü](https://github.com/graphprotocol/graph-node) çalıştırarak herhangi bir EVM uyumlu ağı endeksleyebilirsiniz. Kullandığınız [sürümün](https://github.com/graphprotocol/graph-node/releases) ağı desteklediğinden ve gerekli yapılandırmaya sahip olduğunuzdan emin olun. -Graph Düğümü, Firehose entegrasyonu aracılığıyla diğer protokolleri de endeksleyebilir. NEAR, Arweave ve Cosmos tabanlı ağlar için Firehose entegrasyonları oluşturulmuştur. Ayrıca, Graph Düğümü, Substreams desteğine sahip herhangi bir ağ için Substreams destekli subgraph'leri de destekleyebilir. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. 
diff --git a/website/src/pages/tr/token-api/_meta-titles.json b/website/src/pages/tr/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/tr/token-api/_meta-titles.json +++ b/website/src/pages/tr/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/tr/token-api/_meta.js b/website/src/pages/tr/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/tr/token-api/_meta.js +++ b/website/src/pages/tr/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/tr/token-api/faq.mdx b/website/src/pages/tr/token-api/faq.mdx new file mode 100644 index 000000000000..caacf8d1b035 --- /dev/null +++ b/website/src/pages/tr/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Genel + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer <token>` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
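The header checks above can be sketched in a few lines of code. The following is a minimal, hypothetical Python helper (not an official SDK; the JWT value shown is a placeholder) that builds the request headers and catches the most common mistakes before a request is sent:

```python
# Illustrative sketch only: build the Authorization header for a Token API
# request and flag the usual pitfalls (raw API key, doubled "Bearer" prefix).

def build_auth_headers(jwt: str) -> dict:
    """Return request headers for the Token API, validating the token shape."""
    token = jwt.strip()
    if not token:
        raise ValueError("empty token: generate a JWT on The Graph Market")
    if token.lower().startswith("bearer "):
        # Avoid doubling the scheme: callers should pass the bare JWT.
        token = token[7:]
    if token.count(".") != 2:
        # A JWT has three dot-separated segments; a raw API key does not.
        raise ValueError("this looks like an API key, not a JWT access token")
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    }

headers = build_auth_headers("aaaa.bbbb.cccc")  # placeholder JWT
print(headers["Authorization"])  # prints "Bearer aaaa.bbbb.cccc"
```

Pass these headers with any HTTP client; a 401/403 after this check usually means the token is expired or was copied incorrectly.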
+ +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer <token>`. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service.
Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/tr/token-api/mcp/claude.mdx b/website/src/pages/tr/token-api/mcp/claude.mdx index 0da8f2be031d..f937a6ee6ae8 100644 --- a/website/src/pages/tr/token-api/mcp/claude.mdx +++ b/website/src/pages/tr/token-api/mcp/claude.mdx @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## Yapılandırma Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/tr/token-api/mcp/cline.mdx b/website/src/pages/tr/token-api/mcp/cline.mdx index ab54c0c8f6f0..688b650fbf51 100644 --- a/website/src/pages/tr/token-api/mcp/cline.mdx +++ b/website/src/pages/tr/token-api/mcp/cline.mdx @@ -10,9 +10,9 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## Yapılandırma Create or edit your `cline_mcp_settings.json` file. diff --git a/website/src/pages/tr/token-api/mcp/cursor.mdx b/website/src/pages/tr/token-api/mcp/cursor.mdx index 658108d1337b..14b948fbabb8 100644 --- a/website/src/pages/tr/token-api/mcp/cursor.mdx +++ b/website/src/pages/tr/token-api/mcp/cursor.mdx @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## Yapılandırma Create or edit your `~/.cursor/mcp.json` file. 
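As a sketch, the file might look like the following, mirroring the `claude_desktop_config.json` example earlier in this changeset (the `token-api` server name and the placeholder token are illustrative; check Cursor's documentation for the exact schema it expects):

```json
{
  "mcpServers": {
    "token-api": {
      "command": "npx",
      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
      "env": {
        "ACCESS_TOKEN": "<your-access-token>"
      }
    }
  }
}
```

Replace `<your-access-token>` with a JWT generated on The Graph Market before starting Cursor.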
diff --git a/website/src/pages/tr/token-api/quick-start.mdx b/website/src/pages/tr/token-api/quick-start.mdx index 4653c3d41ac6..18629312033a 100644 --- a/website/src/pages/tr/token-api/quick-start.mdx +++ b/website/src/pages/tr/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Hızlı Başlangıç --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/uk/about.mdx b/website/src/pages/uk/about.mdx index 7d346fa59854..55eb5593f48b 100644 --- a/website/src/pages/uk/about.mdx +++ b/website/src/pages/uk/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![Малюнок, що пояснює, як The Graph використовує Graph Node для обслуговування запитів до споживачів даних](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte 1. Додаток відправляє дані в мережу Ethereum через транзакцію в смартконтракті. 2. Під час обробки транзакції смартконтракт видає одну або декілька різних подій. -3.
Graph Node постійно сканує Ethereum на наявність нових блоків і даних для вашого підграфа, які вони можуть містити. -4. Graph Node знаходить події на Ethereum для вашого підграфа в цих блоках і запускає надані вами mapping handlers. Mapping - це модуль WASM, який створює або оновлює структуру даних, що зберігаються у Graph Node у відповідь на події на Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Додаток запитує Graph Node про дані, проіндексовані в блокчейні, використовуючи [кінцеву точку GraphQL](https://graphql.org/learn/). The Graph Node, і собі, переводить запити GraphQL в запити до свого базового сховища даних, щоб отримати ці дані, використовуючи можливості індексації сховища. Dapp відображає ці дані в величезному інтерфейсі для кінцевих користувачів, який вони використовують для створення нових транзакцій на Ethereum. Цикл повторюється. ## Наступні кроки -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. 
diff --git a/website/src/pages/uk/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/uk/archived/arbitrum/arbitrum-faq.mdx index 28f6a3faeee6..8e6d1bd8d962 100644 --- a/website/src/pages/uk/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/uk/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -39,7 +39,7 @@ Once you have GRT on Arbitrum, you can add it to your billing balance. ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Якщо я розробник підграфів, споживач даних, Індексатор, Куратор або Делегат, що мені потрібно робити зараз? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? 
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/uk/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/uk/archived/arbitrum/l2-transfer-tools-faq.mdx index 612b61fd0515..7edde3d0cbcd 100644 --- a/website/src/pages/uk/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/uk/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con The L2 Transfer Tools use Arbitrum’s native mechanism to send messages from L1 to L2. This mechanism is called a “retryable ticket” and is used by all native token bridges, including the Arbitrum GRT bridge. You can read more about retryable tickets in the [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -When you transfer your assets (subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. 
The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. 
If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there help you. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## Subgraph Transfer -### How do I transfer my subgraph? +### How do I transfer my Subgraph? -To transfer your subgraph, you will need to complete the following steps: +To transfer your Subgraph, you will need to complete the following steps: 1. Initiate the transfer on Ethereum mainnet 2. Wait 20 minutes for confirmation -3. Confirm subgraph transfer on Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Finish publishing subgraph on Arbitrum +4. Finish publishing Subgraph on Arbitrum 5.
Update Query URL (recommended) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Where should I initiate my transfer from? -You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any subgraph details page. Click the "Transfer Subgraph" button in the subgraph details page to start the transfer. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### How long do I need to wait until my subgraph is transferred +### How long do I need to wait until my Subgraph is transferred The transfer time takes approximately 20 minutes. The Arbitrum bridge is working in the background to complete the bridge transfer automatically. In some cases, gas costs may spike and you will need to confirm the transaction again. -### Will my subgraph still be discoverable after I transfer it to L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Your subgraph will only be discoverable on the network it is published to. 
For example, if your subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 subgraph will appear as deprecated. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Does my subgraph need to be published to transfer it? +### Does my Subgraph need to be published to transfer it? -To take advantage of the subgraph transfer tool, your subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the subgraph. If your subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. 
-### What happens to the Ethereum mainnet version of my subgraph after I transfer to Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -After transferring your subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### After I transfer, do I also need to re-publish on Arbitrum? @@ -80,21 +80,21 @@ After the 20 minute transfer window, you will need to confirm the transfer with ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Is publishing and versioning the same on L2 as on Ethereum mainnet? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Will my subgraph's curation move with my subgraph? +### Will my Subgraph's curation move with my Subgraph?
-If you've chosen auto-migrating signal, 100% of your own curation will move with your subgraph to Arbitrum One. All of the subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 subgraph. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Can I move my subgraph back to Ethereum mainnet after I transfer? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Once transferred, your Ethereum mainnet version of this subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Why do I need bridged ETH to complete my transfer? @@ -206,19 +206,19 @@ To transfer your curation, you will need to complete the following steps: \*If necessary - i.e. you are using a contract address. -### How will I know if the subgraph I curated has moved to L2? 
+### How will I know if the Subgraph I curated has moved to L2? -When viewing the subgraph details page, a banner will notify you that this subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the subgraph details page of any subgraph that has moved. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### What if I do not wish to move my curation to L2? -When a subgraph is deprecated you have the option to withdraw your signal. Similarly, if a subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### How do I know my curation successfully transferred? Signal details will be accessible via Explorer approximately 20 minutes after the L2 transfer tool is initiated. -### Can I transfer my curation on more than one subgraph at a time? +### Can I transfer my curation on more than one Subgraph at a time? There is no bulk transfer option at this time. @@ -266,7 +266,7 @@ It will take approximately 20 minutes for the L2 transfer tool to complete trans ### Do I have to index on Arbitrum before I transfer my stake? -You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to subgraphs on L2, index them, and present POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. 
### Can Delegators move their delegation before I move my indexing stake? diff --git a/website/src/pages/uk/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/uk/archived/arbitrum/l2-transfer-tools-guide.mdx index 549618bfd7c3..4a34da9bad0e 100644 --- a/website/src/pages/uk/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/uk/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph has made it easy to move to L2 on Arbitrum One. For each protocol part Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## How to transfer your subgraph to Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefits of transferring your subgraphs +## Benefits of transferring your Subgraphs The Graph's community and core devs have [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security from Ethereum but provides drastically lower gas fees. -When you publish or upgrade your subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your subgraphs to Arbitrum, any future updates to your subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your subgraph, increasing the rewards for Indexers on your subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. 
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Understanding what happens with signal, your L1 subgraph and query URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -When you choose to transfer the subgraph, this will convert all of the subgraph's curation signal to GRT. This is equivalent to "deprecating" the subgraph on mainnet. 
The GRT corresponding to your curation will be sent to L2 together with the subgraph, where they will be used to mint signal on your behalf. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. If a subgraph owner does not transfer their subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -As soon as the subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the subgraph. However, there will be Indexers that will 1) keep serving transferred subgraphs for 24 hours, and 2) immediately start indexing the subgraph on L2. Since these Indexers already have the subgraph indexed, there should be no need to wait for the subgraph to sync, and it will be possible to query the L2 subgraph almost immediately. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. 
Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries to the L2 subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choosing your L2 wallet -When you published your subgraph on mainnet, you used a connected wallet to create the subgraph, and this wallet owns the NFT that represents this subgraph and allows you to publish updates. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -When transferring the subgraph to Arbitrum, you can choose a different wallet that will own this subgraph NFT on L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1. -If you're using a smart contract wallet, like a multisig (e.g. 
a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the subgraph will be lost and cannot be recovered.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparing for the transfer: bridging some ETH -Transferring the subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. 
If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (e.g. 0.01 ETH) for your transaction to be approved.
-## Finding the subgraph Transfer Tool +## Finding the Subgraph Transfer Tool -You can find the L2 Transfer Tool when you're looking at your subgraph's page on Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -It is also available on Explorer if you're connected with the wallet that owns a subgraph and on that subgraph's page on Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicking on the Transfer to L2 button will open the transfer tool where you can ## Step 1: Starting the transfer -Before starting the transfer, you must decide which address will own the subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Also please note transferring the subgraph requires having a nonzero amount of signal on the subgraph with the same account that owns the subgraph; if you haven't signaled on the subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). +Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 subgraph (see "Understanding what happens with signal, your L1 subgraph and query URLs" above for more details on what goes on behind the scenes). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. 
![Start the transfer to L2](/img/startTransferL2.png) -## Step 2: Waiting for the subgraph to get to L2 +## Step 2: Waiting for the Subgraph to get to L2 -After you start the transfer, the message that sends your L1 subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. @@ -80,7 +80,7 @@ Once this wait time is over, Arbitrum will attempt to auto-execute the transfer ## Step 3: Confirming the transfer -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your subgraph to L2 will be pending and require a retry within 7 days. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. 
@@ -88,33 +88,33 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Step 4: Finishing the transfer on L2 -At this point, your subgraph and GRT have been received on Arbitrum, but the subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -This will publish the subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Step 5: Updating the query URL -Your subgraph has been successfully transferred to Arbitrum! To query the subgraph, the new URL will be : +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note that the subgraph ID on Arbitrum will be a different than the one you had on mainnet, but you can always find it on Explorer or Studio.
As mentioned above (see "Understanding what happens with signal, your L1 subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the subgraph has been synced on L2. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## How to transfer your curation to Arbitrum (L2) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Choosing your L2 wallet @@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2.
You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Withdrawing your curation on L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
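The transfer-tools guide above gives the new L2 query endpoint as a fixed URL pattern (`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`). As a rough illustration of switching queries to the new address, a client can build that URL and prepare a standard GraphQL POST; the API key and Subgraph ID below are placeholders, not real values.

```python
import json
import urllib.request

def l2_query_url(api_key: str, l2_subgraph_id: str) -> str:
    # URL pattern from "Step 5: Updating the query URL" above.
    return f"https://arbitrum-gateway.thegraph.com/api/{api_key}/subgraphs/id/{l2_subgraph_id}"

def build_query_request(url: str, query: str) -> urllib.request.Request:
    # Subgraph endpoints accept a GraphQL query as a JSON POST body.
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})

# Placeholder values for illustration only; substitute your own key and ID.
url = l2_query_url("your-api-key", "your-l2-subgraph-id")
request = build_query_request(url, "{ _meta { block { number } } }")
```

Sending `request` with `urllib.request.urlopen` (or any HTTP client) then queries the transferred Subgraph on Arbitrum.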
diff --git a/website/src/pages/uk/archived/sunrise.mdx b/website/src/pages/uk/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/uk/archived/sunrise.mdx +++ b/website/src/pages/uk/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? 
+### Why were Subgraphs published to Arbitrum? Did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. 
As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
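The FAQ above states two conditions under which the upgrade Indexer stops supporting a Subgraph: at least 3 other Indexers consistently serve its queries, or it has not been queried in the last 30 days. A minimal sketch of that decision logic, purely illustrative and not taken from the actual indexer code:

```python
from datetime import datetime, timedelta

# Thresholds taken from the FAQ above; the names are illustrative, not real config keys.
MIN_OTHER_INDEXERS = 3
IDLE_WINDOW = timedelta(days=30)

def upgrade_indexer_should_support(other_serving_indexers: int,
                                   last_queried_at: datetime,
                                   now: datetime) -> bool:
    # Stop condition 1: at least 3 other Indexers consistently serve the Subgraph.
    if other_serving_indexers >= MIN_OTHER_INDEXERS:
        return False
    # Stop condition 2: the Subgraph has not been queried in the last 30 days.
    if now - last_queried_at > IDLE_WINDOW:
        return False
    return True
```

This matches the fallback role described above: the upgrade Indexer only steps in while coverage or demand is below these thresholds.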
diff --git a/website/src/pages/uk/global.json b/website/src/pages/uk/global.json index 6959b66fd4a7..2f65cbe097dc 100644 --- a/website/src/pages/uk/global.json +++ b/website/src/pages/uk/global.json @@ -6,6 +6,7 @@ "subgraphs": "Підграфи", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Description", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Description", + "liveResponse": "Live Response", + "example": "Example" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/uk/index.json b/website/src/pages/uk/index.json index fb54e8013ea4..2e98e3092a7e 100644 --- a/website/src/pages/uk/index.json +++ b/website/src/pages/uk/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Підграфи", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -44,7 +44,7 @@ "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Документи", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -67,7 +67,7 @@ "tableHeaders": { "name": "Name", "id": "ID", - "subgraphs": "Subgraphs", + "subgraphs": "Підграфи", "substreams": "Substreams", "firehose": "Firehose", "tokenapi": "Token API" @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "Білінг", "description": "Optimize costs and manage billing efficiently." } }, @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." 
+ "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/uk/indexing/chain-integration-overview.mdx b/website/src/pages/uk/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/uk/indexing/chain-integration-overview.mdx +++ b/website/src/pages/uk/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. 
Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/uk/indexing/new-chain-integration.mdx b/website/src/pages/uk/indexing/new-chain-integration.mdx index e45c4b411010..670e06c752c3 100644 --- a/website/src/pages/uk/indexing/new-chain-integration.mdx +++ b/website/src/pages/uk/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. 
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. 
(It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/).
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/uk/indexing/overview.mdx b/website/src/pages/uk/indexing/overview.mdx index 9bb1d1febb33..1ef3973c70cd 100644 --- a/website/src/pages/uk/indexing/overview.mdx +++ b/website/src/pages/uk/indexing/overview.mdx @@ -7,7 +7,7 @@ sidebarTitle: Overview GRT, які застейкані в протоколі, підлягають періоду "розблокування" і можуть бути порізані (slashing), якщо індексатори є шкідливими та надають некоректні дані додаткам або якщо вони неправильно індексують. Індексатори також отримують винагороду за стейк, який вони отримують від делегатів, щоб зробити свій внесок у розвиток мережі. -Індексатори вибирають підграфи для індексування на основі сигналу від кураторів, де куратори стейкають GRT, щоб вказати, які підграфи є якісними та мають бути пріоритетними. Споживачі (наприклад, додатки) також можуть задавати параметри, за якими індексатори обробляють запити до їхніх підграфів, і встановлювати налаштування щодо оплати за запити. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. 
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol-wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on.
A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. 
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
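The component list above implies a simple deployment rule: only the Indexer service needs to be reachable from outside (as this guide notes later, it is "the only component that needs to be exposed externally"), while everything else stays internal. A minimal sketch of that inventory follows; the component names come from this page, while the dataclass and helper are purely illustrative, not part of the indexer software.

```python
# Illustrative inventory of the indexer infrastructure components described above.
# Component names mirror this guide; the dataclass and helper are hypothetical.
from dataclasses import dataclass


@dataclass
class Component:
    name: str
    role: str
    externally_exposed: bool  # only the Indexer service must be reachable from outside


STACK = [
    Component("postgres", "main store for Graph Node and indexer components", False),
    Component("data-endpoint", "EVM-compatible JSON-RPC API for the indexed network", False),
    Component("ipfs", "Subgraph deployment metadata", False),
    Component("indexer-service", "external communications, query routing, state channels", True),
    Component("indexer-agent", "onchain interactions and allocation management", False),
    Component("prometheus", "metrics collection", False),
]


def exposed(stack):
    """Return the names of components that need to be exposed externally."""
    return [c.name for c in stack if c.externally_exposed]


print(exposed(STACK))  # → ['indexer-service']
```

A firewall or ingress configuration can be checked against such an inventory so that internal components (Postgres, IPFS, the JSON-RPC endpoint) are never accidentally published.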
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, and shares important decision-making information with clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
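The threshold evaluation described above can be sketched in a few lines. The field names (`minStake`, `minSignal`, `minAverageQueryFees`) and the `minStake` of 5 GRT example come from this section; the function itself is a simplified illustration covering only the min-style thresholds (max-style rules such as `maxAllocationPercentage` would invert the comparison), not the Indexer agent's actual implementation.

```python
# Simplified sketch of threshold-based indexing rules (decisionBasis = "rules").
# Field names mirror this section; the evaluation logic is illustrative only.
def should_index(rule, deployment):
    """Choose a deployment for indexing if any non-null min-threshold is exceeded."""
    if rule.get("minStake") is not None and deployment["stake"] > rule["minStake"]:
        return True
    if rule.get("minSignal") is not None and deployment["signal"] > rule["minSignal"]:
        return True
    if (rule.get("minAverageQueryFees") is not None
            and deployment["avgQueryFees"] > rule["minAverageQueryFees"]):
        return True
    return False


# The example from the text: a global rule with minStake = 5 (GRT).
global_rule = {"minStake": 5}
print(should_index(global_rule, {"stake": 10, "signal": 0, "avgQueryFees": 0}))  # True
print(should_index(global_rule, {"stake": 2, "signal": 0, "avgQueryFees": 0}))   # False
```

Because null thresholds are skipped, a rule only constrains the metrics it actually sets, which matches the "non-null threshold values" behavior described above.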
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/uk/indexing/supported-network-requirements.mdx b/website/src/pages/uk/indexing/supported-network-requirements.mdx index df15ef48d762..3d57daa55709 100644 --- a/website/src/pages/uk/indexing/supported-network-requirements.mdx +++ b/website/src/pages/uk/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/uk/indexing/tap.mdx index 3bab672ab211..477534d63201 100644 --- a/website/src/pages/uk/indexing/tap.mdx +++ b/website/src/pages/uk/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Overview -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver as **Receipts**, which are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**.
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query it, or host it yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/uk/indexing/tooling/graph-node.mdx b/website/src/pages/uk/indexing/tooling/graph-node.mdx index 0250f14a3d08..edde8a157fd3 100644 --- a/website/src/pages/uk/indexing/tooling/graph-node.mdx +++ b/website/src/pages/uk/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -79,8 +79,8 @@ When it is running Graph Node exposes the following ports: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ When it is running Graph Node exposes the following ports: ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. 
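The deployment-rule excerpt in the hunk above can be pieced into a fuller `config.toml` sketch. This is illustrative only: the shard name, node IDs, connection string, and name pattern are placeholders, and the exact schema should be checked against the Graph Node configuration docs.

```toml
# Illustrative config.toml sketch -- all values are placeholders.
[store]
[store.primary]
connection = "postgresql://graph:CHANGE-ME@primary-db/graph"
pool_size = 10

[deployment]
# Rules are checked in order; the first matching rule decides placement.
[[deployment.rule]]
match = { name = "(vip|important)/.*" }
shard = "primary"
indexers = [ "index_node_vip_0" ]

[[deployment.rule]]
# There's no 'match', so any Subgraph matches this catch-all rule
shard = "primary"
indexers = [ "index_node_community_0" ]
```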
When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). 
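The bullet points above map onto the `chains` section of `config.toml`. A hedged sketch follows; the URLs, labels, and ingestor node ID are placeholders, and the provider schema should be verified against the Graph Node configuration docs.

```toml
# Illustrative sketch -- placeholder URLs and node IDs.
[chains]
# A dedicated node handles block ingestion.
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
# Multiple providers per network: an archive node with trace support for
# Subgraphs that need it, and a cheaper full node for everything else.
provider = [
  { label = "mainnet-archive", url = "http://archive-node:8545", features = [ "archive", "traces" ] },
  { label = "mainnet-full", url = "http://full-node:8545", features = [] },
]
```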
@@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. 
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
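As a concrete illustration of the indexing status API mentioned above (port 8030), a query along the following lines reports sync status, health, and chain-head lag per Subgraph. The field names are taken from the index-node schema linked above, so verify them against your Graph Node version before relying on them.

```graphql
{
  indexingStatuses {
    subgraph
    synced
    health
    fatalError {
      message
      deterministic
    }
    chains {
      network
      chainHeadBlock { number }
      latestBlock { number }
    }
  }
}
```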
-#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. 
If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/uk/indexing/tooling/graphcast.mdx b/website/src/pages/uk/indexing/tooling/graphcast.mdx index 4072877a1257..d1795e9be577 100644 --- a/website/src/pages/uk/indexing/tooling/graphcast.mdx +++ b/website/src/pages/uk/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Learn More diff --git a/website/src/pages/uk/resources/benefits.mdx b/website/src/pages/uk/resources/benefits.mdx index e433c11c5903..e7643ee0d7cf 100644 --- a/website/src/pages/uk/resources/benefits.mdx +++ b/website/src/pages/uk/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Кураторство сигналу на підграфі є опціональною одноразовою послугою з нульовою вартістю (наприклад, сигнал на суму 1 тис. доларів можна розмістити на підграфі, а потім вивести — з можливістю отримання прибутку в цьому процесі). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/uk/resources/glossary.mdx b/website/src/pages/uk/resources/glossary.mdx index 1338f2ba16ba..ef7f1d9c23b9 100644 --- a/website/src/pages/uk/resources/glossary.mdx +++ b/website/src/pages/uk/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Глосарій - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
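The 16x delegation-capacity rule in the glossary entry above is simple multiplication. A minimal TypeScript sketch; the function name is illustrative and not part of any Graph tooling:

```typescript
// Delegation capacity rule from the glossary: an Indexer can accept
// delegated GRT only up to 16x their Self-Stake.
const DELEGATION_RATIO = 16

function delegationCapacity(selfStakeGrt: number): number {
  return selfStakeGrt * DELEGATION_RATIO
}

// The glossary's example: a 1M GRT Self-Stake gives a 16M GRT capacity.
const capacity = delegationCapacity(1_000_000)
```

Delegation beyond this capacity is not rejected outright; as the entry notes, it simply dilutes rewards.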
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
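The allocation lifecycle described above can be sketched in a few lines of TypeScript. The epoch thresholds come from the text (minimum one epoch open, stale beyond 28 epochs); the helper names are illustrative, not part of any Graph library:

```typescript
// Allocation rules from the glossary: an allocation must stay open for at
// least 1 epoch before it can be closed, and it becomes stale if left open
// beyond 28 epochs.
const MIN_EPOCHS_BEFORE_CLOSE = 1
const MAX_ALLOCATION_EPOCHS = 28

type AllocationStatus = "Active" | "Closed" | "Stale"

function canClose(epochsOpen: number): boolean {
  return epochsOpen >= MIN_EPOCHS_BEFORE_CLOSE
}

function statusOf(epochsOpen: number, closedWithPoi: boolean): AllocationStatus {
  if (closedWithPoi) return "Closed"
  // Left open beyond 28 epochs, the allocation is considered stale.
  return epochsOpen > MAX_ALLOCATION_EPOCHS ? "Stale" : "Active"
}
```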
@@ -56,28 +56,28 @@ title: Глосарій - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
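The slashing split described in the entries above (2.5% of self-stake, half to the disputing Fisherman, half burned) works out as follows. A TypeScript sketch with illustrative names, assuming the percentages quoted in the text:

```typescript
// Slashing arithmetic from the glossary: 2.5% of the Indexer's self-stake
// is slashed; 50% goes to the Fisherman, 50% is burned.
const SLASH_RATE = 0.025

function slashingOutcome(selfStakeGrt: number) {
  const slashed = selfStakeGrt * SLASH_RATE
  return { slashed, toFisherman: slashed / 2, burned: slashed / 2 }
}

// e.g. at the 100,000 GRT minimum Self-Stake, 2,500 GRT is at risk.
const outcome = slashingOutcome(100_000)
```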
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/uk/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/uk/resources/migration-guides/assemblyscript-migration-guide.mdx index 85f6903a6c69..aead2514ff51 100644 --- a/website/src/pages/uk/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/uk/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Features @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## How to upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing. ### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should add a null check before them.
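The early-return pattern recommended in the hunks above can be sketched in plain TypeScript (AssemblyScript mappings share this syntax). The `load` helper and `Entity` class here are hypothetical stand-ins for generated Subgraph entities, not real graph-ts APIs:

```typescript
// Stand-in for a generated entity; in a real mapping this would be
// something like Transaction.load(id) from @graphprotocol/graph-ts.
class Entity {
  constructor(public n: number) {}
  aMethod(): number {
    return this.n * 2
  }
}

function load(id: string): Entity | null {
  return id === "known" ? new Entity(21) : null
}

// Safe pattern: check for null and return early, instead of the unsafe
// non-null assertion load(id)!, which breaks at runtime when null.
function handleEvent(id: string): number | null {
  const maybeValue = load(id)
  if (maybeValue == null) {
    return null // early return keeps the handler from crashing
  }
  return maybeValue.aMethod()
}
```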
```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/uk/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/uk/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/uk/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/uk/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. -> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## Migration CLI tool diff --git a/website/src/pages/uk/resources/roles/curating.mdx b/website/src/pages/uk/resources/roles/curating.mdx index 4304c7c138df..547fe31b6272 100644 --- a/website/src/pages/uk/resources/roles/curating.mdx +++ b/website/src/pages/uk/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Кураторство --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. 
In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Як сигналізувати -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Куратор може обрати подання сигналу на певну версію підграфа, або ж він може обрати автоматичне перенесення сигналу на найновішу версію цього підграфа. Обидва варіанти є прийнятними та мають свої плюси та мінуси. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. 
Автоматичне переміщення вашого сигналу на найновішу версію може бути корисним для того, щоб ви продовжували нараховувати комісію за запити. Кожного разу, коли ви здійснюєте кураторську роботу, стягується плата за в розмірі 1%. Ви також сплачуєте 0,5% за кураторство, за кожну міграцію. Розробникам підграфів не рекомендується часто публікувати нові версії - вони повинні сплачувати 0.5% кураторам за всі автоматично переміщені частки кураторів. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. 
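The curation taxes described above (a 1% tax on the initial signal, plus 0.5% per auto-migration) compound multiplicatively. A minimal TypeScript sketch; treating each migration tax as applying to the full remaining signal is our simplifying assumption:

```typescript
// Tax rates from the text: 1% burned on initial signal,
// 0.5% charged on each auto-migration to a new version.
const CURATION_TAX = 0.01
const AUTO_MIGRATE_TAX = 0.005

// GRT effectively signaled after the initial 1% curation tax is burned.
function signalAfterTax(grt: number): number {
  return grt * (1 - CURATION_TAX)
}

// Signal remaining after n auto-migrations (assumed multiplicative).
function afterAutoMigrations(signal: number, n: number): number {
  return signal * Math.pow(1 - AUTO_MIGRATE_TAX, n)
}

const initial = signalAfterTax(10_000) // about 9,900 GRT signaled
const afterTwo = afterAutoMigrations(initial, 2)
```

This is also why the text discourages Subgraph developers from publishing new versions too frequently: every auto-migration costs Curators another 0.5%.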
## Ризики 1. Ринок запитів за своєю суттю молодий в Graph, і існує ризик того, що ваш %APY може бути нижчим, ніж ви очікуєте, через динаміку ринку, що зароджується. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Підграф може не працювати через різноманітні помилки (баги). Підграф, що не працює не стягує комісію за запити. В результаті вам доведеться почекати, поки розробник виправить усі помилки й випустить нову версію. - - Якщо ви підключені до найновішої версії підграфу, ваші частки будуть автоматично перенесені до цієї нової версії. При цьому буде стягуватися податок на в розмірі 0,5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. 
For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Часті запитання про кураторство ### 1. Який % від комісії за запити отримують куратори? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Як вирішити, які підграфи є якісними для подачі сигналу? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. 
As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4.
How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Чи можу я продати свої частки куратора? diff --git a/website/src/pages/uk/resources/subgraph-studio-faq.mdx b/website/src/pages/uk/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/uk/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/uk/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. 
How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key, are paid queries as any other on the network. diff --git a/website/src/pages/uk/resources/tokenomics.mdx b/website/src/pages/uk/resources/tokenomics.mdx index 709ebb3b40c0..0c58cbf44968 100644 --- a/website/src/pages/uk/resources/tokenomics.mdx +++ b/website/src/pages/uk/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Overview -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. 
Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Куратори - знаходять найкращі підграфи для індексаторів +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Індексатори - кістяк блокчейн-даних @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. 
@@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. 
Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. 
+Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. 
Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. 
These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/uk/sps/introduction.mdx b/website/src/pages/uk/sps/introduction.mdx index 1463ea45a11a..8a801f1a048a 100644 --- a/website/src/pages/uk/sps/introduction.mdx +++ b/website/src/pages/uk/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. 
**Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
### Додаткові матеріали @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/uk/sps/sps-faq.mdx b/website/src/pages/uk/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/uk/sps/sps-faq.mdx +++ b/website/src/pages/uk/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. 
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, substreams-powered Subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. 
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? 
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. 
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/uk/sps/triggers.mdx b/website/src/pages/uk/sps/triggers.mdx index 9124d805743f..87181f9bd72d 100644 --- a/website/src/pages/uk/sps/triggers.mdx +++ b/website/src/pages/uk/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. 
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). 
### Додаткові матеріали diff --git a/website/src/pages/uk/sps/tutorial.mdx b/website/src/pages/uk/sps/tutorial.mdx index 71dc37075218..6b611ef2c923 100644 --- a/website/src/pages/uk/sps/tutorial.mdx +++ b/website/src/pages/uk/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Розпочати роботу @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract to Subgraph entities the non-derived transfers associated to the Orca account id: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. 
### Video Tutorial diff --git a/website/src/pages/uk/subgraphs/_meta-titles.json b/website/src/pages/uk/subgraphs/_meta-titles.json index 3fd405eed29a..06078a2635a4 100644 --- a/website/src/pages/uk/subgraphs/_meta-titles.json +++ b/website/src/pages/uk/subgraphs/_meta-titles.json @@ -2,5 +2,5 @@ "querying": "Querying", "developing": "Developing", "guides": "How-to Guides", - "best-practices": "Best Practices" + "best-practices": "Найкращі практики" } diff --git a/website/src/pages/uk/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/uk/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/uk/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/uk/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. 
By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional, however is not ideal as it slows down our Subgraph’s indexing. 
## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. 
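For context while reviewing the declared-`eth_calls` hunk above: the call declaration mentioned there lives under `eventHandlers` in `subgraph.yaml`. A minimal hedged sketch of the shape (the contract name, handler, label, and function are illustrative, not from this PR):

```yaml
# Sketch of a declared eth_call (requires specVersion >= 1.2.0).
# `poolInfo` is an arbitrary label; the right-hand side binds the ERC20
# contract at event.address and pre-fetches getPoolInfo(...) so the
# handler reads the result from graph-node's in-memory cache.
eventHandlers:
  - event: Transfer(address indexed from, address indexed to, uint256 value)
    handler: handleTransfer
    calls:
      poolInfo: ERC20[event.address].getPoolInfo(event.params.to)
```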
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/uk/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/uk/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/uk/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/uk/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
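The reverse-lookup feature listed in the hunk above can be sketched as a query. A hedged GraphQL example against the Post/Comment schema used in this file (field names assumed from that example):

```graphql
# Reverse lookup: start from comments and resolve each parent post
# through the derived `post` field, even though Post stores no
# comment array of its own.
{
  comments(first: 5) {
    id
    post {
      id
    }
  }
}
```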
diff --git a/website/src/pages/uk/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/uk/subgraphs/best-practices/grafting-hotfix.mdx index d042b1960232..91ec6dff40bc 100644 --- a/website/src/pages/uk/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/uk/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Додаткові матеріали - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/uk/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/uk/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/uk/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/uk/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
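To make the `concatI32()` point above concrete, here is a small plain-TypeScript sketch of the idea. graph-ts provides this natively on `Bytes`; the helper below is an illustrative stand-in, not the library implementation:

```typescript
// Illustrative stand-in for graph-ts's concatI32(): append a big-endian
// 4-byte integer (e.g. the log index) to a byte value (e.g. the tx hash),
// yielding one compact Bytes ID instead of a hex-string concatenation.
function concatI32(bytes: Uint8Array, i: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  new DataView(out.buffer).setInt32(bytes.length, i, false); // big-endian suffix
  return out;
}

const txHash = new Uint8Array(32).fill(0xab); // stand-in for event.transaction.hash
const id = concatI32(txHash, 7); // stand-in for event.logIndex
console.log(id.length); // 36 bytes: 32-byte hash + 4-byte index
```

The resulting 36-byte value stays fixed-width and binary, which is what makes `Bytes` IDs cheaper to index and compare than the `hash.toHex() + "-" + logIndex.toString()` string form.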
diff --git a/website/src/pages/uk/subgraphs/best-practices/pruning.mdx b/website/src/pages/uk/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/uk/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/uk/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/uk/subgraphs/best-practices/timeseries.mdx b/website/src/pages/uk/subgraphs/best-practices/timeseries.mdx index cacdc44711fe..9732199531a8 100644 --- a/website/src/pages/uk/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/uk/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `specVersion` >= 1.1.0 for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation.
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/uk/subgraphs/billing.mdx b/website/src/pages/uk/subgraphs/billing.mdx index ac919c79491b..95efac23f7cc 100644 --- a/website/src/pages/uk/subgraphs/billing.mdx +++ b/website/src/pages/uk/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Білінг ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/uk/subgraphs/developing/creating/advanced.mdx b/website/src/pages/uk/subgraphs/developing/creating/advanced.mdx index 7614511a5617..0a60a469c86a 100644 --- a/website/src/pages/uk/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/uk/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Overview -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -Підграф, утворений в результаті може використовувати схему GraphQL, яка не є ідентичною схемі базового підграфа, а лише сумісною з нею. 
Вона повинна бути валідною схемою підграфа сама по собі, але може відхилятися від схеми базового підграфа у такому випадку: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Додає або видаляє типи елементів - Видаляє атрибути з типів елементів @@ -560,4 +560,4 @@ Because grafting copies rather than indexes base data, it is much quicker to get - Додає або видаляє інтерфейси - Визначає, для яких типів елементів реалізовано інтерфейс -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/uk/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/uk/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2ac894695fe1..cd81dc118f28 100644 --- a/website/src/pages/uk/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/uk/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/uk/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/uk/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/uk/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/uk/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/uk/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/uk/subgraphs/developing/creating/graph-ts/api.mdx index 35bb04826c98..5be2530c4d6b 100644 --- a/website/src/pages/uk/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/uk/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Version | Release notes | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. 
Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. 
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them.
The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
diff --git a/website/src/pages/uk/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/uk/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..65e8e3d4a8a3 100644 --- a/website/src/pages/uk/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/uk/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
diff --git a/website/src/pages/uk/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/uk/subgraphs/developing/creating/install-the-cli.mdx index 9f03c3a6c84a..cac462d8e960 100644 --- a/website/src/pages/uk/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/uk/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Створення субграфа ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/uk/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/uk/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..7e0f889447c5 100644 --- a/website/src/pages/uk/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/uk/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
#### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Languages supported diff --git a/website/src/pages/uk/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/uk/subgraphs/developing/creating/starting-your-subgraph.mdx index 4823231d9a40..180a343470b1 100644 --- a/website/src/pages/uk/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/uk/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| Version | Release notes | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/uk/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/uk/subgraphs/developing/creating/subgraph-manifest.mdx index a42a50973690..78e4a3a55e7d 100644 --- a/website/src/pages/uk/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/uk/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Defining a Call Handler @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Supported Filters @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
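Putting the `call` filter described above into manifest form, a minimal sketch (the handler name is illustrative) looks like:

```yaml
blockHandlers:
  # Runs only for blocks containing at least one call to this data source's contract.
  - handler: handleBlockWithCallToContract
    filter:
      kind: call
```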
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
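As a hedged sketch of the topic-0 matching described above, combined with the Indexed Argument Filtering feature from `specVersion` `1.2.0` (event signature, handler name, and address are illustrative, not from the example contract):

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleDirectedTransfer
    # Only trigger when topic 1 (the first indexed argument) matches this address.
    topic1: ['0x0000000000000000000000000000000000000000']
```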
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings:

To retain a specific amount of historical data:

@@ -532,3 +532,18 @@ To preserve the complete history of entity states:

indexerHints:
prune: never
```
+
+## SpecVersion Releases
+
+| Version | Release notes |
+| :-: | --- |
+| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/uk/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/uk/subgraphs/developing/creating/unit-testing-framework.mdx
index 78df2c601459..0437eaabff1e 100644
--- a/website/src/pages/uk/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/uk/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -2,12 +2,12 @@
title: Unit Testing Framework
---

-Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs.
+Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs.

## Benefits of Using Matchstick

- It's written in Rust and optimized for high performance.
-- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more.

## Getting Started

@@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra

### Using Matchstick

-To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] <datasource>` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
+To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] <datasource>` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 
👏

-Now in order to run our tests you simply need to run the following in your subgraph root folder:
+Now in order to run our tests you simply need to run the following in your Subgraph root folder:

`graph test Gravity`

@@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri

Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.

-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below:

`.test.ts` file:

@@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
import { ipfs } from '@graphprotocol/graph-ts'
import { gravatarFromIpfs } from './utils'

-// Export ipfs.map() callback in order for matchstck to detect it
+// Export ipfs.map() callback in order for matchstick to detect it
export { processGravatar } from './utils'

test('ipfs.cat', () => {
@@ -1172,7 +1172,7 @@ templates:
network: mainnet
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/token-lock-wallet.ts
handler: handleMetadata
@@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {

## Test Coverage

-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Додаткові матеріали -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/uk/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/uk/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/uk/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/uk/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
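To make the templating approach above concrete, a per-network config file consumed by Mustache or Handlebars might look like the sketch below. The field names and the zero address are illustrative placeholders; they only need to match whatever `{{...}}` variables your own manifest template uses:

```json
{
  "network": "sepolia",
  "address": "0x0000000000000000000000000000000000000000"
}
```

A prepare-style script, as suggested in the section above, would then render `subgraph.yaml` from the template with these values before running the deploy command.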
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected with this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. 
However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/uk/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/uk/subgraphs/developing/deploying/using-subgraph-studio.mdx index db3f790fdfe6..2b4c1c11efa0 100644 --- a/website/src/pages/uk/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/uk/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
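For reference, the index-node status query described in the section on checking Subgraph health typically takes a shape like the following sketch. The Subgraph name is a placeholder, and the exact fields should be confirmed against the index-node schema linked in that section:

```graphql
{
  indexingStatusForCurrentVersion(subgraphName: "org/subgraph") {
    synced
    health
    fatalError {
      message
      block {
        number
        hash
      }
      handler
    }
    chains {
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}
```

Comparing `latestBlock` with `chainHeadBlock` under `chains` shows whether the Subgraph is lagging behind the chain head.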
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs
+> Important: You need an API key to query Subgraphs

### How to Create a Subgraph in Subgraph Studio

@@ -57,31 +57,25 @@

### Subgraph Compatibility with The Graph Network

-In order to be supported by Indexers on The Graph Network, subgraphs must:
-
-- Index a [supported network](/supported-networks/)
-- Must not use any of the following features:
-  - ipfs.cat & ipfs.map
-  - Non-fatal errors
-  - Grafting
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.

## Initialize Your Subgraph

-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:

```bash
graph init <SUBGRAPH_SLUG>
```

-You can find the `<SUBGRAPH_SLUG>` value on your subgraph details page in Subgraph Studio, see image below:
+You can find the `<SUBGRAPH_SLUG>` value on your Subgraph details page in Subgraph Studio, see image below:

![Subgraph Studio - Slug](/img/doc-subgraph-slug.png)

-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/uk/subgraphs/developing/developer-faq.mdx b/website/src/pages/uk/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/uk/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/uk/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).
-### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings?
Not currently, as mappings are written in AssemblyScript.
@@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p
### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
-Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
+Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
### 10. How are templates different from data sources?
-Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
+Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address.
Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).
-### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other.
@@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest
If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique.
-### 15. Can I delete my subgraph?
+### 15. Can I delete my Subgraph?
-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph.
+Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph.
## Network Related
@@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul
Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
-### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync
+### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync
Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
-### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?
Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph:
@@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... }
### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
## Miscellaneous
diff --git a/website/src/pages/uk/subgraphs/developing/introduction.mdx b/website/src/pages/uk/subgraphs/developing/introduction.mdx
index 615b6cec4c9c..06bc2b76104d 100644
--- a/website/src/pages/uk/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/uk/subgraphs/developing/introduction.mdx
@@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin
On The Graph, you can:
-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing Subgraphs.
### What is GraphQL?
-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
### Developer Actions
-- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
-- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
-- Deploy, publish and signal your subgraphs within The Graph Network.
+- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your Subgraphs within The Graph Network.
-### What are subgraphs?
+### What are Subgraphs?
-A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
-Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
diff --git a/website/src/pages/uk/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/uk/subgraphs/developing/managing/deleting-a-subgraph.mdx
index 5a4ac15e07fd..b8c2330ca49d 100644
--- a/website/src/pages/uk/subgraphs/developing/managing/deleting-a-subgraph.mdx
+++ b/website/src/pages/uk/subgraphs/developing/managing/deleting-a-subgraph.mdx
@@ -2,30 +2,30 @@
title: Deleting a Subgraph
---
-Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/).
+Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/).
-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
+> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
## Step-by-Step
-1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
+1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
2. Click on the three-dots to the right of the "publish" button.
-3. Click on the option to "delete this subgraph":
+3. Click on the option to "delete this Subgraph":
![Delete-subgraph](/img/Delete-subgraph.png)
-4. Depending on the subgraph's status, you will be prompted with various options.
+4. Depending on the Subgraph's status, you will be prompted with various options.
-   - If the subgraph is not published, simply click “delete” and confirm.
-   - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
+   - If the Subgraph is not published, simply click “delete” and confirm.
+   - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
-> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner.
+> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner.
### Important Reminders
-- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
-- Curators will not be able to signal on the subgraph anymore.
-- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
-- Deleted subgraphs will show an error message.
+- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
+- Curators will not be able to signal on the Subgraph anymore.
+- Curators that already signaled on the Subgraph can withdraw their signal at an average share price.
+- Deleted Subgraphs will show an error message.
diff --git a/website/src/pages/uk/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/uk/subgraphs/developing/managing/transferring-a-subgraph.mdx
index 0fc6632cbc40..e80bde3fa6d2 100644
--- a/website/src/pages/uk/subgraphs/developing/managing/transferring-a-subgraph.mdx
+++ b/website/src/pages/uk/subgraphs/developing/managing/transferring-a-subgraph.mdx
@@ -2,18 +2,18 @@
title: Transferring a Subgraph
---
-Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
## Reminders
-- Whoever owns the NFT controls the subgraph.
-- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
-- You can easily move control of a subgraph to a multi-sig.
-- A community member can create a subgraph on behalf of a DAO.
+- Whoever owns the NFT controls the Subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network.
+- You can easily move control of a Subgraph to a multi-sig.
+- A community member can create a Subgraph on behalf of a DAO.
## View Your Subgraph as an NFT
-To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
```
https://opensea.io/your-wallet-address
@@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres
## Step-by-Step
-To transfer ownership of a subgraph, do the following:
+To transfer ownership of a Subgraph, do the following:
1. Use the UI built into Subgraph Studio:
![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png)
-2. Choose the address that you would like to transfer the subgraph to:
+2. Choose the address that you would like to transfer the Subgraph to:
![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
diff --git a/website/src/pages/uk/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/uk/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index dca943ad3152..2bc0ec5f514c 100644
--- a/website/src/pages/uk/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/uk/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -1,10 +1,11 @@
---
title: Publishing a Subgraph to the Decentralized Network
+sidebarTitle: Publishing to the Decentralized Network
---
-Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
+Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
-When you publish a subgraph to the decentralized network, you make it available for:
+When you publish a Subgraph to the decentralized network, you make it available for:
- [Curators](/resources/roles/curating/) to begin curating it.
- [Indexers](/indexing/overview/) to begin indexing it.
@@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/).
1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard
2. Click on the **Publish** button
-3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
+3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
-All published versions of an existing subgraph can:
+All published versions of an existing Subgraph can:
- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/).
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published.
+- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published.
-### Updating metadata for a published subgraph
+### Updating metadata for a published Subgraph
-- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
+- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer.
- It's important to note that this process will not create a new version since your deployment has not changed.
## Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
+As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
1. Open the `graph-cli`.
2. Use the following commands: `graph codegen && graph build` then `graph publish`.
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.
![cli-ui](/img/cli-ui.png)
### Customizing your deployment
-You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags:
+You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags:
```
USAGE
@@ -61,33 +61,33 @@ FLAGS
```
-## Adding signal to your subgraph
+## Adding signal to your Subgraph
-Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph.
+Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph.
-- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
+- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
-- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
- Specific supported networks can be checked [here](/supported-networks/).
-> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers.
+> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers.
>
-> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph.
+> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.
-![Explorer subgraphs](/img/explorer-subgraphs.png)
+![Explorer Subgraphs](/img/explorer-subgraphs.png)
-Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published.
![Curation Pool](/img/curate-own-subgraph-tx.png)
-Alternatively, you can add GRT signal to a published subgraph from Graph Explorer.
+Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer.
![Signal from Explorer](/img/signal-from-explorer.png)
diff --git a/website/src/pages/uk/subgraphs/developing/subgraphs.mdx b/website/src/pages/uk/subgraphs/developing/subgraphs.mdx
index 598118d01b51..2e99661e5d5d 100644
--- a/website/src/pages/uk/subgraphs/developing/subgraphs.mdx
+++ b/website/src/pages/uk/subgraphs/developing/subgraphs.mdx
@@ -4,83 +4,83 @@ title: Підграфи
## What is a Subgraph?
-A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
+A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
### Subgraph Capabilities
- **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3.
-- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/).
-- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
+- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/).
+- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
## Inside a Subgraph
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema
-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).
## Subgraph Lifecycle
-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:
![Subgraph Lifecycle](/img/subgraph-lifecycle.png)
## Subgraph Development
-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. [Create a Subgraph](/developing/creating-a-subgraph/)
+2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/)
+3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
+4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
+5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
### Build locally
-Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs.
+Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs.
### Deploy to Subgraph Studio
-Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
+Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
-- Use its staging environment to index the deployed subgraph and make it available for review.
-- Verify that your subgraph doesn't have any indexing errors and works as expected.
+- Use its staging environment to index the deployed Subgraph and make it available for review.
+- Verify that your Subgraph doesn't have any indexing errors and works as expected.
### Publish to the Network
-When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
+When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
-- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers.
-- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
-- Published subgraphs have associated metadata, which provides other network participants with useful context and information.
+- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers.
+- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
+- Published Subgraphs have associated metadata, which provides other network participants with useful context and information.
### Add Curation Signal for Indexing
-Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
+Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
#### What is signal?
-- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume.
+- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
+- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume.
### Querying & Application Development
Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/).
-Learn more about [querying subgraphs](/subgraphs/querying/introduction/).
+Learn more about [querying Subgraphs](/subgraphs/querying/introduction/).
### Updating Subgraphs
-To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
+To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
-- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax.
-- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
+- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax.
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying.
### Deleting & Transferring Subgraphs
-If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
+If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
diff --git a/website/src/pages/uk/subgraphs/explorer.mdx b/website/src/pages/uk/subgraphs/explorer.mdx
index 1df1c1675ab0..fd1259ea78b1 100644
--- a/website/src/pages/uk/subgraphs/explorer.mdx
+++ b/website/src/pages/uk/subgraphs/explorer.mdx
@@ -2,11 +2,11 @@
title: Graph Explorer
---
-Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
## Overview
-Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
## Inside Explorer
@@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi
### Subgraphs Page
-After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
-- Your own finished subgraphs
+- Your own finished Subgraphs
- Subgraphs published by others
-- The exact subgraph you want (based on the date created, signal amount, or name).
+- The exact Subgraph you want (based on the date created, signal amount, or name).
![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)
-When you click into a subgraph, you will be able to do the following:
+When you click into a Subgraph, you will be able to do the following:
- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
-  - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+  - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Наявність/відсутність сигналу на підграфах +- Signal/Un-signal on Subgraphs - Перегляд додаткових відомостей, таких як діаграми, поточний ID розгортання та інші ключові параметри -- Перемикання версій для дослідження минулих ітерацій підграфа -- Запит до підграфів через GraphQL -- Тестування підграфів в інтерактивному середовищі -- Перегляд індексаторів, які індексують певний підграф +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Статистика підграфів (розподіл коштів, куратори тощо) -- Переглянути особу, яка опублікувала підграф +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraphs Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics: @@ -223,13 +223,13 @@ Keep in mind that this chart is horizontally scrollable, so if you scroll all th ### Curating Tab -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. Within this tab, you’ll find an overview of: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Updated at date details ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/uk/subgraphs/guides/_meta.js b/website/src/pages/uk/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/uk/subgraphs/guides/_meta.js +++ b/website/src/pages/uk/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/uk/subgraphs/guides/arweave.mdx b/website/src/pages/uk/subgraphs/guides/arweave.mdx index 08e6c4257268..3fe39f3a2575 100644 --- a/website/src/pages/uk/subgraphs/guides/arweave.mdx +++ b/website/src/pages/uk/subgraphs/guides/arweave.mdx @@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## Визначення маніфесту підграфів The Subgraph manifest 
`subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: @@ -92,12 +92,12 @@ Arweave data sources support two types of handlers: - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` > The source.owner can be the owner's address, or their Public Key. - +> > Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. -## Schema Definition +## Визначення схеми Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). @@ -162,7 +162,7 @@ graph deploy --access-token The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## Приклади підграфів Here is an example Subgraph for reference: diff --git a/website/src/pages/uk/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/uk/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..ab5076c5ebf4 100644 --- a/website/src/pages/uk/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/uk/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. 
It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Overview -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,47 +18,59 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. 
-## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +To list the chains you've added, run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress @@ -66,11 +82,11 @@ or cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains.
+cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/uk/subgraphs/guides/enums.mdx b/website/src/pages/uk/subgraphs/guides/enums.mdx index 9f55ae07c54b..195d3bb7ee84 100644 --- a/website/src/pages/uk/subgraphs/guides/enums.mdx +++ b/website/src/pages/uk/subgraphs/guides/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## Додаткові матеріали For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). diff --git a/website/src/pages/uk/subgraphs/guides/grafting.mdx b/website/src/pages/uk/subgraphs/guides/grafting.mdx index d9abe0e70d2a..b089da22af78 100644 --- a/website/src/pages/uk/subgraphs/guides/grafting.mdx +++ b/website/src/pages/uk/subgraphs/guides/grafting.mdx @@ -1,46 +1,46 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: Замініть контракт та збережіть його історію за допомогою графтингу --- In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. -## What is Grafting? +## Що таке Grafting? Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. 
This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Додає або видаляє типи елементів +- Видаляє атрибути з типів елементів +- Додає до типів об'єктів атрибути, які можна скасувати +- Перетворює атрибути, які не можна скасувати, на атрибути, які можна скасувати +- Додає значення до переліків +- Додає або видаляє інтерфейси +- Визначає, для яких типів елементів реалізовано інтерфейс -For more information, you can check: +Для отримання додаткової інформації ви можете ознайомитися: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. -## Important Note on Grafting When Upgrading to the Network +## Важливе зауваження щодо графтингу при оновленні в мережі > **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network -### Why Is This Important? +### Чому це так важливо? 
Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. -### Best Practices +### Найкращі практики **Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. **Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. -By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +Дотримуючись цих рекомендацій, ви мінімізуєте ризики та забезпечите безперешкодний процес міграції. -## Building an Existing Subgraph +## Побудова наявного підграфа Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: @@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h > Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). -## Subgraph Manifest Definition +## Визначення маніфесту підграфів The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: @@ -83,7 +83,7 @@ dataSources: - The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. 
In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. -## Grafting Manifest Definition +## Визначення Grafting Manifest Grafting requires adding two new items to the original Subgraph manifest: @@ -101,7 +101,7 @@ graft: The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting -## Deploying the Base Subgraph +## Розгортання базового підграфа 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` 2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +Це повертає щось на зразок цього: ``` { @@ -140,9 +140,9 @@ It returns something like this: Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. -## Deploying the Grafting Subgraph +## Розгортання підграфів для графтингу -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +При цьому процесі підрозділ subgraph.yaml матиме нову адресу контракту. Це може статися, коли ви оновлюєте децентралізований додаток, перерозподіляєте контракт тощо. 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` 2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. 
The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +Це має повернути наступне: ``` { @@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph- Congrats! You have successfully grafted a Subgraph onto another Subgraph. -## Additional Resources +## Додаткові матеріали If you want more experience with grafting, here are a few examples for popular contracts: diff --git a/website/src/pages/uk/subgraphs/guides/near.mdx b/website/src/pages/uk/subgraphs/guides/near.mdx index e78a69eb7fa2..792c41180f20 100644 --- a/website/src/pages/uk/subgraphs/guides/near.mdx +++ b/website/src/pages/uk/subgraphs/guides/near.mdx @@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -### Subgraph Manifest Definition +### Визначення маніфесту підграфів The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: @@ -90,7 +90,7 @@ NEAR data sources support two types of handlers: - `blockHandlers`: run on every new NEAR block. No `source.account` is required. - `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). 
-### Schema Definition +### Визначення схеми Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). @@ -191,7 +191,7 @@ $ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Додаткові матеріали + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
diff --git a/website/src/pages/uk/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/uk/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..6563dc785fc6 100644 --- a/website/src/pages/uk/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/uk/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### Додаткові матеріали - To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). - To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
diff --git a/website/src/pages/uk/subgraphs/querying/best-practices.mdx b/website/src/pages/uk/subgraphs/querying/best-practices.mdx index f62d0540130d..a905e77e8adb 100644 --- a/website/src/pages/uk/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/uk/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Найкращі практики виконання запитів The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Робота з кросс-чейн підграфами: Отримання інформації з декількох підграфів за один запит +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Повністю введений результат @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/uk/subgraphs/querying/from-an-application.mdx b/website/src/pages/uk/subgraphs/querying/from-an-application.mdx index a83bf2860737..e52779e9d2f5 100644 --- a/website/src/pages/uk/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/uk/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Отримання запиту з додатка +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. @@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Робота з кросс-чейн підграфами: Отримання інформації з декількох підграфів за один запит +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Повністю введений результат @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Step 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Step 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Step 1 diff --git a/website/src/pages/uk/subgraphs/querying/graph-client/README.md b/website/src/pages/uk/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/uk/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/uk/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/uk/subgraphs/querying/graphql-api.mdx b/website/src/pages/uk/subgraphs/querying/graphql-api.mdx index b3003ece651a..e10201771989 100644 --- a/website/src/pages/uk/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/uk/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. 
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. 
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/uk/subgraphs/querying/introduction.mdx b/website/src/pages/uk/subgraphs/querying/introduction.mdx index 4e9b3712d89b..e8559e2b6b2e 100644 --- a/website/src/pages/uk/subgraphs/querying/introduction.mdx +++ b/website/src/pages/uk/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/uk/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/uk/subgraphs/querying/managing-api-keys.mdx index 26ab619d9279..53040c392ef4 100644 --- a/website/src/pages/uk/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/uk/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: Управління API-ключами +title: Managing API keys --- ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Кількість витрачених GRT 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Переглядати доменні імена, яким дозволено використовувати ваш API-ключ та керувати цими іменами - - Призначати підграфи, з яких можна отримувати запити за допомогою API-ключа + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/uk/subgraphs/querying/python.mdx b/website/src/pages/uk/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/uk/subgraphs/querying/python.mdx +++ b/website/src/pages/uk/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. 
@@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/uk/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/uk/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/uk/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/uk/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. 
+When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. 
Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/uk/subgraphs/quick-start.mdx b/website/src/pages/uk/subgraphs/quick-start.mdx index 6e8f79c5e4ec..8ed3c12ffaa9 100644 --- a/website/src/pages/uk/subgraphs/quick-start.mdx +++ b/website/src/pages/uk/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Швидкий старт --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. 
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Встановлення Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-На наступному скриншоті ви можете побачити, чого варто очікувати при ініціалізації вашого підграфа: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. 
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Як тільки ваш підграф буде написаний, виконайте наступні команди: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. 
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/uk/substreams/developing/dev-container.mdx b/website/src/pages/uk/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/uk/substreams/developing/dev-container.mdx +++ b/website/src/pages/uk/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/uk/substreams/developing/sinks.mdx b/website/src/pages/uk/substreams/developing/sinks.mdx index 5f6f9de21326..48c246201e8f 100644 --- a/website/src/pages/uk/substreams/developing/sinks.mdx +++ b/website/src/pages/uk/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
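As an illustration of the paragraph above, a standard GraphQL query against a Subgraph's endpoint might look like the following sketch. The `transfers` entity and its fields are hypothetical, since every Subgraph defines its own schema; `first`, `orderBy`, and `orderDirection` are standard Graph query arguments.

```graphql
# Hypothetical entity — real field names depend on the Subgraph's schema.
{
  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
    id
    from
    to
    amount
  }
}
```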
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/uk/substreams/developing/solana/account-changes.mdx index a282278c7d91..8c821acaee3f 100644 --- a/website/src/pages/uk/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/uk/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes). 
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and run `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/uk/substreams/developing/solana/transactions.mdx index 7010231d668c..e1b9e7ddec40 100644 --- a/website/src/pages/uk/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/uk/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. 
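Step 2 above mentions defining entities in `schema.graphql`; as a minimal sketch, an entity definition might look like the following. The type and field names here are illustrative only, not taken from any generated project — your actual entities should mirror the data your Substreams modules emit.

```graphql
# Illustrative entity for schema.graphql — names are hypothetical.
type Transfer @entity(immutable: true) {
  id: ID!
  sender: String!
  receiver: String!
  amount: BigInt!
}
```

Each field declared here becomes queryable once the corresponding entity is created or updated from `mappings.ts`.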
### SQL diff --git a/website/src/pages/uk/substreams/introduction.mdx b/website/src/pages/uk/substreams/introduction.mdx index 4d44d2350ef1..65e5099fc565 100644 --- a/website/src/pages/uk/substreams/introduction.mdx +++ b/website/src/pages/uk/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/uk/substreams/publishing.mdx b/website/src/pages/uk/substreams/publishing.mdx index ea8494efcb1e..a8886140152a 100644 --- a/website/src/pages/uk/substreams/publishing.mdx +++ b/website/src/pages/uk/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! 
You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/uk/supported-networks.mdx b/website/src/pages/uk/supported-networks.mdx index 637174e4f000..6cac2ffa4bac 100644 --- a/website/src/pages/uk/supported-networks.mdx +++ b/website/src/pages/uk/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. 
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/uk/token-api/_meta-titles.json b/website/src/pages/uk/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/uk/token-api/_meta-titles.json +++ b/website/src/pages/uk/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/uk/token-api/_meta.js b/website/src/pages/uk/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/uk/token-api/_meta.js +++ b/website/src/pages/uk/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/uk/token-api/faq.mdx b/website/src/pages/uk/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/uk/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? 
+ +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. 
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. 
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/uk/token-api/mcp/claude.mdx index 0da8f2be031d..12a036b6fc24 100644 --- a/website/src/pages/uk/token-api/mcp/claude.mdx +++ b/website/src/pages/uk/token-api/mcp/claude.mdx @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/uk/token-api/mcp/cline.mdx index ab54c0c8f6f0..ef98e45939fe 100644 --- a/website/src/pages/uk/token-api/mcp/cline.mdx +++ b/website/src/pages/uk/token-api/mcp/cline.mdx @@ -10,7 +10,7 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) ## Configuration diff --git a/website/src/pages/uk/token-api/quick-start.mdx b/website/src/pages/uk/token-api/quick-start.mdx index 4653c3d41ac6..69a5a4d298d3 100644 --- a/website/src/pages/uk/token-api/quick-start.mdx +++ b/website/src/pages/uk/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Швидкий старт --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/ur/about.mdx b/website/src/pages/ur/about.mdx index 75dd5e34c6e0..d737a0994ad9 100644 --- a/website/src/pages/ur/about.mdx +++ b/website/src/pages/ur/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. 
Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. 
-The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![ایک گرافک یہ بتاتا ہے کہ گراف کس طرح ڈیٹا صارفین کو کیوریز پیش کرنے کے لیے گراف نوڈ کا استعمال کرتا ہے](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte 1. ایک ڈیپ سمارٹ کنٹریکٹ پر ٹرانزیکشن کے ذریعے سے ایتھیریم میں ڈیٹا کا اضافہ کرتی ہے. 2. سمارٹ کنٹریکٹ ٹرانزیکشن پر کارروائی کے دوران ایک یا ایک سے زیادہ واقعات کا اخراج کرتا ہے. -3. گراف نوڈ ایتھیریم کو نئے بلاکس اور آپ کے سب گراف کے ڈیٹا کے لیے مسلسل سکین کرتا ہے. -4. گراف نوڈ ان بلاکس میں آپ کے سب گراف کے لیے ایتھریم ایونٹس تلاش کرتا ہے اور آپ کے فراہم کردہ میپنگ ہینڈلرز کو چلاتا ہے. میپنگ ایک WASM ماڈیول ہے جو ڈیٹا ہستیوں کو تخلیق یا اپ ڈیٹ کرتا ہے جو ایتھیریم ایونٹس کے جواب میں گراف نوڈ ذخیرہ کرتا ہے. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. ڈیپ بلاکچین سے انڈیکس کردہ ڈیٹا کے لیے گراف نوڈ کو کیوری کرتی ہے, نوڈ کے [GraphQL اینڈ پوائنٹ](https://graphql.org/learn/) کا استعمال کرتے ہوئے. گراف نوڈ بدلے میں اس ڈیٹا کو حاصل کرنے کے لیے GraphQL کی کیوریز کو اپنے بنیادی ڈیٹا اسٹور کی کیوریز میں تبدیل کرتا ہے, سٹور کی انڈیکسنگ کی صلاحیتوں کا استعمال کرتے ہوئے. ڈیسینٹرلائزڈ ایپلیکیشن اس ڈیٹا کو صارفین کے لیے ایک بھرپور UI میں دکھاتی ہے, جسے وہ ایتھیریم پر نئی ٹرانزیکشنز جاری کرنے کے لیے استعمال کرتے ہیں. یہ سلسلہ دہرایا جاتا ہے. ## اگلے مراحل -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. 
+The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/ur/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ur/archived/arbitrum/arbitrum-faq.mdx index c51d33e4e16c..1483cf6b2a4e 100644 --- a/website/src/pages/ur/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/ur/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - سیکیورٹی ایتھیریم سے وراثت میں ملی -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. 
Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. گراف کمیونٹی نے [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) بحث کے نتائج کے بعد گزشتہ سال Arbitrum کے ساتھ آگے بڑھنے کا فیصلہ کیا۔ @@ -39,7 +39,7 @@ L2 پر گراف استعمال کرنے کا فائدہ اٹھانے کے لی ![Arbitrum کو ٹوگل کرنے کے لیے ڈراپ ڈاؤن سویچر](/img/arbitrum-screenshot-toggle.png) -## بطور سب گراف ڈویلپر، ڈیٹا کنزیومر، انڈیکسر، کیوریٹر، یا ڈیلیگیٹر، مجھے اب کیا کرنے کی ضرورت ہے؟ +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto ہر چیز کی اچھی طرح جانچ کی گئی ہے، اور ایک محفوظ اور ہموار منتقلی کو یقینی بنانے کے لیے ایک ہنگامی منصوبہ تیار کیا گیا ہے۔ تفصیلات دیکھی جا سکتی ہیں [یہاں](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? 
diff --git a/website/src/pages/ur/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ur/archived/arbitrum/l2-transfer-tools-faq.mdx index ce46b35ce79b..466aa1cc8f3f 100644 --- a/website/src/pages/ur/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/ur/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -25,9 +25,9 @@ The exception is with smart contract wallets like multisigs: these are smart con L2 ٹرانسفر ٹول L1 کو پیغامات بھیجنے کے لیے Arbitrum کا مقامی طریقہ استعمال کرتے ہیں۔ اس طریقہ کار کو "ریٹری ایبل ٹکٹ" کہا جاتا ہے اور اس کا استعمال تمام مقامی ٹوکن برجز بشمول Arbitrum GRT بریج کے ذریعے کیا جاتا ہے۔ آپ دوبارہ قابل کوشش ٹکٹوں کے بارے میں مزید پڑھ سکتے ہیں [Arbitrum دستاویزات](https://docs.arbitrum.io/arbos/l1-to-l2-messaging) میں۔ -جب آپ اپنے اثاثے (سب گراف، سٹیک، ڈیلیگیشن یا کیوریشن) L2 پر منتقل کرتے ہیں، تو Arbitrum GRT بریج کے ذریعے ایک پیغام بھیجا جاتا ہے جو L2 میں دوبارہ ریٹری ایبل ٹکٹ بناتا ہے۔ ٹرانسفر ٹول میں ٹرانزیکشن میں کچھ ایتھیریم ویلیو شامل ہوتی ہے، جس کا استعمال 1) ٹکٹ بنانے کے لیے ادائیگی اور 2) L2 میں ٹکٹ کو انجام دینے کے لیے گیس کی ادائیگی کے لیے کیا جاتا ہے۔ تاہم، چونکہ L2 میں ٹکٹ کے مکمل ہونے کے لیے تیار ہونے تک گیس کی قیمتیں مختلف ہو سکتی ہیں، اس لیے یہ ممکن ہے کہ خودکار طریقے سے عمل درآمد کی یہ کوشش ناکام ہو جائے۔ جب ایسا ہوتا ہے، تو Arbitrum بریج دوبارہ کوشش کے قابل ٹکٹ کو 7 دنوں تک زندہ رکھے گا، اور کوئی بھی ٹکٹ کو "چھڑانے" کی دوبارہ کوشش کر سکتا ہے (جس کے لیے Arbitrum کے لیے کچھ ایتھیریم والے والیٹ کی ضرورت ہوتی ہے)۔ +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. 
When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -اسے ہم منتقلی کے تمام ٹولز میں "تصدیق" مرحلہ کہتے ہیں - یہ زیادہ تر معاملات میں خود بخود چلے گا، کیونکہ خود کار طریقے سے عمل اکثر کامیاب ہوتا ہے، لیکن یہ ضروری ہے کہ آپ اس بات کو یقینی بنانے کے لیے دوبارہ چیک کریں۔ اگر یہ کامیاب نہیں ہوتا ہے اور 7 دنوں میں کوئی کامیاب کوشش نہیں ہوتی ہے، تو Arbitrum بریج ٹکٹ کو رد کر دے گا، اور آپ کے اثاثے (سب گراف، سٹیک، ڈیلیگیشن یا کیوریشن) ضائع ہو جائیں گے اور بازیافت نہیں ہو سکیں گے۔ گراف کور ڈویلپرز کے پاس ان حالات کا پتہ لگانے کے لیے ایک نگرانی کا نظام موجود ہے اور بہت دیر ہونے سے پہلے ٹکٹوں کو چھڑانے کی کوشش کریں، لیکن یہ یقینی بنانا آپ کی ذمہ داری ہے کہ آپ کی منتقلی بروقت مکمل ہو جائے۔ اگر آپ کو اپنے ٹرانزیکشن کی تصدیق کرنے میں دشواری ہو رہی ہے، تو براہ کرم [اس فارم](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) اور کور ڈویلپرز کا استعمال کرتے ہوئے رابطہ کریں۔ وہاں آپ کی مدد ہو گی. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.
### میں نے اپنا ڈیلیگیشن/سٹیک/کیوریشن کی منتقلی شروع کی ہے اور مجھے یقین نہیں ہے کہ آیا یہ L2 تک پہنچا ہے، میں کیسے تصدیق کر سکتا ہوں کہ اسے صحیح طریقے سے منتقل کیا گیا تھا؟ @@ -37,43 +37,43 @@ L2 ٹرانسفر ٹول L1 کو پیغامات بھیجنے کے لیے Arbitru ## سب گراف منتقلی -### میں اپنا سب گراف کیسے منتقل کروں؟ +### How do I transfer my Subgraph? -اپنے سب گراف کو منتقل کرنے کے لیے، آپ کو درج ذیل مراحل کو مکمل کرنے کی ضرورت ہو گی: +To transfer your Subgraph, you will need to complete the following steps: 1. ایتھیریم مین نیٹ پر منتقلی شروع کریں 2. تصدیق کے لیے 20 منٹ انتظار کریں -3. Arbitrum پر سب گراف منتقلی کی تصدیق کریں\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Arbitrum پر سب گراف کی اشاعت مکمل کریں +4. Finish publishing Subgraph on Arbitrum 5. کیوری لنک اپ ڈیٹ کریں (تجویز کردہ) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). 
### مجھے اپنی منتقلی کہاں سے شروع کرنی چاہیے؟ -آپ اپنی منتقلی کو [سب گراف سٹوڈیو](https://thegraph.com/studio/)، [ایکسپلورر](https://thegraph.com/explorer) یا کسی بھی سب گراف کی تفصیلات کے پیج سے شروع کر سکتے ہیں۔ منتقلی شروع کرنے کے لیے سب گراف کی تفصیلات کے پیج میں "سب گراف منتقل کریں" بٹن کلک کریں۔ +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### میرا سب گراف منتقل ہونے تک مجھے کتنا انتظار کرنا پڑے گا +### How long do I need to wait until my Subgraph is transferred? منتقلی کا وقت تقریباً 20 منٹ لگتا ہے۔ Arbitrum بریج کی منتقلی کو خود بخود مکمل کرنے کے لیے پس منظر میں کام کر رہا ہے۔ کچھ معاملات میں، گیس کی قیمتیں بڑھ سکتی ہیں اور آپ کو دوبارہ ٹرانزیکشن کی تصدیق کرنی ہوگی. -### کیا میرا سب گراف L2 میں منتقل کرنے کے بعد بھی قابل دریافت ہو گا؟ +### Will my Subgraph still be discoverable after I transfer it to L2? -آپ کا سب گراف صرف اس نیٹ ورک پر قابل دریافت ہوگا جس پر اسے شائع کیا گیا ہے۔ مثال کے طور پر، اگر آپ کا سب گراف Arbitrum One پر ہے، تو آپ اسے صرف Arbitrum One پر ایکسپلورر میں تلاش کر سکتے ہیں اور اسے ایتھیریم پر تلاش نہیں کر پائیں گے۔ براہ کرم یقینی بنائیں کہ آپ نے نیٹ ورک سوئچر میں پیج کے اوپری حصے میں Arbitrum One کا انتخاب کیا ہے تاکہ یہ یقینی بنایا جا سکے کہ آپ درست نیٹ ورک پر ہیں۔ منتقلی کے بعد، L1 سب گراف فرسودہ کے طور پر ظاہر ہوگا. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.
-### کیا میرے سب گراف کو منتقل کرنے کے لیے اسے شائع کرنے کی ضرورت ہے؟ +### Does my Subgraph need to be published to transfer it? -سب گراف ٹرانسفر ٹول سے فائدہ اٹھانے کے لیے، آپ کا سب گراف پہلے سے ہی ایتھیریم مین نیٹ پر شائع ہونا چاہیے اور اس میں کچھ کیوریشن سگنل ہونا چاہیے جو والیٹ کی ملکیت ہے جو سب گراف کا مالک ہے۔ اگر آپ کا سب گراف شائع نہیں ہوا ہے، تو یہ تجویز کیا جاتا ہے کہ آپ براہ راست Arbitrum One پر شائع کریں - متعلقہ گیس کی فیسیں کافی کم ہوں گی۔ اگر آپ شائع شدہ سب گراف کو منتقل کرنا چاہتے ہیں لیکن مالک کے اکاؤنٹ نے اس پر کوئی سگنل کیوریٹ نہیں کیا ہے، تو آپ اس اکاؤنٹ سے ایک چھوٹی رقم (جیسے 1 GRT) کا اشارہ دے سکتے ہیں۔ یقینی بنائیں کہ "خودکار منتقلی" سگنل کا انتخاب کریں. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### میرے سب گراف کے ایتھیریم مین نیٹ ورزن کا کیا ہوتا ہے جب میں Arbitrum میں منتقل ہو جاتا ہوں؟ +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -آپ کے سب گراف کو Arbitrum میں منتقل کرنے کے بعد، ایتھیریم مین نیٹ ورزن فرسودہ ہو جائے گا۔ ہمارا مشورہ ہے کہ آپ 48 گھنٹوں کے اندر اپنے کیوری کے لنک کو اپ ڈیٹ کریں۔ تاہم، ایک رعایتی مدت موجود ہے جو آپ کے مین نیٹ لنک کو کام میں لاتی رہتی ہے تاکہ کسی بھی فریق ثالث ڈیپ سپورٹ کو اپ ڈیٹ کیا جا سکے۔ +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. 
However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### میری منتقلی کے بعد، کیا مجھے بھی Arbitrum پر دوبارہ شائع کرنے کی ضرورت ہے؟ @@ -81,21 +81,21 @@ L2 ٹرانسفر ٹول L1 کو پیغامات بھیجنے کے لیے Arbitru ### کیا دوبارہ شائع کرنے کے دوران میرا اینڈ پوائنٹ ڈاؤن ٹائم کا تجربہ کرے گا؟ -اس بات کا امکان نہیں ہے، لیکن اس بات پر منحصر ہے کہ انڈیکسرز L1 پر سب گراف کو سپورٹ کر رہے ہیں اور کیا وہ اس کو انڈیکس کرتے رہیں گے جب تک کہ L2 پر سب گراف مکمل طور پر سپورٹ نہ ہو جائے، مختصر وقت کا تجربہ کرنا ممکن ہے۔ +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### کیا L2 پر اشاعت اور ورزن ایتھیریم مین نیٹ کی طرح ہے؟ -جی ہاں. سب گراف سٹوڈیو میں شائع کرتے وقت اپنے شائع شدہ نیٹ ورک کے طور پر Arbitrum One کو منتخب کریں۔ سٹوڈیو میں، تازہ ترین اختتامی نقطہ دستیاب ہوگا جو سب گراف کے تازہ ترین اپ ڈیٹ شدہ ورژن کی طرف اشارہ کرتا ہے۔ +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### کیا میرے سب گراف کا کیوریشن میرے سب گراف کے ساتھ منتقل ہو جائے گا؟ +### Will my Subgraph's curation move with my Subgraph? -اگر آپ نے خودکار منتقلی کے سگنل کا انتخاب کیا ہے، تو آپ کی اپنی کیوریشن کا 100% حصہ آپ کے سب گراف کے ساتھ Arbitrum One میں منتقل ہو جائے گا۔ منتقلی کے وقت سب گراف کے تمام کیوریشن سگنل کو GRT میں تبدیل کر دیا جائے گا، اور آپ کے کیوریشن سگنل کے مطابق GRT L2 سب گراف پر سگنل منٹ کے لیے استعمال کیا جائے گا. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. 
-دوسرے کیوریٹرز یہ انتخاب کر سکتے ہیں کہ آیا GRT کا اپنا حصہ واپس لینا ہے، یا اسی سب گراف پر اسے L2 پر منٹ سگنل پر منتقل کرنا ہے۔ +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### کیا میں منتقلی کے بعد اپنے سب گراف کو واپس ایتھیریم مین نیٹ پر منتقل کر سکتا ہوں؟ +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -ایک بار منتقل ہونے کے بعد، اس سب گراف کا آپ کا ایتھیریم مین نیٹ ورزن فرسودہ ہو جائے گا۔ اگر آپ مین نیٹ پر واپس جانا چاہتے ہیں، تو آپ کو مین نیٹ پر دوبارہ تعینات اور شائع کرنے کی ضرورت ہوگی۔ تاہم، ایتھیریم مین نیٹ پر واپس منتقلی کی سختی سے حوصلہ شکنی کی جاتی ہے کیونکہ انڈیکسنگ انعامات بالآخر Arbitrum One پر مکمل طور پر تقسیم کیے جائیں گے. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### مجھے اپنی منتقلی مکمل کرنے کے لیے پریجڈ ایتھیریم کی ضرورت کیوں ہے؟ @@ -207,19 +207,19 @@ If you'd like to release GRT from the vesting contract, you can transfer them ba \*اگر ضروری ہو تو - یعنی آپ کنٹریکٹ ایڈریس استعمال کر رہے ہیں. -### مجھے کیسے پتہ چلے گا کہ میں نے جو سب گراف تیار کیا ہے وہ L2 میں چلا گیا ہے؟ +### How will I know if the Subgraph I curated has moved to L2? -سب گراف کی تفصیلات کا پیج دیکھتے وقت، ایک بینر آپ کو مطلع کرے گا کہ اس سب گراف کو منتقل کر دیا گیا ہے۔ آپ اپنے کیوریشن کو منتقل کرنے کے لیے پرامپٹ پر عمل کر سکتے ہیں۔ آپ یہ معلومات کسی بھی سب گراف کے سب گراف کی تفصیلات کے پیج پر بھی حاصل کر سکتے ہیں جو منتقل ہوا ہے۔ +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. 
### اگر میں اپنے کیوریشن کو L2 میں منتقل نہیں کرنا چاہتا تو کیا ہو گا؟ -جب سب گراف فرسودہ ہو جاتا ہے تو آپ کے پاس اپنا سگنل واپس لینے کا اختیار ہوتا ہے۔ اسی طرح، اگر کوئی سب گراف L2 میں منتقل ہو گیا ہے، تو آپ ایتھیریم مین نیٹ میں اپنے سگنل کو واپس لینے یا L2 کو سگنل بھیجنے کا انتخاب کر سکتے ہیں. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### میں کیسے جان سکتا ہوں کہ میری کیوریشن کامیابی سے منتقل ہو گئی ہے؟ L2 ٹرانسفر ٹول شروع ہونے کے تقریباً 20 منٹ بعد سگنل کی تفصیلات ایکسپلورر کے ذریعے قابل رسائی ہوں گی. -### کیا میں ایک وقت میں ایک سے زیادہ سب گراف پر اپنی کیوریشن منتقل کر سکتا ہوں؟ +### Can I transfer my curation on more than one Subgraph at a time? اس وقت بلک ٹرانسفر کا کوئی آپشن نہیں ہے. @@ -267,7 +267,7 @@ L2 ٹرانسفر ٹول کو آپ کے سٹیک کی منتقلی مکمل ک ### کیا مجھے اپنا سٹیک منتقل کرنے سے پہلے Arbitrum پر انڈیکس کرنا ہو گا؟ -آپ انڈیکسنگ کو ترتیب دینے سے پہلے مؤثر طریقے سے اپنا سٹیک منتقل کر سکتے ہیں، لیکن آپ L2 پر کسی بھی انعام کا دعویٰ نہیں کر سکیں گے جب تک کہ آپ L2 پر سب گرافس کے لیے مختص نہیں کر دیتے، ان کو انڈیکس نہیں کرتے اور POIs پیش نہیں کرتے. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. 
### کیا ڈیلیگیٹرز اپنے ڈیلیگیشن کو منتقل کر سکتے ہیں اس سے پہلے کہ میں اپنے انڈیکسنگ کا سٹیک منتقل کروں؟ diff --git a/website/src/pages/ur/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ur/archived/arbitrum/l2-transfer-tools-guide.mdx index 2099dcb22749..4684fb754f05 100644 --- a/website/src/pages/ur/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/ur/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ title: L2 ٹرانسفر ٹولز گائڈ Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## اپنے سب گراف کو Arbitrum (L2) میں کیسے منتقل کریں +## How to transfer your Subgraph to Arbitrum (L2) -## اپنے سب گرافس منتقل کرنے کے فوائد +## Benefits of transferring your Subgraphs گراف کی کمیونٹی اور بنیادی ڈویلپرز پچھلے سال سے Arbitrum پر جانے کے [تیار کر رہے ہیں](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)۔ Arbitrum، ایک لیئر 2 یا "L2" بلاکچین، ایتھیریم سے سیکورٹی وراثت میں ملتی ہے لیکن گیس کی فیس بہت کم فراہم کرتی ہے. -جب آپ گراف نیٹ ورک پر اپنا سب گراف شائع یا اپ گریڈ کرتے ہیں، آپ پروٹوکول پر سمارٹ کنٹریکٹ کے ساتھ تعامل کر رہے ہوتے ہیں اور اس کے لیے ایتھیریم کا استعمال کرتے ہوئے گیس کی ادائیگی کی ضرورت ہوتی ہے۔ اپنا سب گراف Arbitrum پر منتقل کر کے، آپ کے سب گراف کی آئندہ کسی بھی اپ ڈیٹ کے لیے گیس کی بہت کم فیس درکار ہو گی۔ کم فیس، اور حقیقت یہ ہے کہ L2 پر کیوریشن بانڈنگ منحنی خطوط فلیٹ ہیں، کیوریٹرز کے لیے آپ کے سب گراف پر کیوریٹ کرنا آسان بناتے ہیں، جس سے آپ کے سب گراف پر انڈیکسرز کے لیے انعامات بڑھ جاتے ہیں۔ یہ کم لاگت والا ماحول بھی انڈیکسرز کے لیے آپ کے سب گراف کو انڈیکس کرنا اور پیش کرنا سستا بناتا ہے۔ Arbitrum پر انڈیکسنگ کے انعامات بڑھیں گے اور آنے والے مہینوں میں ایتھیریم مین نیٹ پر کم ہوں گے، اس لیے زیادہ سے زیادہ انڈیکسرز اپنے حصص کو منتقل کر رہے ہوں گے اور L2 پر اپنی کارروائیاں ترتیب دیں گے. 
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## یہ سمجھنا کہ سگنل کے ساتھ کیا ہوتا ہے، آپ کا L1 سب گراف اور کیوری URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -سب گراف کو Arbitrum پر منتقل کرنا Arbitrum GRT بریج کا استعمال کرتا ہے، جو بدلے میں مقامی Arbitrum بریج استعمال کرتا ہے، سب گراف کو L2 پر بھیجنے کے لیے۔ "منتقلی" مین نیٹ پر سب گراف کو فرسودہ کر دے گی اور بریج کا استعمال کرتے ہوئے L2 پر سب گراف کو دوبارہ بنانے کے لیے معلومات بھیجے گی۔ اس میں سب گراف کے مالک کا سگنل شدہ GRT بھی شامل ہوگا، جو بریج کے لیے منتقلی کو قبول کرنے کے لیے صفر سے زیادہ ہونا چاہیے. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. 
-جب آپ سب گراف کو منتقل کرنے کا انتخاب کرتے ہیں، یہ تمام سب گراف کے کیوریشن سگنلز کو GRT میں تبدیل کر دے گا۔ یہ مین نیٹ پر سب گراف کو "فرسودہ" کرنے کے مترادف ہے۔ آپ کے کیوریشن کے مطابق GRT سب گراف کے ساتھ L2 کو بھیجا جائے گا، جہاں ان کا استعمال آپ کی جانب سے سگنل دینے کے لیے کیا جائے گا. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -دوسرے کیوریٹرز یہ انتخاب کر سکتے ہیں کہ آیا اپنے GRT کا حصہ لینا ہے، یا اسی سب گراف پر اسے L2 پر منٹ سگنل پر منتقل کرنا ہے۔ اگر ایک سب گراف کا مالک اپنا سب گراف L2 میں منتقل نہیں کرتا ہے اور اسے کنٹریکٹ کال کے ذریعے دستی طور پر فرسودہ کرتا ہے، تو کیوریٹرز کو مطلع کیا جائے گا اور وہ اپنا کیوریشن واپس لے سکیں گے. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -جیسے ہی سب گراف منتقل ہو جائے گا، چونکہ تمام کیوریشن GRT میں تبدیل ہو چکی ہے، انڈیکسر کو سب گراف کو انڈیکس کرنے کے لیے مزید انعامات نہیں ملیں گے۔ البتہ، ایسے انڈیکسرز ہوں گے جو 1) منتقل ہونے والے سب گرافس کو 24 گھنٹوں تک پیش کرتے رہیں گے، اور 2) فوری طور پر L2 پر سب گراف انڈیکسنگ شروع کر دیں گے۔ چونکہ ان کے انڈیکسرز کے پاس پہلے سے ہی انڈیکسڈ سب گراف موجود ہے، اس لیے سب گراف کے مطابقت پذیر ہونے کا انتظار کرنے کی ضرورت نہیں ہے، اور L2 سب گراف سے تقریبآٓ فورآٓ کیوری کرنا ممکن ہو گا. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. 
However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -L2 سب گراف سے متعلق کیوریز ایک مختلف لنک پر کرنے کی ضرورت ہوگی (`arbitrum-gateway.thegraph.com` پر)، لیکن L1 لنک کم از کم 48 گھنٹے تک کام کرتا رہے گا۔ اس کے بعد، L1 گیٹ وے کیوریز کو L2 گیٹ وے (کچھ وقت کے لیے) پر بھیجے گا، لیکن اس سے تاخیر میں اضافہ ہو جائے گا، اس لیے یہ تجویز کیا جاتا ہے کہ آپ اپنے تمام کیوریز کو جلد از جلد نئے لنک میں تبدیل کر دیں۔ +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## اپنا L2 والیٹ منتخب کرنا -کب آپ اپنا سب گراف مین نیٹ پر شائع کرتے ہیں، آپ نے اپنا سب گراف بنانے کے لیے کنیکٹڈ والیٹ کا استعمال کیا، اور یہ والیٹ NFT کا مالک ہے جو اس سب گراف کی نمائندگی کرتا ہے اور آپ کو اپ ڈیٹس شائع کرنے کی اجازت دیتا ہے۔ +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -جب سب گراف کو Arbitrum پر منتقل کر رہے ہوں، آپ مختلف والیٹ استعمال کر سکتے ہیں جو L2 پر اس سب گراف NFT کا مالک ہو گا۔ +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. 
اگر آپ میٹا ماسک کی طرح ایک عام والیٹ استعمال کر رہے ہیں (ایک بیرونی ملکیتی اکاؤنٹ یا EOA، یعنی ایک والیٹ جو سمارٹ کنٹریکٹ نہیں ہے)، تو یہ اختیاری ہے اور یہ نصیحت کی جاتی ہے کہ مالک کا وہی ایڈریس رکھا جائے جو L1 میں ہے۔ -اگر آپ سمارٹ کنٹریکٹ والیٹ کا استعمال کر رہے ہیں، جیسے کہ ملٹی سگ (مثال کے طور پر ایک تجوری)، پھر ایک مختلف والیٹ ایڈریس کا استعمال کرنا ضروری ہے، کیونکہ یہ زیادہ امکان ہیں کہ اکاؤنٹ صرف مین نیٹ پر ہو گا اور آپ Arbitrum پر اس والیٹ کا استعمال کرتے ہوئے ٹرانزیکشنز نہیں کر پائیں گے۔ اگر آپ سمارٹ کنٹریکٹ والیٹ یا ملٹی سگ کا استعمال جاری رکھنا چاہتے ہیں، Arbitrum پر نیا والیٹ بنائیں اور اس کا ایڈریس اپنے سب گراف کا L2 مالک ہونے کی حیثیت سے استعمال کریں۔ +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**یہ بہت ضروری ہے کہ آپ جو والیٹ ایڈریس استعمال کریں اس کا کنٹرول آپ کے پاس ہو، اور جو Arbitrum پر ٹرانزیکشنز کر سکے۔ ورنہ، سب گراف کھو جائے گا اور بازیافت نہیں ہو پائے گا۔** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. 
Otherwise, the Subgraph will be lost and cannot be recovered.** ## منتقلی کی تیاری: کچھ ایتھیریم بریج کرنا -سب گراف کی منتقلی میں بریج کے ذریعے ٹرانزیکشن بھیجنا شامل ہے، اور پھر Arbitrum پر ایک اور ٹرانزیکشن کو انجام دینا۔ پہلی ٹرانزیکشن مین نیٹ پر ایتھیریم کا استعمال کرتی ہے، اور L2 پر پیغام موصول ہونے پر گیس کی ادائیگی کے لیے کچھ ایتھیریم شامل کرتا ہے۔ تاہم، اگر یہ گیس ناکافی ہے، تو آپ کو ٹرانزیکشن کی دوبارہ کوشش کرنی ہوگی اور براہ راست L2 پر گیس کی ادائیگی کرنی ہوگی (یہ ذیل میں "مرحلہ 3: منتقلی کی تصدیق" ہے)۔ یہ مرحلہ **منتقلی شروع کرنے کے 7 دنوں کے اندر انجام دیا جانا چاہیے**۔ مزید یہ کہ، دوسری ٹرانزیکشن ("مرحلہ 4: L2 پر منتقلی کو ختم کرنا") براہ راست Arbitrum پر کیا جائے گا۔ ان وجوہات کی بناء پر، آپ کو Arbitrum والیٹ پر کچھ ایتھیریم کی ضرورت ہوگی۔ اگر آپ ملٹی سگ یا سمارٹ کنٹریکٹ اکاؤنٹ استعمال کر رہے ہیں، تو ایتھیریم کو باقاعدہ (EOA) والیٹ میں ہونا چاہیے جسے آپ ٹرانزیکشن کو انجام دینے کے لیے استعمال کر رہے ہیں، نہ کہ ملٹی سگ والیٹ پر۔ +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. 
آپ کچھ ایکسچینجیز سے ایتھیریم خرید سکتے ہیں اور سیدھا اسے Arbitrum میں مگوا سکتے ہیں، یا آپ ایتھیریم کو مین نیٹ والیٹ سے L2 پر Arbitrum بریج کا استعمال کرتے ہوئے کر سکتے ہیں: [bridge.arbitrum.io](http://bridge.arbitrum.io)۔ چونکہ Arbitrum پر گیس فیس کم ہوتے ہے، آپ کو صرف چھوٹی سی مقدار کی ضرورت پڑے گی۔ یہ تجویز کیا جاتا ہے کہ آپ اپنی ٹرانزیکشن کی منظوری کے لیے کم حد (مثال کے طور پر 0.01 ایتھیریم) سے شروع کریں۔ -## سب گراف ٹرانسفر ٹول تلاش کرنا +## Finding the Subgraph Transfer Tool -آپ L2 ٹرانسفر ٹول تلاش کر سکتے ہیں جب آپ سب گراف سٹوڈیو پر اپنا سب گراف کا پیج دیکھ رہے ہوں گے۔ +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![ٹرانسفر ٹول](/img/L2-transfer-tool1.png) -یہ ایکسپلورر پر بھی دستیاب ہے اگر آپ اس والیٹ سے کنیکٹڈ ہیں جس کے پاس سب گراف ہے اور ایکسپلورر پر اس سب گراف کے پیج پر: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![L2 پر منتقل کرنا](/img/transferToL2.png) @@ -60,19 +60,19 @@ L2 پر منتقل کرنے کے بٹن پر کلک کرنے سے ٹرانسفر ## مرحلہ 1: منتقلی شروع کرنا -منتقلی شروع کرنے سے پہلے، آپ کو یہ فیصلہ کرنا ہو گا کہ L2 پر کون سا ایڈریس سب گراف کا مالک ہو گا (اوپر "اپنے L2 والیٹ کا انتخاب" دیکھیں)، اور یہ پرزور مشورہ دیا جاتا ہے کہ Arbitrum پر پہلے سے ہی گیس کے لیے کچھ ایتھیریم رکھیں (دیکھیں "منتقلی کی تیاری: کچھ ایتھیریم بریج کرنا" اوپر)۔ +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
-یہ بھی نوٹ کریں کہ سب گراف کی منتقلی کے لیے سب گراف پر اسی اکاؤنٹ کے ساتھ سگنل کی غیر صفر مقدار کی ضرورت ہوتی ہے جس کے پاس سب گراف ہے۔ اگر آپ نے سب گراف پر اشارہ نہیں کیا ہے تو آپ کو تھوڑا سا کیوریشن شامل کرنا پڑے گا (ایک چھوٹی سی رقم جیسے ایک GRT شامل کرنا کافی ہوگا)۔ +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -ٹرانسفر ٹول کھولنے کے بعد، آپ L2 والیٹ ایڈریس کو "ریسیونگ والیٹ ایڈریس" فیلڈ میں داخل کرنے کے قابل ہو جائیں گے- ** یقینی بنائیں کہ آپ نے یہاں درست ایڈریس لکھا ہے**۔ ٹرانسفر سب گراف پر کلک کرنے سے آپ کو اپنے والیٹ پر ٹرانزیکشن کرنے کا اشارہ ملے گا (نوٹ کریں کہ L2 گیس کی ادائیگی کے لیے کچھ ایتھیریم ویلیو شامل ہے)؛ یہ منتقلی کا آغاز کرے گا اور آپ کے L1 سب گراف کو فرسودہ کر دے گا (پردے کے پیچھے کیا ہو رہا ہے اس کے بارے میں مزید تفصیلات کے لیے اوپر دیکھیں "سگنل کے ساتھ کیا ہوتا ہے، آپ کا L1 سب گراف اور کیوری لنکس" دیکھیں)۔ +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). 
-اگر آپ اس قدم پر عمل کرتے ہیں، تو **یقینی بنائیں کہ آپ 7 دنوں سے بھی کم وقت میں مرحلہ 3 مکمل کرنے تک آگے بڑھیں، ورنہ سب گراف اور آپ کا سگنل GRT ضائع ہو جائے گا۔** یہ اس وجہ سے ہے کہ L1-L2 پیغام رسانی Arbitrum پر کیسے کام کرتی ہے: پیغامات جو بریج کے ذریعے بھیجے گئے "دوبارہ کوشش کے قابل ٹکٹ" ہیں جن پر عمل درآمد 7 دنوں کے اندر ہونا ضروری ہے، اور اگر Arbitrum پر گیس کی قیمت میں اضافہ ہوتا ہے تو ابتدائی عمل درآمد کے لیے دوبارہ کوشش کی ضرورت پڑ سکتی ہے۔ +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## مرحلہ 2: سب گراف کے L2 تک پہنچنے کا انتظار کرنا +## Step 2: Waiting for the Subgraph to get to L2 -منتقلی شروع کرنے بعد، وہ پیغام جو آپ کا L1 سب گراف L2 کو بھیجتا ہے اسے Arbitrum بریج کے ذریعے پھیلانا چاہیے۔ اس میں تقریبآٓ 20 منٹ لگتے ہیں (بریج مین نیٹ بلاک کا انتظار کرتا ہے جس میں ٹرانزیکشن کو ممکنہ چین کی بحالی سے "محفوظ" رکھا جاتا ہے)۔ +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). 
انتظار کا وقت ختم ہونے کے بعد، Arbitrum L2 کنٹریکٹس پر منتقلی کو خودکار طریقے سے انجام دینے کی کوشش کرے گا۔ @@ -80,7 +80,7 @@ L2 پر منتقل کرنے کے بٹن پر کلک کرنے سے ٹرانسفر ## مرحلہ 3: منتقلی کی تصدیق کرنا -زیادہ تر معاملات میں، یہ مرحلہ خود بخود عمل میں آجائے گا کیونکہ مرحلہ 1 میں شامل L2 گیس اس ٹرانزیکشن کو انجام دینے کے لیے کافی ہونی چاہیے جو Arbitrum کنٹریکٹس پر سب گراف وصول کرتی ہے۔ تاہم، بعض صورتوں میں، یہ ممکن ہے کہ Arbitrum پر گیس کی قیمتوں میں اضافہ اس خود کار طریقے سے عمل کو ناکام بنادے۔ اس صورت میں، "ٹکٹ" جو آپ کے سب گراف کو L2 پر بھیجتا ہے زیر التواء رہے گا اور اسے 7 دنوں کے اندر دوبارہ کوشش کرنے کی ضرورت ہوگی۔ +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. اس صورت میں، آپ کو L2 والیٹ کنیکٹ کرنے کی ضرورت پڑے گی جس میں Arbitrum میں تھوڑا ایتھیریم موجود ہو، اپنے والیٹ نیٹ ورک کو Arbitrum میں سویچ کریں، اور "کنفرم ٹرانسفر" کو ٹرانزیکشن دہرانے کے لیے دبائیں. @@ -88,33 +88,33 @@ L2 پر منتقل کرنے کے بٹن پر کلک کرنے سے ٹرانسفر ## مرحلہ 4: L2 پر منتقلی ختم کریں -اس موقع پر، آپ کا سب گراف اور GRT آپ کے Arbitrum میں موصول ہو چکے ہیں، لیکن سب گراف ابھی تک شائع نہیں ہوا۔ آپ کو L2 والیٹ کا استعمال کرتے ہوئے منسلک کرنے کی ضرورت ہو گی جسے آپ نے وصول کرنے والے والیٹ کے طور پر منتخب کیا ہے، اپنے والیٹ نیٹ ورک کو Arbitrum میں سویچ کریں ، اور "سب گراف شائع کریں" پر کلک کریں۔ +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." 
-![سب گراف شائع کریں](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![سب گراف کے شائع ہونے کا انتظار کریں](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -یہ سب گراف کو شائع کرے گا تا کہ انڈیکسرز جو Arbitrum پر کام کر رہے ہیں اسے پیش کرنا شروع کر سکیں۔ یہ GRT کا استعمال کرتے ہوئے کیوریشن سگنل بھی دے گا جو L1 سے منتقل کیا گیا ہے. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## مرحلہ 5: کیوری لنک اپ ڈیٹ کریں -آپ کا سب گراف کامیابی کے ساتھ Arbitrum پر منتقل کر دیا گیا ہے! سب گراف کو کیوری کرنے کے لیے، نیا لنگ ہو گا: +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -نوٹ کریں کہ Arbitrum پر سب گراف ID آپ کے مین نیٹ پر موجود ایک سے مختلف ہوگی، لیکن آپ اسے ہمیشہ ایکسپلورر یا سٹوڈیو پر تلاش کر سکتے ہیں۔ جیسا کہ اوپر بتایا گیا ہے (دیکھیں "سگنل کے ساتھ کیا ہوتا ہے، آپ کے L1 سب گراف اور کیوری والے لنکس") پرانا L1 لنک تھوڑی دیر کے لیے سپورٹ کیا جائے گا، لیکن آپ کو اپنی کیوریز کو نئے ایڈریس پر تبدیل کر دینا چاہیے جیسے ہی سب گراف کی مطابقت پذیری L2 پر ہو جائے گی. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.
## اپنی کیوریشن کو کیسے Arbitrum (L2) پر منتقل کیا جائے -## یہ سمجھنا کہ L2 میں سب گراف کی منتقلی پر کیوریشن کا کیا ہوتا ہے +## Understanding what happens to curation on Subgraph transfers to L2 -جب سب گراف کا مالک سب گراف کو Arbitrum پر منتقل کرتا ہے، سب گراف کے تمام سگنلز اسی وقت GRT میں تبدیل ہو جاتے ہیں۔ یہ "آٹو مائیگریٹڈ" پر لاگو ہوتا ہے، یعنی وہ سگنل جو سب گراف ورزن یا تعیناتی کے لیے مخصوص نہیں ہے لیکن یہ سب گراف کے تازہ ترین ورزن کی پیروی کرتا ہے۔ +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -سگنل سے GRT میں یہ تبدیلی وہی ہے جیسا کہ اگر سب گراف کے مالک نے L1 میں سب گراف کو فرسودہ کیا تو کیا ہوگا۔ جب سب گراف کو فرسودہ یا منتقل کیا جاتا ہے، تو تمام کیوریشن سگنل بیک وقت "برن" ہو جاتے ہیں (کیوریشن بانڈنگ کریو کا استعمال کرتے ہوئے) اور نتیجے میں GRT سمارٹ کنٹریکٹ GNS کے پاس ہوتا ہے (یہ وہ کنٹریکٹ ہے جو سب گراف اپ گریڈ اور خودکار منتقلی سگنل کو ہینڈل کرتا ہے)۔ اس لیے اس سب گراف پر ہر کیوریٹر کا دعویٰ ہے کہ وہ سب گراف کے حصص کی مقدار کے متناسب GRT پر ہے۔ +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -سب گراف کے مالک کے مطابق ان GRT کا ایک حصہ سب گراف کے ساتھ L2 کو بھیجا جاتا ہے۔ +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. 
-اس مقام پر، کیویرٹڈ GRT مزید کیوری کی فیس جمع نہیں کرے گا، لہذا کیوریٹرز اپنا GRT واپس لینے یا اسے L2 پر اسی سب گراف میں منتقل کرنے کا انتخاب کر سکتے ہیں، جہاں اسے نئے کیویریشن سگنل کے لیے استعمال کیا جا سکتا یے۔ ایسا کرنے میں کوئی جلدی نہیں ہے کیونکہ GRT غیر معینہ مدت کے لیے مدد کی جا سکتی ہے اور ہر کسی کو اس کے حصص کے متناسب رقم ملتی ہے، چاہے وہ ایسا کرتے ہی کیوں نہ ہوں۔
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.

 ## اپنا L2 والیٹ منتخب کرنا

@@ -130,9 +130,9 @@ L2 پر منتقل کرنے کے بٹن پر کلک کرنے سے ٹرانسفر

 منتقلی شروع کرنے سے پہلے، آپ کو یہ فیصلہ کرنا ہوگا کہ L2 پر کیوریشن کا کون سا ایڈریس ہوگا (اوپر "اپنے L2 والیٹ کا انتخاب" دیکھیں)، اور یہ تجویز کی جاتی ہے کہ اگر آپ کو L2 پر پیغام کے نفاذ کی دوبارہ کوشش کرنے کی ضرورت ہو تو Arbitrum پر پہلے سے ہی بریج شدہ گیس کے لیے کچھ ایتھیریم رکھیں۔ آپ کچھ ایکسچینجز پر ایتھیریم خرید سکتے ہیں اور اسے براہ راست Arbitrum میں واپس لے سکتے ہیں، یا آپ ایتھیریم کو مین نیٹ والیٹ سے L2 پر بھیجنے کے لیے Arbitrum بریج کا استعمال کر سکتے ہیں: [bridge.arbitrum.io](http://bridge.arbitrum.io) - چونکہ Arbitrum پر گیس کی فیس بہت کم ہیں، آپ کو صرف تھوڑی سی رقم کی ضرورت ہوگی، جیسے۔ 0.01 ایتھیریم شاید کافی سے زیادہ ہو گا۔

-اگر جو سب گراف آپ کیویرٹ کر رہے ہیں L2 پر منتقل ہو گیا ہے، آپ ایکسپلورر پر ایک میسج دیکھیں گے جو بتا رہا ہو گا کہ آپ منتقل ہوئے سب گراف پر کیوریٹ کر رہے ہیں۔
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.
-سب گراف پیج پر دیکھتے ہوئے، آپ کیوریشن واپس لینے یا منتقل کرنے کا انتخاب کر سکتے ہیں۔ "Arbitrum پر سگنل منتقل کریں" پر کلک کرنے سے ٹرانسفر ٹول کھل جائے گا۔
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.

 ![ٹرانسفر سگنل](/img/transferSignalL2TransferTools.png)

@@ -162,4 +162,4 @@ L2 پر منتقل کرنے کے بٹن پر کلک کرنے سے ٹرانسفر

 ## L1 پر اپنی کیوریشن واپس لینا

-اگر آپ اپنے GRT کو L2 پر نہیں بھیجنا پسند کرتے ہیں، یا آپ GRT کو دستی طور پر بریج کرنا چاہتے ہیں، تو آپ L1 پر اپنا کیوریٹ شدہ GRT واپس لے سکتے ہیں۔ سب گراف کے پیج پر بینر پر، "سگنل واپس لیں" کا انتخاب کریں اور ٹرانزیکشن کی تصدیق کریں۔ GRT آپ کے کیوریٹر کے ایڈریس پر بھیج دیا جائے گا۔
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
diff --git a/website/src/pages/ur/archived/sunrise.mdx b/website/src/pages/ur/archived/sunrise.mdx
index b1ad2e6523a3..dc77506b82b6 100644
--- a/website/src/pages/ur/archived/sunrise.mdx
+++ b/website/src/pages/ur/archived/sunrise.mdx
@@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ

 ## What was the Sunrise of Decentralized Data?

-The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly.
+The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly.

-This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs.
+This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs.

 ### What happened to the hosted service?

-The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service.
+The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service.

-During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs.
+During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs.

 ### Was Subgraph Studio impacted by this upgrade?

 No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service.

-### Why were subgraphs published to Arbitrum, did it start indexing a different network?
+### Why were Subgraphs published to Arbitrum, did it start indexing a different network?

-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/)

 ## About the Upgrade Indexer

 > The upgrade Indexer is currently active.

-The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.

 ### What does the upgrade Indexer do?

-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published.
 - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.
+- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.

 ### ایج اور نوڈ اپ گریڈ انڈیکسر کیوں چلا رہا ہے؟

-Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs.
+Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs.

 ### What does the upgrade indexer mean for existing Indexers?
 Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first.

-However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain.
+However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain.

-The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network.
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network.

 ### ڈیلیگیٹرز کے لیے اس کا کیا مطلب ہے؟

-The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
+The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.

 ### Did the upgrade Indexer compete with existing Indexers for rewards?

-No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards.
+No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards.
-It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs.
+It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs.

-### How does this affect subgraph developers?
+### How does this affect Subgraph developers?

-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.

 ### How does the upgrade Indexer benefit data consumers?

@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp

 The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market.

-### When will the upgrade Indexer stop supporting a subgraph?
+### When will the upgrade Indexer stop supporting a Subgraph?

-The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
+The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.

-Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days.
+Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days.

-Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
+Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
diff --git a/website/src/pages/ur/global.json b/website/src/pages/ur/global.json
index 0f8266151ab8..3cba9a6b96c5 100644
--- a/website/src/pages/ur/global.json
+++ b/website/src/pages/ur/global.json
@@ -6,6 +6,7 @@
     "subgraphs": "سب گراف",
     "substreams": "سب سٹریمز",
     "sps": "Substreams-Powered Subgraphs",
+    "tokenApi": "Token API",
     "indexing": "Indexing",
     "resources": "Resources",
     "archived": "Archived"
@@ -24,9 +25,51 @@
     "linkToThisSection": "Link to this section"
   },
   "content": {
-    "note": "Note",
+    "callout": {
+      "note": "Note",
+      "tip": "Tip",
+      "important": "Important",
+      "warning": "Warning",
+      "caution": "Caution"
+    },
     "video": "Video"
   },
+  "openApi": {
+    "parameters": {
+      "pathParameters": "Path Parameters",
+      "queryParameters": "Query Parameters",
+      "headerParameters": "Header Parameters",
+      "cookieParameters": "Cookie Parameters",
+      "parameter": "Parameter",
+      "description": "تفصیل",
+      "value": "Value",
+      "required": "Required",
+      "deprecated": "Deprecated",
+      "defaultValue": "Default value",
+      "minimumValue": "Minimum value",
+      "maximumValue": "Maximum value",
+      "acceptedValues": "Accepted values",
+      "acceptedPattern": "Accepted pattern",
+      "format": "Format",
+      "serializationFormat": "Serialization format"
+    },
+    "request": {
+      "label": "Test this endpoint",
+      "noCredentialsRequired": "No credentials required",
+      "send": "Send Request"
+    },
+    "responses": {
+      "potentialResponses": "Potential Responses",
+      "status": "Status",
+      "description": "تفصیل",
+      "liveResponse": "Live Response",
+      "example": "مثال"
+    },
+    "errors": {
+      "invalidApi": "Could not retrieve API {0}.",
+      "invalidOperation": "Could not retrieve operation {0} in API {1}."
+    }
+  },
   "notFound": {
     "title": "Oops! This page was lost in space...",
     "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.",
diff --git a/website/src/pages/ur/index.json b/website/src/pages/ur/index.json
index f5f072c7a326..ba4e9527fa53 100644
--- a/website/src/pages/ur/index.json
+++ b/website/src/pages/ur/index.json
@@ -7,7 +7,7 @@
     "cta2": "Build your first subgraph"
   },
   "products": {
-    "title": "The Graph’s Products",
+    "title": "The Graph's Products",
     "description": "Choose a solution that fits your needs—interact with blockchain data your way.",
     "subgraphs": {
       "title": "سب گراف",
@@ -21,7 +21,7 @@
     },
     "sps": {
       "title": "Substreams-Powered Subgraphs",
-      "description": "Boost your subgraph’s efficiency and scalability by using Substreams.",
+      "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
       "cta": "Set up a Substreams-powered subgraph"
     },
     "graphNode": {
@@ -39,12 +39,12 @@
       "title": "تعاون یافتہ نیٹ ورکس",
       "details": "Network Details",
       "services": "Services",
-      "type": "Type",
+      "type": "قسم",
       "protocol": "Protocol",
       "identifier": "Identifier",
       "chainId": "Chain ID",
       "nativeCurrency": "Native Currency",
-      "docs": "Docs",
+      "docs": "دستاویزات",
       "shortName": "Short Name",
       "guides": "Guides",
       "search": "Search networks",
@@ -67,8 +67,8 @@
       "tableHeaders": {
         "name": "Name",
         "id": "ID",
-        "subgraphs": "Subgraphs",
-        "substreams": "Substreams",
+        "subgraphs": "سب گراف",
+        "substreams": "سب سٹریمز",
         "firehose": "Firehose",
         "tokenapi": "Token API"
       }
@@ -80,7 +80,7 @@
         "description": "Kickstart your journey into subgraph development."
       },
       "substreams": {
-        "title": "Substreams",
+        "title": "سب سٹریمز",
         "description": "Stream high-speed data for real-time indexing."
       },
       "timeseries": {
@@ -92,7 +92,7 @@
         "description": "Leverage features like custom data sources, event handlers, and topic filters."
       },
       "billing": {
-        "title": "Billing",
+        "title": "بلنگ",
         "description": "Optimize costs and manage billing efficiently."
       }
     },
@@ -156,15 +156,15 @@
       "watchOnYouTube": "Watch on YouTube",
       "theGraphExplained": {
         "title": "The Graph Explained In 1 Minute",
-        "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
+        "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
       },
       "whatIsDelegating": {
         "title": "What is Delegating?",
-        "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating."
+        "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph."
       },
       "howToIndexSolana": {
         "title": "How to Index Solana with a Substreams-powered Subgraph",
-        "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph."
+        "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases."
       }
     },
     "time": {
diff --git a/website/src/pages/ur/indexing/chain-integration-overview.mdx b/website/src/pages/ur/indexing/chain-integration-overview.mdx
index e348639e9efa..7812d88241da 100644
--- a/website/src/pages/ur/indexing/chain-integration-overview.mdx
+++ b/website/src/pages/ur/indexing/chain-integration-overview.mdx
@@ -36,7 +36,7 @@ title: چین انٹیگریشن کے عمل کا جائزہ

 ### 2. اگر مین نیٹ پر نیٹ ورک سپورٹ ہونے کے بعد فائر ہوز اور سب سٹریم سپورٹ آجائے تو کیا ہوگا؟

-یہ صرف سب سٹریمزسے چلنے والے سب گرافس پر انڈیکسنگ کے انعامات کے لیے پروٹوکول سپورٹ کو متاثر کرے گا۔ اس GIP میں اسٹیج 2 کے لیے بیان کردہ طریقہ کار کے بعد، نئے فائر ہوز کے نفاذ کو ٹیسٹ نیٹ پر جانچ کی ضرورت ہوگی۔ اسی طرح، یہ فرض کرتے ہوئے کہ نفاذ پرفارمنس اور قابل اعتماد ہے، [فیچر سپورٹ میٹرکس](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) پر ایک PR کی ضرورت ہوگی ( 'سب سٹریمز ڈیٹا سورسز' سب گراف فیچر)، نیز انڈیکسنگ انعامات کے لیے پروٹوکول سپورٹ کے لیے ایک نیا GIP۔ کوئی بھی PR اور GIP بنا سکتا ہے۔ فاؤنڈیشن کونسل کی منظوری میں مدد کرے گی.
+This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval.

 ### 3. How much time will the process of reaching full protocol support take?
diff --git a/website/src/pages/ur/indexing/new-chain-integration.mdx b/website/src/pages/ur/indexing/new-chain-integration.mdx
index fc630546433a..ff1b5b6aa293 100644
--- a/website/src/pages/ur/indexing/new-chain-integration.mdx
+++ b/website/src/pages/ur/indexing/new-chain-integration.mdx
@@ -2,7 +2,7 @@
 title: New Chain Integration
 ---

-Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
+Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:

 1. **EVM JSON-RPC**
 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms.

@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through

 ## EVM considerations - Difference between JSON-RPC & Firehose

-While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing.
+While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing.

-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes.
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.

-> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)

 ## گراف نوڈ کنفگریشن

-Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph.

 1. [گراف نوڈ کی نقل بنائیں](https://github.com/graphprotocol/graph-node)

@@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your

 ## Substreams-powered Subgraphs

-For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
diff --git a/website/src/pages/ur/indexing/overview.mdx b/website/src/pages/ur/indexing/overview.mdx
index 40f78c96c399..8258c63fa6f1 100644
--- a/website/src/pages/ur/indexing/overview.mdx
+++ b/website/src/pages/ur/indexing/overview.mdx
@@ -7,7 +7,7 @@ sidebarTitle: جائزہ

 پروٹوکول میں داؤ پر لگائی گئی GRT پگھلنے کی مدت سے مشروط ہے اور اگر انڈیکسرز بدنیتی پر مبنی ہوں اور ایپلیکیشنز کو غلط ڈیٹا پیش کرتے ہیں یا اگر وہ غلط طریقے سے انڈیکس کرتے ہیں تو اسے کم کیا جا سکتا ہے. انڈیکسرز نیٹ ورک میں حصہ ڈالنے کے لیے ڈیلیگیٹرز کی جانب سے دیے گئے سٹیک کے لیے بھی انعامات حاصل کرتے ہیں.
-انڈیکسرز سب گراف کے کیوریشن سگنل کی بنیاد پر انڈیکس کرنے کے لیے سب گرافس کا انتخاب کرتے ہیں, جہاں کیوریٹرز GRT کو سٹیک کرتے ہیں تاکہ یہ ظاہر کیا جا سکے کہ کون سے سب گرافس اعلیٰ معیار کے ہیں اور انہیں ترجیح دی جانی چاہیے. صارفین (مثلاً ایپلی کیشنز) ایسے عوامل کا تعین کر سکتے ہیں جن کے لیے انڈیکسرز اپنے سب گرافس کے لیے کیوریز پر کارروائی کرتے ہیں اور کیوری کی فیس کی قیمتوں کے لیے ترجیحات طے کرتے ہیں.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing.

 ## FAQ

@@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT.

 **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.

-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol-wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network.

 ### How are indexing rewards distributed?

-Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
+Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**

 Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack.

 ### What is a proof of indexing (POI)?

-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.
+POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block.

 ### When are indexing rewards distributed?
@@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap

 Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps:

-1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
+1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:

 ```graphql
 query indexerAllocations {
@@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that

 - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%.

-### How do Indexers know which subgraphs to index?
+### How do Indexers know which Subgraphs to index?

-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network:

-- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up.
+- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up.
-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand.

-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply.

-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards.

 ### What are the hardware requirements?

-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
+- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded.
 - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
-- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.

 | Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
 | --- | :-: | :-: | :-: | :-: | :-: |

@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making

 ## Infrastructure

-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.

-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.

-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### گراف نوڈ -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, and shares important decision-making information with clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing non-deterministically. diff --git a/website/src/pages/ur/indexing/supported-network-requirements.mdx b/website/src/pages/ur/indexing/supported-network-requirements.mdx index f4b5a7768f13..04aa24db9e48 100644 --- a/website/src/pages/ur/indexing/supported-network-requirements.mdx +++ b/website/src/pages/ur/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/ur/indexing/tap.mdx b/website/src/pages/ur/indexing/tap.mdx index 227fbfc0593f..b7347fb63141 100644 --- a/website/src/pages/ur/indexing/tap.mdx +++ b/website/src/pages/ur/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## جائزہ -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### تقاضے +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/ur/indexing/tooling/graph-node.mdx b/website/src/pages/ur/indexing/tooling/graph-node.mdx index 3e6d0c1e3d44..ca1535e89a01 100644 --- a/website/src/pages/ur/indexing/tooling/graph-node.mdx +++ b/website/src/pages/ur/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: گراف نوڈ --- -گراف نوڈ وہ جزو ہے جو سب گراف کو انڈیکس کرتا ہے، اور نتیجے میں ڈیٹا کو GraphQL API کے ذریعے کیوری کے لیے دستیاب کرتا ہے. اس طرح یہ انڈیکسر اسٹیک میں مرکزی حیثیت رکھتا ہے، اور ایک کامیاب انڈیکسر چلانے کے لیے گراف نوڈ کا درست آپریشن بہت ضروری ہے. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## گراف نوڈ -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL ڈیٹا بیس -گراف نوڈ کا مرکزی اسٹور، یہ وہ جگہ ہے جہاں سب گراف کا ڈیٹا ذخیرہ کیا جاتا ہے، ساتھ ہی سب گراف کے بارے میں میٹا ڈیٹا، اور سب گراف-اگنوسٹک نیٹ ورک ڈیٹا جیسے کہ بلاک کیشے، اور ایتھ_کال کیشے. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### نیٹ ورک کلائنٹس کسی نیٹ ورک کو انڈیکس کرنے کے لیے، گراف نوڈ کو EVM سے مطابقت رکھنے والے JSON-RPC API کے ذریعے نیٹ ورک کلائنٹ تک رسائی کی ضرورت ہے۔ یہ RPC کسی ایک کلائنٹ سے منسلک ہو سکتا ہے یا یہ زیادہ پیچیدہ سیٹ اپ ہو سکتا ہے جو متعدد پر بیلنس لوڈ کرتا ہے. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS نوڈس -سب گراف تعیناتی کا میٹا ڈیٹا IPFS نیٹ ورک پر محفوظ کیا جاتا ہے. گراف نوڈ بنیادی طور پر سب گراف کی تعیناتی کے دوران IPFS نوڈ تک رسائی حاصل کرتا ہے تاکہ سب گراف مینی فیسٹ اور تمام منسلک فائلوں کو حاصل کیا جا سکے. نیٹ ورک انڈیکسرز کو اپنے IPFص نوڈ کو ہوسٹ کرنے کی ضرورت نہیں ہے. نیٹ ورک کے لیے ایک IPFS نوڈ https://ipfs.network.thegraph.com پر ہوسٹ کیا گیا ہے. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus میٹرکس سرور @@ -79,8 +79,8 @@ A complete Kubernetes example configuration can be found in the [indexer reposit | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ A complete Kubernetes example configuration can be found in the [indexer reposit ## اعلی درجے کی گراف نوڈ کنفیگریشن -اس کے آسان ترین طور پر، گراف نوڈ کو گراف نوڈ کے ایک انسٹینس, واحد PostgreSQL ڈیٹا بیس، ایک IPFS نوڈ، اور نیٹ ورک کلائنٹس کے ساتھ آپریٹ کیا جا سکتا ہے جیسا کہ سب گراف کو انڈیکس کرنے کے لیے ضرورت ہوتی ہے. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### متعدد گراف نوڈس -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > نوٹ کریں کہ ایک سے زیادہ گراف نوڈس کو ایک ہی ڈیٹا بیس کو استعمال کرنے کے لیے کنفیگر کیا جا سکتا ہے، جسے خود کو شارڈنگ کے ذریعے افقی طور پر سکیل کیا جا سکتا ہے. #### تعیناتی کے قواعد -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. مثال کی تعیناتی کے اصول کی کنفگریشن: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ query = "" زیادہ تر استعمال کے معاملات میں، ایک واحد Postgres ڈیٹا بیس گراف نوڈ کی انسٹینس کو سپورٹ کرنے کے لیے کافی ہے. 
جب ایک گراف نوڈ کی انسٹینس ایک واحد postgres ڈیٹا بیس سے بڑھ جاتی ہے، تو یہ ممکن ہے کہ گراف نوڈ کے ڈیٹا کے ذخیرہ کو متعدد پوسٹگریس ڈیٹا بیس میں تقسیم کیا جا سکے. تمام ڈیٹا بیس مل کر گراف نوڈ انسٹینس کا اسٹور بناتے ہیں. ہر انفرادی ڈیٹا بیس کو شارڈ کہا جاتا ہے.

-Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed.

شارڈنگ مفید ہو جاتا ہے جب آپ کا موجودہ ڈیٹا بیس اس بوجھ کو برقرار نہیں رکھ سکتا جو گراف نوڈ اس پر ڈالتا ہے، اور جب ڈیٹا بیس کے سائز کو مزید بڑھانا ممکن نہ ہو.

-> عام طور پر یہ بہتر ہے کہ شارڈز کے ساتھ شروع کرنے سے پہلے، ایک ہی ڈیٹا بیس کو جتنا ہو سکے بڑا بنائیں. ایک استثناء وہ ہے جہاں کیوری ٹریفک کو سب گرافس کے درمیان بہت غیر مساوی طور پر تقسیم کیا جاتا ہے; ان حالات میں یہ بھاری طور پر مدد کر سکتا ہے اگر اعلی حجم کے سب گراف کو ایک شارڈ میں اور باقی سب کچھ دوسرے میں رکھا جائے کیونکہ اس سیٹ اپ سے یہ زیادہ امکان ہوتا ہے کہ زیادہ حجم والے سب گرافس کا ڈیٹا db-internal کیشے میں رہتا ہے اور ایسا نہیں ہوتا ہے کہ وہ کم حجم والے سب گرافس سے ڈیٹا کی جگہ لے لیں جس کی ضرورت نہیں ہے.
+> It is generally better to make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. کنکشن کنفیگر کرنے کے معاملے میں، postgresql.conf میں max_connections کے ساتھ شروع کریں 400 سیٹ کریں(یا شاید 200 بھی) اور store_connection_wait_time_ms اور store_connection_checkout_count Prometheus میٹرکس دیکھیں. قابل توجہ انتظار کے اوقات (5ms سے اوپر کی کوئی بھی چیز) اس بات کا اشارہ ہے کہ بہت کم کنکشن دستیاب ہیں; زیادہ انتظار کا وقت بھی ڈیٹا بیس کے بہت مصروف ہونے کی وجہ سے ہوگا (جیسے زیادہ CPU لوڈ). تاہم اگر ڈیٹا بیس بصورت دیگر مستحکم معلوم ہوتا ہے تو، زیادہ انتظار کے اوقات کنکشن کی تعداد بڑھانے کی ضرورت کی نشاندہی کرتے ہیں. کنفیگریشن میں، ہر گراف نوڈ انسٹینس کتنے کنکشن استعمال کر سکتا ہے ایک بالائی حد ہے، اور اگر گراف نوڈ کو ان کی ضرورت نہ ہو تو کنکشن کو کھلا نہیں رکھے گا. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### متعدد نیٹ ورکس کو سپورٹ کرنا -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - متعدد نیٹ ورکس - ایک سے زیادہ فراہم کنندگان فی نیٹ ورک (یہ فراہم کنندگان میں بوجھ کو تقسیم کرنے کی اجازت دے سکتا ہے، اور مکمل نوڈس کے ساتھ ساتھ آرکائیو نوڈس کی ترتیب کی بھی اجازت دے سکتا ہے، گراف نوڈ سستے فراہم کنندگان کو ترجیح دیتا ہے اگر کام کا بوجھ اجازت دیتا ہے). 
@@ -225,11 +225,11 @@ Graph Node supports a range of environment variables which can enable features,

### گراف نوڈ کا انتظام

-چلتے ہوئے گراف نوڈ (یا گراف نوڈس!) کو دیکھتے ہوئے، پھر چیلنج یہ ہے کہ ان نوڈس میں تعینات سب گراف کا انتظام کرنا. گراف نوڈ سب گرافس کو منظم کرنے میں مدد کے لیے ٹولز کی ایک رینج پیش کرتا ہے.
+Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs.

#### لاگنگ

-Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
+Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.

In addition, setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs).

@@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker

Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs`

-### سب گرافس کے ساتھ کام کرنا
+### Working with Subgraphs

#### انڈیکسنگ اسٹیٹس API

-پورٹ 8030/graphql پر بطور ڈیفالٹ دستیاب ہے، انڈیکسنگ اسٹیٹس API مختلف سب گرافس کے لیے انڈیکسنگ کی حیثیت کو جانچنے، انڈیکسنگ کے ثبوتوں کی جانچ، سب گراف کی خصوصیات کا معائنہ کرنے اور مزید بہت سے طریقوں کو ظاہر کرتا ہے.
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ - مناسب ہینڈلرز کے ساتھ ایوینٹس پر کارروائی کرنا (اس میں سٹیٹ کے لیے چین کو کال کرنا، اور اسٹور سے ڈیٹا حاصل کرنا شامل ہو سکتا ہے) - نتیجے کے ڈیٹا کو اسٹور پر لکھنا -یہ مراحل پائپ لائنڈ ہیں (یعنی انہیں متوازی طور پر انجام دیا جا سکتا ہے)، لیکن وہ ایک دوسرے پر منحصر ہیں. جہاں سب گراف انڈیکس میں سست ہیں، بنیادی وجہ مخصوص سب گراف پر منحصر ہوگی. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. انڈیکسنگ میں سستی کی عام وجوہات: @@ -276,24 +276,24 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ - فراہم کنندہ خود چین ہیڈ کے پیچھے پڑا ہے - فراہم کنندہ سے چین ہیڈ پر نئی رسیدیں لانے میں سست روی -سب گراف انڈیکسنگ میٹرکس انڈیکسنگ کی سستی کی بنیادی وجہ کی تشخیص میں مدد کر سکتی ہے. کچھ معاملات میں، مسئلہ خود سب گراف کے ساتھ ہوتا ہے، لیکن دوسروں میں، بہتر نیٹ ورک فراہم کرنے والے، ڈیٹا بیس کے تنازعہ میں کمی اور دیگر ترتیب میں بہتری انڈیکسنگ کی کارکردگی کو نمایاں طور پر بہتر بنا سکتی ہے. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
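As a hedged sketch of using the status API to diagnose slowness (the host and port assume a default local `graph-node`, and the fields follow the schema linked above), an indexing health check could look like:

```shell
# Sketch: check indexing health via the status API (port 8030 by default).
# Assumes a local graph-node; adjust the host for your deployment.
STATUS_QUERY='{ indexingStatuses { subgraph health synced fatalError { message } chains { network chainHeadBlock { number } latestBlock { number } } } }'

# Against a live node this would be:
# curl -s http://localhost:8030/graphql \
#   -H 'Content-Type: application/json' \
#   -d "{\"query\": \"$STATUS_QUERY\"}"
printf '%s\n' "$STATUS_QUERY"
```

Comparing `latestBlock` with `chainHeadBlock` in the response shows how far behind the chain head a given deployment is.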
-#### ناکام سب گراف
+#### Failed Subgraphs

-انڈیکسنگ کے دوران سب گرافس ناکام ہو سکتے ہیں، اگر وہ غیر متوقع ڈیٹا کا سامنا کرتے ہیں، کچھ جزو توقع کے مطابق کام نہیں کر رہا ہے، یا اگر ایونٹ ہینڈلرز یا کنفیگریشن میں کچھ بگ ہے۔ ناکامی کی دو عمومی قسمیں ہیں:
+During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure:

- تعییناتی ناکامیاں: یہ وہ ناکامیاں ہیں جو دوبارہ کوششوں سے حل نہیں ہوں گی
- غیر مقررہ ناکامیاں: یہ فراہم کنندہ کے ساتھ مسائل، یا کچھ غیر متوقع گراف نوڈ کی خرابی کی وجہ سے ہوسکتی ہیں. جب ایک غیر مقررہ ناکامی واقع ہوتی ہے تو، گراف نوڈ ناکام ہونے والے ہینڈلرز کو دوبارہ کوشش کرے گا، وقت کے ساتھ پیچھے ہٹتا ہے.

-بعض صورتوں میں ایک ناکامی کو انڈیکسر کے ذریعے حل کیا جا سکتا ہے (مثال کے طور پر اگر غلطی صحیح قسم کا فراہم کنندہ نہ ہونے کا نتیجہ ہے، مطلوبہ فراہم کنندہ کو شامل کرنے سے انڈیکسنگ جاری رہے گی). تاہم دوسری صورتوں میں، سب گراف کوڈ میں تبدیلی کی ضرورت ہے.
+In some cases, a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required.

-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
+> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### کیشے -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. اگر کسی بلاک کیشے کی عدم مطابقت کا شبہ ہے، جیسے کہ tx رسید غائب ہونے کا ایوینٹ: @@ -304,7 +304,7 @@ However, in some instances, if an Ethereum node has provided incorrect data for #### مسائل اور غلطیوں کو کیوری کرنا -ایک بار ایک سب گراف کو انڈیکس کرنے کے بعد، انڈیکسرز سب گراف کے وقف کردہ کیوری کے اختتامی نقطہ کے ذریعے کیوریز پیش کرنے کی توقع کر سکتے ہیں. 
اگر انڈیکسر کافی تعداد میں کیوریز کے حجم کو پیش کرنے کی امید کر رہا ہے تو، ایک وقف شدہ کیوری نوڈ کی تجویز کی جاتی ہے، اور بہت زیادہ کیوریز کی تعداد کی صورت میں، انڈیکسر نقل شارڈز کو ترتیب دینا چاہیں گے تاکہ کیوریز انڈیکسنگ کے عمل کو متاثر نہ کریں. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. تاہم، ایک وقف شدہ کیوری نوڈ اور نقل کے ساتھ بھی، بعض کیوریز کو عمل میں لانے میں کافی وقت لگ سکتا ہے، اور بعض صورتوں میں میموری کے استعمال میں اضافہ ہوتا ہے اور دوسرے صارفین کے لیے کیوری کے وقت پر منفی اثر پڑتا ہے. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### کیوریز کا تجزیہ کرنا -مشکل کیوریز اکثر دو طریقوں میں سے ایک میں سامنے آتی ہیں. کچھ معاملات میں، صارفین خود رپورٹ کرتے ہیں کہ دی گئی کیوری آہستہ ہے. اس صورت میں چیلنج آہستگی کی وجہ کی تشخیص کرنا ہے ء چاہے یہ عام مسئلہ ہو، یا اس سب گراف یا کیوری کے لیے مخصوص ہو. اور پھر اگر ممکن ہو تو یقیناً اسے حل کرنا. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. دوسری صورتوں میں، مسئلہ ایک کیوری نوڈ پر زیادہ میموری کا استعمال ہو سکتا ہے، ایسی صورت میں چیلنج سب سے پہلے اس کیوری کی نشاندہی کرنا ہے جس کی وجہ سے مسئلہ ہے. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.

-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.

-#### سب گراف کو ختم کرنا
+#### Removing Subgraphs

> یہ نئی فعالیت ہے، جو گراف نوڈ 0.29.x میں دستیاب ہوگی

-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
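The three identifier forms accepted by `graphman drop` can be sketched in shell; `classify_deployment` is a hypothetical helper written here only for illustration, and the `graphman` invocation itself is left commented out:

```shell
# Sketch: the three identifier forms `graphman drop` accepts.
# `classify_deployment` is a hypothetical helper, not part of graphman.
classify_deployment() {
  case "$1" in
    Qm*)       echo "ipfs-hash" ;;     # e.g. Qmabc... deployment hash
    sgd[0-9]*) echo "db-namespace" ;;  # e.g. sgd123
    */*)       echo "subgraph-name" ;; # e.g. my-org/my-subgraph
    *)         echo "unknown" ;;
  esac
}

# A removal would then look like (names here are placeholders):
# graphman --config config.toml drop my-org/my-subgraph
classify_deployment "sgd123"
```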
diff --git a/website/src/pages/ur/indexing/tooling/graphcast.mdx b/website/src/pages/ur/indexing/tooling/graphcast.mdx index 4a7b5e2c4cfd..280366873bb3 100644 --- a/website/src/pages/ur/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ur/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ title: گراف کاسٹ گراف کاسٹ SDK (سافٹ ویئر ڈویلپمنٹ کٹ) ڈویلپرز کو ریڈیو بنانے کی اجازت دیتا ہے، جو گپ شپ سے چلنے والی ایپلیکیشنز ہیں جنہیں انڈیکسرز ایک مقررہ مقصد کی تکمیل کے لیے چلا سکتے ہیں۔ ہم مندرجہ ذیل استعمال کے معاملات کے لیے چند ریڈیوز بنانے کا ارادہ رکھتے ہیں (یا دیگر ڈویلپرز/ٹیموں کو مدد فراہم کرتے ہیں جو ریڈیو بنانا چاہتے ہیں): -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- دوسرے انڈیکسرز سے وارپ سنکنگ سب گرافس، سب اسٹریمز، اور فائر ہوز ڈیٹا کے لیے نیلامی اور کوآرڈینیشن کا انعقاد. -- فعال کیوری کے تجزیات پر خود رپورٹنگ، بشمول سب گراف کی درخواست والیوم، فیس والیوم وغیرہ. -- انڈیکسنگ کے تجزیات پر خود رپورٹنگ، بشمول سب گراف انڈیکسنگ کا وقت، ہینڈلر گیس کے اخراجات، انڈیکسنگ کی غلطیوں کا سامنا کرنا وغیرہ. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - اسٹیک کی معلومات پر خود رپورٹنگ بشمول گراف نوڈ ورژن، Postgres ورژن، ایتھیریم کلائنٹ ورژن، وغیرہ. 
### مزید جانیے diff --git a/website/src/pages/ur/resources/benefits.mdx b/website/src/pages/ur/resources/benefits.mdx index 341a8c0a4c31..35137ff0d524 100644 --- a/website/src/pages/ur/resources/benefits.mdx +++ b/website/src/pages/ur/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -سب گراف پر کیوریٹنگ سگنل ایک اختیاری ایک بار، خالص صفر لاگت ہے (مثال کے طور پر، $1k سگنل کو سب گراف پر کیوریٹ کیا جا سکتا ہے، اور بعد میں واپس لیا جا سکتا ہے—اس عمل میں منافع کمانے کی صلاحیت کے ساتھ). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ur/resources/glossary.mdx b/website/src/pages/ur/resources/glossary.mdx index bece9e2db4ea..8b3d3ba9814c 100644 --- a/website/src/pages/ur/resources/glossary.mdx +++ b/website/src/pages/ur/resources/glossary.mdx @@ -4,51 +4,51 @@ title: لغت - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: لغت - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/ur/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ur/resources/migration-guides/assemblyscript-migration-guide.mdx index 8a354cb1c231..ccfbb0647039 100644 --- a/website/src/pages/ur/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ur/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: اسمبلی سکرپٹ مائیگریشن گائیڈ --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -وہ سب گراف ڈویلپرز کو اسمبلی لینگوج اور سٹینڈرڈ لائبریری کی نئ خصوصیات استعمال کرنے پر فعال کرے گا. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## خصوصیات @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## اپ گریڈ کیسے کریں؟ -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -اگر آپ کو یقین نہیں ہے کہ کون سا انتخاب کرنا ہے، تو ہم ہمیشہ محفوظ ورژن استعمال کرنے کی تجویز کرتے ہیں۔ اگر ویلیو موجود نہیں ہے تو آپ اپنے سب گراف ہینڈلر میں واپسی کے ساتھ صرف ابتدائی اف سٹیٹمینٹ کرنا چاہتے ہیں. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in your Subgraph handler. ### متغیر شیڈونگ @@ -132,7 +132,7 @@ in assembly/index.ts(4,3) ### کالعدم موازنہ -اپنے سب گراف پر اپ گریڈ کرنے سے، بعض اوقات آپ کو اس طرح کی غلطیاں مل سکتی ہیں: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -ہم نے اس کے لیے اسمبلی اسکرپٹ کمپائلر پر ایک ایشو کھولا ہے، لیکن ابھی کے لیے اگر آپ اپنی سب گراف میپنگز میں اس قسم کی کارروائیاں کرتے ہیں، تو آپ کو اس سے پہلے ایک کالعدم چیک کرنے کے لیے انہیں تبدیل کرنا چاہیے. +We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check before it.
```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -یہ کمپائل کرے گا لیکن رن ٹائم پر ٹوٹ جائے گا، ایسا اس لیے ہوتا ہے کیونکہ ویلیو شروع نہیں کی گئی ہے، اس لیے یقینی بنائیں کہ آپ کے سب گراف نے اپنی ویلیوس کی ابتدا کی ہے، اس طرح: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/ur/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ur/resources/migration-guides/graphql-validations-migration-guide.mdx index fba78a067915..5d31717e36bb 100644 --- a/website/src/pages/ur/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ur/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: GraphQL کی توثیق کی منتقلی گائیڈ +title: GraphQL Validations Migration Guide --- جلد ہی `گراف نوڈ` [GraphQL توثیق کی تفصیلات](https://spec.graphql.org/June2018/#sec-Validation) کی 100% کوریج کو سپورٹ کرے گا. @@ -20,7 +20,7 @@ GraphQL ویلیڈیشن سپورٹ آنے والی نئی خصوصیات اور آپ اپنے GraphQL آپریشنز میں کسی بھی مسئلے کو تلاش کرنے اور انہیں ٹھیک کرنے کے لیے CLI مائیگریشن ٹول استعمال کر سکتے ہیں۔ متبادل طور پر آپ `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` اینڈ پوائنٹ استعمال کرنے کے لیے اپنے GraphQL کلائنٹ کے اینڈ پوائنٹ کو اپ ڈیٹ کر سکتے ہیں۔ اس اختتامی نقطہ کے خلاف اپنے کیوریز کی جانچ کرنے سے آپ کو اپنے کیوریز میں مسائل تلاش کرنے میں مدد ملے گی. -> اگر آپ [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) یا [ GraphQL کوڈ جنریٹر] (//https://the-guild.dev) استعمال کر رہے ہیں تو تمام سب گراف کو منتقل کرنے کی ضرورت نہیں ہوگی۔ /graphql/codegen، وہ پہلے ہی اس بات کو یقینی بناتے ہیں کہ آپ کے کیوریز درست ہیں.
+> Not all Subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## مائیگریشن CLI ٹول diff --git a/website/src/pages/ur/resources/roles/curating.mdx b/website/src/pages/ur/resources/roles/curating.mdx index 9e972e55ab7f..c5138d8482d2 100644 --- a/website/src/pages/ur/resources/roles/curating.mdx +++ b/website/src/pages/ur/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: کیورٹنگ --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. 
When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). 
Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. 
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## سگنل کرنے کا طریقہ -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -ایک کیوریٹر مخصوص سب گراف ورژن پر سگنل دینے کا انتخاب کر سکتا ہے، یا وہ اپنے سگنل کو خود بخود اس سب گراف کی جدید ترین پروڈکشن بلڈ میں منتقل کرنے کا انتخاب کر سکتا ہے۔ دونوں درست حکمت عملی ہیں اور ان کے اپنے فوائد اور نقصانات کے ساتھ آتے ہیں. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. 
Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. آپ کے سگنل کو خود بخود جدید ترین پروڈکشن کی تعمیر میں منتقل کرنا اس بات کو یقینی بنانے کے لیے قابل قدر ہو سکتا ہے کہ آپ کیوری کی فیس جمع کرتے رہیں۔ جب بھی آپ کیوریشن کرتے ہیں، 1% کیوریشن ٹیکس لاگو ہوتا ہے۔ آپ ہر دفعہ منتقلی پر 0.5% کا کیوریشن ٹیکس ادا کریں گے. سب گراف ڈویلپرز کو نئے ورژنز کثرت سے شائع کرنے کی حوصلہ شکنی کی جاتی ہے - انہیں تمام خود کار طریقے سے منتقل کیوریشن شیئرز پر 0.5% کیوریشن ٹیکس ادا کرنا پڑتا ہے. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## خطرات 1. گراف میں کیوری کی مارکیٹ فطری طور پر جوان ہے اور اس بات کا خطرہ ہے کہ آپ کا %APY مارکیٹ کی نئی حرکیات کی وجہ سے آپ کی توقع سے کم ہو سکتا ہے. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. ایک سب گراف ایک بگ کی وجہ سے ناکام ہو سکتا ہے. ایک ناکام سب گراف کیوری کی فیس جمع نہیں کرتا ہے. اس کے نتیجے میں،آپ کو انتظار کرنا پڑے گاجب تک کہ ڈویلپر اس بگ کو کو ٹھیک نہیں کرتا اور نیا ورژن تعینات کرتا ہے. - - اگر آپ نےسب گراف کے نۓ ورژن کو سبسکرائب کیا ہے. آپ کے حصص خود بخود اس نئے ورژن میں منتقل ہو جائیں گے۔ اس پر 0.5% کیوریشن ٹیکس لگے گا. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. 
Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## کیوریشن کے اکثر پوچھے گئے سوالات ### 1. کیوریٹرز کتنی % کیوری فیس کماتے ہیں؟ -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. میں یہ کیسے طے کروں کہ کون سے سب گرافس اعلیٰ معیار کے ہیں؟ +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. 
It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. سب گراف کو اپ ڈیٹ کرنے کی کیا قیمت ہے؟ +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. 
When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. میں اپنے سب گراف کو کتنی بار اپ گریڈ کر سکتا ہوں؟ +### 4. How often can I update my Subgraph? -یہ تجویز کی جاتی ہے کہ آپ اپنے سب گراف کو کثرت سے اپ گریڈ نہ کریں۔ مزید تفصیلات کو لیے اوپر والا سوال دیکھیں۔ +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. کیا میں اپنے کیوریشن شیئرز بیچ سکتا ہوں؟ diff --git a/website/src/pages/ur/resources/subgraph-studio-faq.mdx b/website/src/pages/ur/resources/subgraph-studio-faq.mdx index 3edb3d799ad3..d7ebed49c292 100644 --- a/website/src/pages/ur/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ur/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: سب گراف سٹوڈیو کے اکثر پوچھے گئے سوالات ## 1. سب گراف سٹوڈیو کیا ہے؟ -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. میں ایک API کلید کیسے بنا سکتا ہوں؟ @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th ایک API کلید بنانے کے بعد، سیکورٹی سیکشن میں، آپ ان ڈومینز کی وضاحت کر سکتے ہیں جو ایک مخصوص API کلید سے استفسار کر سکتے ہیں. -## 5. کیا میں اپنا سب گراف کسی دوسرے مالک کو منتقل کر سکتا ہوں؟ +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. 
You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -نوٹ کریں کہ ایک بار منتقل ہونے کے بعد آپ سٹوڈیو میں سب گراف کو دیکھنے یا اس میں ترمیم کرنے کے قابل نہیں رہیں گے. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. اگر میں اس سب گراف کا ڈویلپر نہیں ہوں جسے میں استعمال کرنا چاہتا ہوں تو میں سب گراف کے لیے کیوری کے URLs کیسے تلاش کروں؟ +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -یاد رکھیں کہ آپ ایک API کلید بنا سکتے ہیں اور نیٹ ورک پر شائع ہونے والے کسی بھی سب گراف سے کیوری کر سکتے ہیں، چاہے آپ خود ایک سب گراف بناتے ہوں۔ نئی API کلید کے ذریعے یہ کیوریز، نیٹ ورک پر کسی دوسرے کی طرح ادائیگی کے سوالات ہیں. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key, are paid queries as any other on the network. 
diff --git a/website/src/pages/ur/resources/tokenomics.mdx b/website/src/pages/ur/resources/tokenomics.mdx index 269dfc583951..92304bab81c9 100644 --- a/website/src/pages/ur/resources/tokenomics.mdx +++ b/website/src/pages/ur/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## جائزہ -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. کیوریٹرز - انڈیکسرز کے لیے بہترین سب گراف تلاش کریں +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. انڈیکسرز - بلاکچین ڈیٹا کی ریڑھ کی ہڈی @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. 
This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### سب گراف بنائیں +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### موجودہ سب گراف کو کیوری کریں +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. 
These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. 
This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/ur/sps/introduction.mdx b/website/src/pages/ur/sps/introduction.mdx index 9cca6587e591..b98518e49e1d 100644 --- a/website/src/pages/ur/sps/introduction.mdx +++ b/website/src/pages/ur/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: تعارف --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## جائزہ -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### اضافی وسائل @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ur/sps/sps-faq.mdx b/website/src/pages/ur/sps/sps-faq.mdx index e1395adba688..292390a34142 100644 --- a/website/src/pages/ur/sps/sps-faq.mdx +++ b/website/src/pages/ur/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## سب سٹریمز سے چلنے والے سب گرافس کیا ہیں؟ +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## سب سٹریمز سے چلنے والے سب گرافس سب گراف سے کیسے مختلف ہیں؟ +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
-## سب سٹریمز سے چلنے والے سب گرافس استعمال کرنے کے کیا فوائد ہیں؟ +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## سب سٹریمز کے فوائد کہاں ہیں؟ @@ -35,7 +35,7 @@ Substreams-powered subgraphs combine all the benefits of Substreams with the que - اعلی کارکردگی کی انڈیکسنگ: متوازی کارروائیوں کے بڑے پیمانے پر کلسٹرز کے ذریعے تیز تر انڈیکسنگ کے آرڈرز (سوچیں BigQuery). -- کہیں بھی سینک: اپنے ڈیٹا کو جہاں چاہیں سینک: PostgreSQL، MongoDB، Kafka، سب گرافس، فلیٹ فائلز، Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - قابل پروگرام: اسے اپنی مرضی کے مطابق بنانے کے لیے کوڈ کریں، دو ٹرانسفارم ٹائم ایگریگیشنز، اور متعدد حواس کے لیے اپنے آؤٹ پٹ کو ماڈل کریں. 
@@ -63,17 +63,17 @@ Firehose استعمال کرنے کے بہت سے فوائد ہیں، بشمول - فلیٹ فائلوں کا فائدہ اٹھاتا ہے: بلاکچین ڈیٹا کو فلیٹ فائلوں میں نکالا جاتا ہے، جو دستیاب سب سے سستا اور بہترین کمپیوٹنگ وسیلہ ہے. -## ڈویلپرز سب سٹریمز سے چلنے والے سب گرافس اور سب سٹریمز کے بارے میں مزید معلومات کہاں تک رسائی حاصل کرسکتے ہیں؟ +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## سب سٹریمز میں Rust ماڈیولز کا کیا کردار ہے؟ -زنگ ماڈیول سب گراف میں اسمبلی اسکرپٹ میپرز کے مساوی ہیں۔ وہ اسی طرح WASM پر مرتب کیے گئے ہیں، لیکن پروگرامنگ ماڈل متوازی عمل درآمد کی اجازت دیتا ہے۔ وہ اس قسم کی تبدیلیوں اور مجموعوں کی وضاحت کرتے ہیں جسے آپ خام بلاکچین ڈیٹا پر لاگو کرنا چاہتے ہیں. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst سب سٹریمز کا استعمال کرتے وقت، کمپوزیشن ٹرانسفارمیشن لیئر پر ہوتی ہے جو کیشڈ ماڈیولز کو دوبارہ استعمال کرنے کے قابل بناتی ہے. 
-مثال کے طور پر، ایلس DEX پرائس ماڈیول بنا سکتی ہے، باب اپنی دلچسپی کے کچھ ٹوکنز کے لیے حجم ایگریگیٹر بنانے کے لیے اسے استعمال کر سکتا ہے، اور لیزا قیمت اوریکل بنانے کے لیے چار انفرادی DEX قیمت ماڈیول کو جوڑ سکتی ہے۔ ایک واحد سب سٹریمز کی درخواست ان تمام افراد کے ماڈیولز کو پیک کرے گی، ان کو آپس میں جوڑ دے گی، تاکہ ڈیٹا کا بہت زیادہ بہتر سلسلہ پیش کیا جا سکے۔ اس سلسلے کو پھر سب گراف کو آباد کرنے کے لیے استعمال کیا جا سکتا ہے، اور صارفین اس سے کیوریز کر سکتے ہیں. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## آپ سب سٹریمز سے چلنے والے سب گراف کو کیسے بنا اور تعینات کر سکتے ہیں؟ After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## مجھے سب سٹریمز اور سب سٹریمز سے چلنے والے سب گرافس کی مثالیں کہاں مل سکتی ہیں؟ +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -آپ سب سٹریمز اور سب سٹریم سے چلنے والے سب گرافس کی مثالیں تلاش کرنے کے لیے [یہ گٹ ہب ریپو](https://github.com/pinax-network/awesome-substreams) ملاحظہ کر سکتے ہیں. +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## گراف نیٹ ورک کے لیے سب سٹریمز اور سب سٹریمز سے چلنے والے سب گرافس کا کیا مطلب ہے؟ +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? انضمام بہت سے فوائد کا وعدہ کرتا ہے، بشمول انتہائی اعلی کارکردگی کی انڈیکسنگ اور کمیونٹی ماڈیولز کا فائدہ اٹھا کر اور ان پر تعمیر کرنے کے ذریعے زیادہ کمپوز ایبلٹی.
diff --git a/website/src/pages/ur/sps/triggers.mdx b/website/src/pages/ur/sps/triggers.mdx index 5eab1a39f4a6..e1149c68812f 100644 --- a/website/src/pages/ur/sps/triggers.mdx +++ b/website/src/pages/ur/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use of GraphQL. ## جائزہ -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object 2.
Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### اضافی وسائل diff --git a/website/src/pages/ur/sps/tutorial.mdx b/website/src/pages/ur/sps/tutorial.mdx index 5b7f7c7ae742..841654e04782 100644 --- a/website/src/pages/ur/sps/tutorial.mdx +++ b/website/src/pages/ur/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## شروع کریں @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. 
Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial diff --git a/website/src/pages/ur/subgraphs/_meta-titles.json b/website/src/pages/ur/subgraphs/_meta-titles.json index 3fd405eed29a..6d4a697a0e94 100644 --- a/website/src/pages/ur/subgraphs/_meta-titles.json +++ b/website/src/pages/ur/subgraphs/_meta-titles.json @@ -2,5 +2,5 @@ "querying": "Querying", "developing": "Developing", "guides": "How-to Guides", - "best-practices": "Best Practices" + "best-practices": "بہترین طریقے" } diff --git a/website/src/pages/ur/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ur/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/ur/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ur/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. 
By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph's indexing. +This is functional; however, it is not ideal as it slows down our Subgraph's indexing.
## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. 
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ur/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ur/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/ur/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ur/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
diff --git a/website/src/pages/ur/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ur/subgraphs/best-practices/grafting-hotfix.mdx index b90ae82e0fa7..89b675d3acab 100644 --- a/website/src/pages/ur/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ur/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### جائزہ -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## اضافی وسائل - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ur/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ur/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/ur/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ur/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
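To make the concatenation advice above concrete, here is a stand-alone TypeScript sketch of what `concatI32()`-style ID construction produces. It models `Bytes` as `Uint8Array`; the byte order of the appended integer is an illustrative choice, not necessarily graph-ts's exact layout:

```typescript
// Sketch of Bytes-style ID construction (models graph-ts's
// event.transaction.hash.concatI32(event.logIndex.toI32())).
function concatI32(base: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(base.length + 4);
  out.set(base, 0);
  for (let i = 0; i < 4; i++) {
    // Append the 32-bit integer byte by byte (big-endian here, illustratively).
    out[base.length + i] = (value >>> (8 * (3 - i))) & 0xff;
  }
  return out;
}

// 32-byte stand-in for event.transaction.hash
const txHash = new Uint8Array(32).fill(0xab);
// 7 stands in for event.logIndex
const id = concatI32(txHash, 7);

// A fixed-width 36-byte value, versus the 60+ character string produced by
// `hash.toHex() + "-" + logIndex.toString()`.
console.log(id.length); // 36
```

The fixed-width binary ID is smaller and cheaper for the database to store and compare than a variable-length hex-and-dash string, which is where the indexing and query gains described above come from.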
diff --git a/website/src/pages/ur/subgraphs/best-practices/pruning.mdx b/website/src/pages/ur/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/ur/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ur/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
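Expressed as manifest fragments, the three options above look like this (a sketch; the block count is an illustrative value):

```yaml
# Recommended default: the Indexer retains the minimum necessary history
indexerHints:
  prune: auto

# Custom window: retain a fixed number of historical blocks (illustrative value)
indexerHints:
  prune: 100000

# Full history: required if Time Travel Queries are desired
indexerHints:
  prune: never
```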
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ur/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ur/subgraphs/best-practices/timeseries.mdx index b8a181c76b7a..a0022b5be7ed 100644 --- a/website/src/pages/ur/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ur/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## جائزہ @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ A timeseries entity represents raw data points collected over time. It is define type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ An aggregation entity computes aggregated values from a timeseries source. It is type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. 
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ur/subgraphs/billing.mdx b/website/src/pages/ur/subgraphs/billing.mdx index f7f5c848204d..ad0ad942cdcd 100644 --- a/website/src/pages/ur/subgraphs/billing.mdx +++ b/website/src/pages/ur/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: بلنگ ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/ur/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ur/subgraphs/developing/creating/advanced.mdx index 6d3c40d1e663..2870853839df 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/ur/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## جائزہ -Add and implement advanced subgraph features to enhanced your subgraph's built. 
+Add and implement advanced Subgraph features to enhance your Subgraph's build. -Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more.
-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -پہلے سے مطابقت پذیر سب گرافس پر انڈیکسنگ کی غلطیاں، بذریعہ ڈیفالٹ، سب گراف کے ناکام ہونے اور مطابقت پذیری کو روکنے کا سبب بنیں گی. سب گراف کو متبادل طور پر غلطیوں کی موجودگی میں مطابقت پذیری جاری رکھنے کے لیے ترتیب دیا جا سکتا ہے، ہینڈلر کی طرف سے کی گئی تبدیلیوں کو نظر انداز کر کے جس سے خرابی پیدا ہوئی. اس سے سب گراف مصنفین کو اپنے سب گراف کو درست کرنے کا وقت ملتا ہے جب کہ تازہ ترین بلاک کے خلاف کیوریز پیش کی جاتی رہتی ہیں، حالانکہ اس خرابی کی وجہ سے نتائج متضاد ہو سکتے ہیں. نوٹ کریں کہ کچھ غلطیاں اب بھی ہمیشہ مہلک ہوتی ہیں. غیر مہلک ہونے کے لیے، خرابی کو تعییناتی معلوم ہونا چاہیے. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -غیر مہلک غلطیوں کو فعال کرنے کے لیے سب گراف مینی فیسٹ پر درج ذیل خصوصیت کا فلیگ ترتیب دینے کی ضرورت ہے: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -فائل ڈیٹا کے ذرائع ایک مضبوط، قابل توسیع طریقے سے انڈیکسنگ کے دوران آف چین ڈیٹا تک رسائی کے لیے ایک نئی سب گراف کی فعالیت ہے۔ فائل ڈیٹا کے ذرائع IPFS اور Arweave سے فائلیں لانے میں معاونت کرتے ہیں. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. 
File data sources support fetching files from IPFS and from Arweave. > یہ آف چین ڈیٹا کی تعییناتی انڈیکسنگ کے ساتھ ساتھ صوابدیدی HTTP سے حاصل کردہ ڈیٹا کے ممکنہ تعارف کی بنیاد بھی رکھتا ہے. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ export function handleTransfer(event: TransferEvent): void { This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file مبارک ہو، آپ فائل ڈیٹا سورسز استعمال کر رہے ہیں! -#### آپ کے سب گراف کو تعینات کرنا +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. 
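To illustrate what a file handler like `handleMetadata` does once the file arrives, here is a plain-TypeScript model of the decode step. In graph-ts you would use `json.fromBytes()` on the file `Bytes` rather than `TextDecoder`/`JSON.parse`, and load the CID via `dataSource.stringParam()`; the CIDs and field values below are hypothetical:

```typescript
// Plain-TypeScript model of a file data source handler's decode step.
interface TokenMetadata {
  id: string; // the CID, used as the lookup from the parent Token entity
  name: string;
  image: string;
}

function handleMetadata(cid: string, content: Uint8Array): TokenMetadata {
  const parsed = JSON.parse(new TextDecoder().decode(content)) as {
    name?: string;
    image?: string;
  };
  // File-data-source entities are immutable: build them once, never update.
  return { id: cid, name: parsed.name ?? "", image: parsed.image ?? "" };
}

// Hypothetical file content for one token's metadata
const fileBytes = new TextEncoder().encode(
  '{"name": "Token #1", "image": "ipfs://QmHypotheticalImageCid"}',
);
const meta = handleMetadata("QmHypotheticalMetadataCid", fileBytes);
console.log(meta.name); // Token #1
```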
#### حدود -فائل ڈیٹا سورس کے ہینڈلرز اور ہستیوں کو دیگر سب گراف ہستیوں سے الگ تھلگ کر دیا جاتا ہے، اس بات کو یقینی بناتے ہوئے کہ عمل درآمد کے وقت وہ تعیین پسند ہیں، اور اس بات کو یقینی بناتے ہیں کہ چین پر مبنی ڈیٹا سورسز کی کوئی آلودگی نہ ہو۔ مخصوص ہونا: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - فائل ڈیٹا سورسز کے ذریعے تخلیق کردہ ادارے ناقابل تغیر ہیں، اور انہیں اپ ڈیٹ نہیں کیا جا سکتا - فائل ڈیٹا کے ذرائع ہینڈلرز دوسرے فائل ڈیٹا سورسز سے اداروں تک رسائی حاصل نہیں کرسکتے ہیں - فائل ڈیٹا کے ذرائع سے وابستہ ہستیوں تک چین پر مبنی ہینڈلرز تک رسائی حاصل نہیں کی جا سکتی ہے -> اگرچہ یہ رکاوٹ زیادہ تر استعمال کے معاملات کے لیے مشکل نہیں ہونی چاہیے، لیکن یہ کچھ لوگوں کے لیے پیچیدگی پیدا کر سکتی ہے۔ براہ کرم ڈسکورڈ کے ذریعے رابطہ کریں اگر آپ کو اپنے فائل پر مبنی ڈیٹا کو سب گراف میں ماڈل کرنے میں مسئلہ درپیش ہے! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! مزید برآں، فائل ڈیٹا سورس سے ڈیٹا سورسز بنانا ممکن نہیں ہے، چاہے وہ آن چین ڈیٹا سورس ہو یا کوئی اور فائل ڈیٹا سورس۔ مستقبل میں یہ پابندی ختم ہو سکتی ہے. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. 
-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. 
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. 
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. @@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -چونکہ گرافٹنگ بیس ڈیٹا کو انڈیکس کرنے کے بجائے کاپی کرتا ہے، شروع سے انڈیکس کرنے کے مقابلے میں مطلوبہ بلاک میں سب گراف حاصل کرنا بہت تیز ہے، حالانکہ ابتدائی ڈیٹا کاپی بہت بڑے سب گراف کے لیے کئی گھنٹے لگ سکتی ہے۔ جب گرافٹ شدہ سب گراف کو شروع کیا جا رہا ہے، گراف نوڈ ان ہستی کی اقسام کے بارے میں معلومات کو لاگ کرے گا جو پہلے ہی کاپی ہو چکی ہیں. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. 
-گرافٹڈ سب گراف ایک گراف کیو ایل اسکیما استعمال کرسکتا ہے جو بیس سب گراف میں سے ایک سے مماثل نہیں ہے، لیکن اس کے ساتھ محض مطابقت رکھتا ہے۔ اسے اپنے طور پر ایک درست سب گراف سکیما ہونا چاہیے، لیکن درج ذیل طریقوں سے بنیادی سب گراف کے سکیما سے انحراف ہو سکتا ہے: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - یہ ہستی کی اقسام کو جوڑتا یا ہٹاتا ہے - یہ ہستی کی اقسام سے صفات کو ہٹاتا ہے @@ -560,4 +560,4 @@ When a subgraph whose manifest contains a `graft` block is deployed, Graph Node - یہ انٹرفیس کو جوڑتا یا ہٹاتا ہے - یہ تبدیل ہوتا ہے جس کے لیے ایک انٹرفیس لاگو کیا جاتا ہے -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/ur/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ur/subgraphs/developing/creating/assemblyscript-mappings.mdx index 28f2936bb14f..64013ef5df38 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/ur/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## کوڈ تخلیق کرنا -سمارٹ کنٹریکٹس، ایوینٹس اور ہستیوں کے ساتھ کام کرنا آسان اور ٹائپ محفوظ بنانے کے لیے، گراف CLI ڈیٹا کے ذرائع میں شامل سب گراف کے GraphQL اسکیما اور کنٹریکٹ ABIs سے اسمبلی سکرپٹ کی قسمیں تیار کر سکتا ہے. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. اس کے ساتھ کیا جاتا ہے @@ -80,7 +80,7 @@ If no value is set for a field in the new entity with the same ID, the field wil graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/ur/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ur/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ur/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ur/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ur/subgraphs/developing/creating/graph-ts/api.mdx index e8e3e92cc489..d936e38aa363 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ur/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: اسمبلی اسکرپٹ API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### ورژنز -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | ورزن | جاری کردہ نوٹس | | :-: | --- | @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ store.remove('Transfer', id) #### ایتھیریم کی اقسام کے لیے سپورٹ -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. 
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -مندرجہ ذیل مثال اس کی وضاحت کرتی ہے۔ جیسا کہ سب گراف اسکیما دیا گیا +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### سمارٹ کنٹریکٹ اسٹیٹ تک رسائی -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. ایک عام نمونہ اس کنٹریکٹ تک رسائی حاصل کرنا ہے جہاں سے کوئی واقعہ شروع ہوتا ہے۔ یہ مندرجہ ذیل کوڈ کے ساتھ حاصل کیا جاتا ہے: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -کوئی بھی دوسرا کنٹریکٹ جو سب گراف کا حصہ ہے، تیار کردہ کوڈ سے درآمد کیا جا سکتا ہے اور اسے ایک درست ایڈریس کا پابند کیا جا سکتا ہے. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. 
#### واپس آنے والی کالوں کو ہینڈل کرنا @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`.
Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### کرپٹو API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
diff --git a/website/src/pages/ur/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ur/subgraphs/developing/creating/graph-ts/common-issues.mdx index 4b7eaae1c362..40b245255886 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/ur/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: مشترکہ اسمبلی اسکرپٹ کے مسائل --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
diff --git a/website/src/pages/ur/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ur/subgraphs/developing/creating/install-the-cli.mdx index 14ead227c976..fc2b95935dc4 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ur/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: گراف CLI انسٹال کریں --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## جائزہ -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## شروع ہوا چاہتا ہے @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## سب گراف بنائیں ### ایک موجودہ کنٹریکٹ سے -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### ایک مثالی سب گراف سے -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is ABI فائل (فائلیں) آپ کے کنٹریکٹ (کنٹریکٹس) سے مماثل ہونی چاہیں. ABI کی فائلیں حاصل کرنے کے چند طریقے ہیں: - اگر آپ اپنا پراجیکٹ خود بنا رہے ہیں، تو ممکنہ طور پر آپ کو اپنے حالیہ ABIs تک رسائی حاصل ہوگی. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| ورزن | جاری کردہ نوٹس | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/ur/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ur/subgraphs/developing/creating/ql-schema.mdx index 833aee6e0499..71165595a765 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ur/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## جائزہ -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -ون-ٹو-مینی تعلقات کے لیے، تعلق کو ہمیشہ 'ون' سائیڈ پر رکھنا چاہیے، اور 'مینی' سائیڈ کو ہمیشہ اخذ کیا جانا چاہیے۔ 'مینی' سائیڈ پر ہستیوں کی ایک ایرے کو ذخیرہ کرنے کے بجائے اس طرح سے تعلق کو ذخیرہ کرنے کے نتیجے میں سب گراف کی انڈیکسنگ اور کیوریز دونوں کے لیے نمایاں طور پر بہتر کارکردگی ہوگی۔ عام طور پر، ہستیوں کی ایریز کو ذخیرہ کرنے سے اتنا ہی گریز کیا جانا چاہیے جتنا کہ عملی ہو. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
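The payoff on the query side is that a derived field reads exactly like a stored one. A sketch (assuming a `Token` type whose `balances` field is declared with `@derivedFrom(field: "token")`, in the style of the example that follows):

```graphql
query TokensWithBalances {
  tokens(first: 5) {
    id
    # `balances` is a virtual field: it is resolved from TokenBalance.token
    # at query time and is never written to from the mappings.
    balances {
      id
      amount
    }
  }
}
```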
#### مثال @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Many-to-many تعلقات کو ذخیرہ کرنے کے اس زیادہ وسیع طریقے کے نتیجے میں سب گراف کے لیے کم ڈیٹا ذخیرہ کیا جائے گا، اور اس لیے ایک سب گراف میں جو اکثر نمایاں طور پر انڈیکس اور کیوری کے لیے تیز تر ہوتا ہے. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore a Subgraph that is often dramatically faster to index and to query. ### اسکیما میں کامینٹس شامل کرنا @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## تعاون یافتہ زبانیں ہیں diff --git a/website/src/pages/ur/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ur/subgraphs/developing/creating/starting-your-subgraph.mdx index 3f0d9b8cde40..ec107baa126f 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ur/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## جائزہ -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| ورزن | جاری کردہ نوٹس | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ur/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ur/subgraphs/developing/creating/subgraph-manifest.mdx index de8a303b302d..b87cc57c42ce 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ur/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## جائزہ -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
مینی فیسٹ کے لیے اپ ڈیٹ کرنے کے لیے اہم اندراجات یہ ہیں: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
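As an illustration of the optional `address`, a data source that omits it in order to index matching events from every contract on the network might look like the following sketch (the name, ABI, handler, and block numbers are purely illustrative):

```yaml
dataSources:
  - kind: ethereum/contract
    name: AllTransfers
    network: mainnet
    source:
      # No `address` field: index matching events from all contracts
      abi: ERC20
      startBlock: 10000000
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ERC20
          file: ./abis/ERC20.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
```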
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## کال ہینڈلرز -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. کال ہینڈلرز صرف دو صورتوں میں سے ایک میں ٹرگر کریں گے: جب مخصوص کردہ فنکشن کو کنٹریکٹ کے علاوہ کسی دوسرے اکاؤنٹ سے کال جاتا ہے یا جب اسے سولیڈیٹی میں بیرونی کے طور پر نشان زد کیا جاتا ہے اور اسی کنٹریکٹ میں کسی دوسرے فنکشن کے حصے کے طور پر کال کیا جاتا ہے. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### کال ہینڈلر کی تعریف @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### میپنگ فنکشن -Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## بلاک ہینڈلرز -کنٹریکٹ ایونٹس یا فنکشن کالز کو سبسکرائب کرنے کے علاوہ، ایک سب گراف اپنے ڈیٹا کو اپ ڈیٹ کرنا چاہتا ہے جیسے جیسے چین میں نئے بلاکس شامل ہوتے ہیں. اس کو حاصل کرنے کے لیے ایک سب گراف ہر بلاک کے بعد یا پہلے سے طے شدہ فلٹر سے مماثل بلاکس کے بعد ایک فنکشن چلا سکتا ہے. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### معاون فلٹرز @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. بلاک ہینڈلر کے لیے فلٹر کی عدم موجودگی اس بات کو یقینی بنائے گی کہ ہینڈلر کو ہر بلاک کے لیے کال کیا جاتا ہے. ڈیٹا سورس میں ہر فلٹر کی قسم کے لیے صرف ایک بلاک ہینڈلر ہو سکتا ہے.
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### ونس فلٹر @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -ایک بار فلٹر کے ساتھ متعین ہینڈلر کو دوسرے تمام ہینڈلرز کے چلنے سے پہلے صرف ایک بار کال کیا جائے گا۔ یہ کنفیگریشن سب گراف کو انڈیکسنگ کے آغاز میں مخصوص کاموں کو انجام دیتے ہوئے، ہینڈلر کو ابتدائیہ ہینڈلر کے طور پر استعمال کرنے کی اجازت دیتی ہے. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### میپنگ فنکشن -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## بلاکس شروع کریں -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| ورزن | جاری کردہ نوٹس | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ur/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ur/subgraphs/developing/creating/unit-testing-framework.mdx index ba6feb650a07..891fc27e2cf7 100644 --- a/website/src/pages/ur/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ur/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: یونٹ ٹیسٹنگ فریم ورک --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## شروع ہوا چاہتا ہے @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
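A minimal Matchstick test might look like the following sketch (entity and field names are hypothetical, in the style of the Gravatar demo; this is AssemblyScript that only runs inside the `graph test` harness, not as a standalone script):

```typescript
import { assert, test, clearStore } from 'matchstick-as/assembly/index'
import { Gravatar } from '../generated/schema'

test('Gravatar entity is stored', () => {
  // Arrange: create and save an entity directly to the mocked store
  let gravatar = new Gravatar('0x1')
  gravatar.displayName = 'My Gravatar'
  gravatar.save()

  // Assert on the store state, then clean up between tests
  assert.fieldEquals('Gravatar', '0x1', 'displayName', 'My Gravatar')
  clearStore()
})
```

In a real project the entity would usually be created by invoking a mapping handler with a mocked event rather than saved directly, but the assert-then-clear shape stays the same.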
### CLI کے اختیارات @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### ڈیمو سب گراف +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### ویڈیو ٹیوٹوریلز -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im ہم وہاں جاتے ہیں - ہم نے اپنا پہلا ٹیسٹ بنایا ہے! 
👏 -اب ہمارے ٹیسٹ چلانے کے لیے آپ کو اپنے سب گراف روٹ فولڈر میں درج ذیل کو چلانے کی ضرورت ہے: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## ٹیسٹ کوریج -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## اضافی وسائل -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## تاثرات diff --git a/website/src/pages/ur/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ur/subgraphs/developing/deploying/multiple-networks.mdx index 0f23c9bdb044..018d2eb471e7 100644 --- a/website/src/pages/ur/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ur/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
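For orientation, the `--network` workflow discussed below reads its per-network settings from a `networks.json` file that maps each network name to settings for each dataSource. A minimal sketch, assuming a single dataSource named `Gravity` (the addresses and start blocks here are placeholders, not real deployments):

```json
{
  "mainnet": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000001",
      "startBlock": 6175244
    }
  },
  "sepolia": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000002",
      "startBlock": 1234567
    }
  }
}
```

With a file like this in place, building with `--network sepolia` swaps the `sepolia` entry into `subgraph.yaml`.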
-## سب گراف کو متعدد نیٹ ورکس پر تعینات کرنا +## Deploying the Subgraph to multiple networks -کچھ معاملات میں، آپ ایک ہی سب گراف کو متعدد نیٹ ورکس پر اس کے تمام کوڈ کی نقل کیے بغیر تعینات کرنا چاہیں گے۔ اس کے ساتھ آنے والا بنیادی چیلنج یہ ہے کہ ان نیٹ ورکس پر کنٹریکٹ ایڈریس مختلف ہیں. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file should now look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are also generated from templates.
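Under the Mustache/Handlebars approach described above, the manifest itself becomes a template. A hypothetical `subgraph.template.yaml` excerpt — the file name and the `{{network}}`/`{{address}}` placeholder keys are assumptions and must match whatever keys your per-network config files define:

```yaml
# Hypothetical template excerpt; the templating tool substitutes
# {{network}} and {{address}} from the chosen network's config file
# at build time.
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: '{{network}}'
    source:
      address: '{{address}}'
      abi: Gravity
```

Rendering this template once per config file yields one ready-to-deploy `subgraph.yaml` per network.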
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## سب گراف سٹوڈیو سب گراف آرکائیو پالیسی +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -اس پالیسی سے متاثر ہونے والے ہر سب گراف کے پاس زیر بحث ورژن کو واپس لانے کا اختیار ہے. +Every Subgraph affected by this policy has the option to bring the version in question back.
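The archive criteria above amount to a simple conjunction; a sketch of the policy as a predicate, where the type and field names are hypothetical and only the thresholds come from the criteria listed above:

```typescript
// Sketch of the Studio archive policy as a predicate. `VersionInfo` and
// its field names are hypothetical; the 45-day and 30-day thresholds come
// from the criteria above.
interface VersionInfo {
  published: boolean;         // published to the network (or pending publish)
  ageDays: number;            // days since the version was created
  daysSinceLastQuery: number; // days since the Subgraph was last queried
}

function isArchivable(v: VersionInfo): boolean {
  return !v.published && v.ageDays >= 45 && v.daysSinceLastQuery >= 30;
}

console.log(isArchivable({ published: false, ageDays: 60, daysSinceLastQuery: 45 })); // true
console.log(isArchivable({ published: true, ageDays: 60, daysSinceLastQuery: 45 })); // false
```

Note that all three conditions must hold at once; publishing the version or querying the Subgraph resets its eligibility.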
-## سب گراف کی صحت کی جانچ کرنا +## Checking Subgraph health -اگر ایک سب گراف کامیابی کے ساتھ مطابقت پذیر ہوتا ہے، تو یہ ایک اچھی علامت ہے کہ یہ ہمیشہ کے لیے اچھی طرح چلتا رہے گا۔ تاہم، نیٹ ورک پر نئے محرکات آپ کے سب گراف کو بغیر جانچ کی خرابی کی حالت کو نشانہ بنا سکتے ہیں یا کارکردگی کے مسائل یا نوڈ آپریٹرز کے ساتھ مسائل کی وجہ سے یہ پیچھے پڑنا شروع کر سکتا ہے. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. 
`health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/ur/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ur/subgraphs/developing/deploying/using-subgraph-studio.mdx index 2d16e87e3f7a..f41499fd9c51 100644 --- a/website/src/pages/ur/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ur/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- مخصوص سب گرافس کے لۓ API کیز بنائیں اور ان کا انتظام کریں +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### گراف نیٹ ورک کے ساتھ سب گراف مطابقت -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- درج ذیل خصوصیات میں سے کوئی بھی استعمال نہیں کرنا چاہیے: - - ipfs.cat & ipfs.map - - Non-fatal errors - - گرافٹنگ +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## سب گراف ورژن کی خودکار آرکائیونگ -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/ur/subgraphs/developing/developer-faq.mdx b/website/src/pages/ur/subgraphs/developing/developer-faq.mdx index ca250f41cc35..955c825816d1 100644 --- a/website/src/pages/ur/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/ur/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### سب گراف کیا ہے؟ +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. کیا میں گٹ ہب کا اکاونٹ بدل سکتا ہوں جو میرے سب گراف کے ساتھ وابستہ ہے؟ +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -آپ کو سب گراف کو دوبارہ تعینات کرنا ہوگا، لیکن اگر سب گراف ID (IPFS ہیش) تبدیل نہیں ہوتا ہے، تو اسے شروع سے مطابقت پذیر نہیں ہونا پڑے گا. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -سب گراف کے اندر، ایونٹس کو ہمیشہ اسی ترتیب سے پروسیس کیا جاتا ہے جس ترتیب سے وہ بلاکس میں ظاہر ہوتے ہیں، قطع نظر اس کے کہ یہ متعدد کنٹریکٹس میں ہے یا نہیں. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? جی ہاں! مندرجہ ذیل کمانڈ کو آزمائیں، "تنظیم/سب گراف نام" کو اس کے تحت شائع ہونے والی تنظیم کے ساتھ تبدیل کرتے ہوئے اور آپ کے سب گراف کا نام: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/ur/subgraphs/developing/introduction.mdx b/website/src/pages/ur/subgraphs/developing/introduction.mdx index e7ab36598ccb..aceaf166c362 100644 --- a/website/src/pages/ur/subgraphs/developing/introduction.mdx +++ b/website/src/pages/ur/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
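The earlier FAQ answer on unique entity IDs (transaction hash + log index) can be sketched as follows. This is a plain TypeScript sketch standing in for the AssemblyScript mappings; in `graph-ts` the same idea is usually expressed with `Bytes` helpers such as `event.transaction.hash.concatI32(event.logIndex.toI32())`. The function name is illustrative, not part of any API.

```typescript
// Sketch only: derive an entity ID from the transaction hash plus the
// log index. As the FAQ notes, this is unique as long as only one
// entity is created per event.
function makeEntityId(txHash: string, logIndex: number): string {
  return `${txHash}-${logIndex}`;
}

// e.g. makeEntityId("0xabc123", 7) yields "0xabc123-7"
```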
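The earlier FAQ answer on finding the latest block a Subgraph has indexed can also be done from a client, since every Subgraph endpoint exposes a `_meta` field. A minimal sketch, assuming a Node 18+ runtime with global `fetch` and with the endpoint URL as a placeholder for your Subgraph's query URL:

```typescript
// Query the Subgraph's own endpoint for the latest indexed block.
const LATEST_BLOCK_QUERY = "{ _meta { block { number } } }";

async function latestIndexedBlock(endpoint: string): Promise<number> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: LATEST_BLOCK_QUERY }),
  });
  const body = await res.json();
  // `_meta.block.number` reports the block the Subgraph has indexed up to.
  return body.data._meta.block.number;
}
```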
diff --git a/website/src/pages/ur/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ur/subgraphs/developing/managing/deleting-a-subgraph.mdx index f078c166db88..b8c2330ca49d 100644 --- a/website/src/pages/ur/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ur/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- کیوریٹرز اب سب گراف پر سگنل نہیں دے سکیں گے. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/ur/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ur/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/ur/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ur/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ur/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ur/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 36ce4650b242..0029f0a41559 100644 --- a/website/src/pages/ur/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ur/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: ڈیسینٹرلائزڈ نیٹ ورک پر سب گراف شائع کرنا +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### شائع شدہ سب گراف کے لیے میٹا ڈیٹا کو اپ ڈیٹ کرنا +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ur/subgraphs/developing/subgraphs.mdx b/website/src/pages/ur/subgraphs/developing/subgraphs.mdx index 25961936c677..f44bb2f8d73d 100644 --- a/website/src/pages/ur/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ur/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: سب گراف ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## سب گراف لائف سائیکل -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
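The `mapping.ts` file described earlier translates event data into entities defined in the schema. Real mappings are written in AssemblyScript with `graph-ts` types, but the shape of the work can be sketched in plain TypeScript; every name below is illustrative:

```typescript
// Conceptual sketch: decode an event, build an entity keyed by
// transaction hash + log index, ready to hand to the store.
interface TransferEvent {
  from: string;
  to: string;
  value: bigint;
}

interface TransferEntity {
  id: string; // unique per event: tx hash + log index
  from: string;
  to: string;
  value: string; // stored as a string, like a GraphQL BigInt
}

function handleTransfer(
  event: TransferEvent,
  txHash: string,
  logIndex: number,
): TransferEntity {
  return {
    id: `${txHash}-${logIndex}`,
    from: event.from,
    to: event.to,
    value: event.value.toString(),
  };
}
```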
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ur/subgraphs/explorer.mdx b/website/src/pages/ur/subgraphs/explorer.mdx index c2f581c2a0f1..6074ff5c5b2f 100644 --- a/website/src/pages/ur/subgraphs/explorer.mdx +++ b/website/src/pages/ur/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: گراف ایکسپلورر --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## جائزہ -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- سب گرافس پر سگنل/غیر سگنل +- Signal/Un-signal on Subgraphs - مزید تفصیلات دیکھیں جیسے چارٹس، موجودہ تعیناتی ID، اور دیگر میٹا ڈیٹا -- سب گراف کی ماضی کی تکرار کو دریافت کرنے کے لیے ورژنز کو تبدیل کریں -- GraphQL کے ذریعے سب گرافس سے کیوری کریں -- پلے گراؤنڈ میں سب گراف کو جانچیں -- انڈیکسرز دیکھیں جو کسی خاص سب گراف پر انڈیکس کر رہے ہیں +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - سب گراف کے اعدادوشمار (مختصات، کیوریٹرز، وغیرہ) -- اس ہستی کو دیکھیں جس نے سب گراف کوشائع کیا +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - زیادہ سے زیاد ڈیلیلگیشن صلاحیت - ڈیلیلگیٹڈ حصص کی زیادہ سے زیادہ مقدار کو انڈیکسر نتیجہ خیز طور پر قبول کر سکتا ہے۔ ایک اضافی حصص کو مختص کرنے یا انعامات کے حساب کتاب کے لیے استعمال نہیں کیا جا سکتا. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### کیوریٹرز -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### سب گرافس ٹیب -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### انڈیکسنگ ٹیب -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. 
اس حصے میں آپ کے نیٹ انڈیکسر انعامات اور نیٹ استفسار کی فیس کے بارے میں تفصیلات بھی شامل ہوں گی۔ آپ کو درج ذیل میٹرکس نظر آئیں گے: @@ -223,13 +223,13 @@ In the Delegators tab, you can find the details of your active and historical de ### کیوریٹنگ ٹیب -کیوریشن ٹیب میں، آپ کو وہ تمام سب گراف مل جائیں گے جن پر آپ سگنل دے رہے ہیں (اس طرح آپ کو کیوری کی فیس وصول کرنے کے قابل بناتے ہیں). سگنلنگ کیوریٹرز کو انڈیکسرز کو نمایاں کرنے کی اجازت دیتا ہے کے کون سے سب گرافس قابل قدر اور قابل اعتماد ہیں، اس طرح یہ اشارہ کرتا ہے کہ انہیں انڈیکس کرنے کی ضرورت ہے. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. اس ٹیب کے اندر، آپ کو ایک جائزہ ملے گا: -- سگنل کی تفصیلات کے ساتھ آپ جن سب گرافس کو کیورٹنگ کر رہے ہیں -- فی سب گراف کا مجموعی اشتراک کریں -- استفسار کے انعامات فی سب گراف +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - تاریخ کی تفصیلات پر اپ ڈیٹ کیا گیا ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/ur/subgraphs/guides/_meta.js b/website/src/pages/ur/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/ur/subgraphs/guides/_meta.js +++ b/website/src/pages/ur/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/ur/subgraphs/guides/arweave.mdx b/website/src/pages/ur/subgraphs/guides/arweave.mdx index 08e6c4257268..26e34b1be5ab 100644 --- a/website/src/pages/ur/subgraphs/guides/arweave.mdx +++ b/website/src/pages/ur/subgraphs/guides/arweave.mdx @@ -1,50 +1,50 @@ --- -title: Building Subgraphs on Arweave +title: بناۓ گئے سب گرافز آرویو(Arweave) پر --- > Arweave support in Graph Node and on Subgraph Studio is in beta: 
please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! -In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +اس گائڈ میں، آپ سیکھیں گے کہ آرویو(Arweave) بلاکچین کو انڈیکس کرنے کیلئے سب گرافز بنانے اور مستعمل کرنے کا طریقہ کار کیسے ہے۔ -## What is Arweave? +## آرویو(Arweave) کیا ہے؟ -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +آرویو (Arweave) پروٹوکول ڈیولپرز کو اجازت دیتا کے وہ ڈیٹا کو مستقل طور پر اسٹور کرے اور یہی آرویو(Arweave) اور IPFS میں سب سے بڑا فرق ہے،جہاں IPFS میں خصوصیئت کی کمی ہے؛مستقل مزاجی ، اور فایلز جو آرویو(Arweave) پر اسٹور ہوتی ہیں بدل یا ختم نہیں ہو سکتی. -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check: +آرویو(Arweave) نے پہلے ہی بہت سی کتابخانےاں تیار کی ہیں جو مختلف پروگرامنگ زبانوں میں پروٹوکول کو اندر ملانے کے لئے بنائی گئی ہیں۔ مزید معلومات کے لئے آپ یہ چیک کر سکتے ہیں: - [Arwiki](https://arwiki.wiki/#/en/main) - [Arweave Resources](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## آرویو(Arweave) سب گرافز کیا ہوتے ہیں؟ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). [Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. 
-## Building an Arweave Subgraph +## آرویو(Arweave) سب گراف بنانا -To be able to build and deploy Arweave Subgraphs, you need two packages: +آرویو کے سب گراف بنانے اور تعینات کرنے کے لئے،آپ کو دو پیکجوں کی ضرورت ہے: 1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. 2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. -## Subgraph's components +## سب گراف کے حصے There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +دلچسپی کے ڈیٹا کے ذرایع کو بیان کرتا ہے،اور کیسے ان پر کاروائ کی جاۓ۔ آرویو ایک نئ طرح کا ڈیٹا کا ذریعہ ہے. ### 2. Schema - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +یہاں آپ بیان کرتے ہیں کے کونسا ڈیٹا آپ کے سب گراف کا کیوری گراف کیو ایل کا استعمال کرتے ہوۓ کر سکے۔یہ دراصل اے پی آی(API) کے ماڈل سے ملتا ہے،جہاں ماڈل درخواست کے جسم کے ڈھانچے کو بیان کرتا ہے. The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. 
+یہ وہ منطق جو اس بات کا پتہ لگاتا ہے کے کیسے ڈیٹا کو بازیافت اور مہفوظ کیا جاۓ جب کوئ اس ڈیٹا کے ذخیرہ سے تعامل کرے جسے آپ سن رہے ہیں۔اس ڈیٹا کا ترجمہ کیا جاتا ہے اور آپ کے درج کردہ اسکیما کی بنیاد پر مہفوظ کیا جاتا ہے. During Subgraph development there are two key commands: @@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## سب گراف مینی فیسٹ کی تعریف The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: @@ -84,24 +84,24 @@ dataSources: - Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- آرویو ڈیٹا کے ذرائع ایک اختیاری source.owner فیلڈ متعارف کراتے ہیں، جو آرویو والیٹ کی عوامی کلید ہے -Arweave data sources support two types of handlers: +آرویو ڈیٹا کے ذرائع دو قسم کے ہینڈلرز کو سپورٹ کرتے ہیں: - `blockHandlers` - Run on every new Arweave block. No source.owner is required. - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` -> The source.owner can be the owner's address, or their Public Key. - -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> Source.owner مالک کا پتہ، یا ان کی عوامی کلید ہو سکتا ہے. 
+> +> ٹرانزیکشنز آرویو پرما ویب کے تعمیراتی بلاکس ہیں اور یہ آخری صارفین کے ذریعہ تخلیق کردہ اشیاء ہیں. +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. -## Schema Definition +## اسکیما کی تعریف Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -## AssemblyScript Mappings +## اسمبلی اسکرپٹ سب میپنک The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -158,11 +158,11 @@ Once your Subgraph has been created on your Subgraph Studio dashboard, you can d graph deploy --access-token ``` -## Querying an Arweave Subgraph +## آرویو سب گراف سے کیوری کرنا The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## سب گراف کی مثال Here is an example Subgraph for reference: @@ -174,19 +174,19 @@ Here is an example Subgraph for reference: No, a Subgraph can only support data sources from one chain/network. -### Can I index the stored files on Arweave? +### کیا میں آرویو پر ذخیرہ شدہ فائلوں کو انڈیکس کر سکتا ہوں؟ -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +فی الحال، دی گراف صرف آرویو کو بلاک چین (اس کے بلاکس اور لین دین) کے طور پر ترتیب دے رہا ہے. ### Can I identify Bundlr bundles in my Subgraph? -This is not currently supported. +یہ فی الحال سپورٹڈ نہیں ہے. -### How can I filter transactions to a specific account? +### میں کسی مخصوص اکاؤنٹ میں لین دین کو کیسے فلٹر کر سکتا ہوں؟ -The source.owner can be the user's public key or account address. +Source.owner صارف کی عوامی کلید یا اکاؤنٹ ایڈریس ہو سکتا ہے. 
-### What is the current encryption format? +### موجودہ خفیہ کاری کا فارمیٹ کیا ہے؟ Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). diff --git a/website/src/pages/ur/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ur/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..ea2163b4caff 100644 --- a/website/src/pages/ur/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/ur/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## جائزہ -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. 
Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +For a list of chains added run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress ``` -or +یا ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3. 
Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/ur/subgraphs/guides/enums.mdx b/website/src/pages/ur/subgraphs/guides/enums.mdx index 9f55ae07c54b..97a0c22fd89e 100644 --- a/website/src/pages/ur/subgraphs/guides/enums.mdx +++ b/website/src/pages/ur/subgraphs/guides/enums.mdx @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## اضافی وسائل For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
diff --git a/website/src/pages/ur/subgraphs/guides/grafting.mdx b/website/src/pages/ur/subgraphs/guides/grafting.mdx index d9abe0e70d2a..8003a91b2429 100644 --- a/website/src/pages/ur/subgraphs/guides/grafting.mdx +++ b/website/src/pages/ur/subgraphs/guides/grafting.mdx @@ -1,46 +1,46 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: ایک کنٹریکٹ کو تبدیل کریں اور اس کی تاریخ کو گرافٹنگ کے ساتھ رکھیں --- In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. -## What is Grafting? +## گرافٹنگ کیا ہے؟ Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. 
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- یہ ہستی کی اقسام کو جوڑتا یا ہٹاتا ہے +- یہ ہستی کی اقسام سے صفات کو ہٹاتا ہے +- یہ ہستی کی قسموں میں کالعدم صفات کو شامل کرتا ہے +- یہ غیر کالعدم صفات کو کالعدم صفات میں بدل دیتا ہے +- یہ enums میں اقدار کا اضافہ کرتا ہے +- یہ انٹرفیس کو جوڑتا یا ہٹاتا ہے +- یہ تبدیل ہوتا ہے جس کے لیے ایک انٹرفیس لاگو کیا جاتا ہے -For more information, you can check: +مزید معلومات کے لۓ، آپ دیکہ سکتے ہیں: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. -## Important Note on Grafting When Upgrading to the Network +## نیٹ ورک میں اپ گریڈ کرتے وقت گرافٹنگ پر اہم نوٹ > **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network -### Why Is This Important? +### یہ کیوں اہم ہے؟ Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. -### Best Practices +### بہترین طریقے **Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. 
**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. -By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +ان رہنما خطوط پر عمل پیرا ہو کر، آپ خطرات کو کم کرتے ہیں اور منتقلی کے ایک ہموار عمل کو یقینی بناتے ہیں. -## Building an Existing Subgraph +## ایک موجودہ سب گراف بنانا Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: @@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h > Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). -## Subgraph Manifest Definition +## سب گراف مینی فیسٹ کی تعریف The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: @@ -83,7 +83,7 @@ dataSources: - The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. -## Grafting Manifest Definition +## گرافٹنگ مینی فیسٹ کی تعریف Grafting requires adding two new items to the original Subgraph manifest: @@ -101,7 +101,7 @@ graft: The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting -## Deploying the Base Subgraph +## بیس سب گراف تعینات کرنا 1. 
Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` 2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +یہ کچھ اس طرح لوٹاتا ہے: ``` { @@ -140,9 +140,9 @@ It returns something like this: Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. -## Deploying the Grafting Subgraph +## گرافٹنگ سب گراف کو تعینات کرنا -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +گرافٹ متبادل subgraph.yaml کے پاس ایک نیا کنٹریکٹ ایڈریس ہوگا۔ یہ اس وقت ہو سکتا ہے جب آپ اپنے ڈیپ کو اپ ڈیٹ کرتے ہیں، کسی کنٹریکٹ کو دوبارہ استعمال کرتے ہیں، وغیر. 1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` 2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +اسے مندرجہ ذیل کو واپس کرنا چاہئے: ``` { @@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph- Congrats! You have successfully grafted a Subgraph onto another Subgraph. 
-## Additional Resources +## اضافی وسائل If you want more experience with grafting, here are a few examples for popular contracts: diff --git a/website/src/pages/ur/subgraphs/guides/near.mdx b/website/src/pages/ur/subgraphs/guides/near.mdx index e78a69eb7fa2..325e114ac248 100644 --- a/website/src/pages/ur/subgraphs/guides/near.mdx +++ b/website/src/pages/ur/subgraphs/guides/near.mdx @@ -1,10 +1,10 @@ --- -title: Building Subgraphs on NEAR +title: سب گرافس کو NEAR پر بنانا --- This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). -## What is NEAR? +## NEAR کیا ہے؟ [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. @@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- بلاک ہینڈلرز: یہ ہر نۓ بلاک پر چلتے ہیں +- ریسیپٹ ہینڈلرز: ہر بار جب کسی مخصوص اکاؤنٹ پر کوئی پیغام عمل میں آۓ تو چلتا ہے [From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> نظام میں ایک ریسیپٹ واحد قابل عمل شے ہے۔ جب ہم NEAR پلیٹ فارم پر "ایک ٹرانزیکشن پر کارروائی" کے بارے میں بات کرتے ہیں، تو اس کا مطلب بالآخر کسی وقت "ریسیپٹ لگانا" ہوتا ہے. -## Building a NEAR Subgraph +## NEAR سب گراف بنانا `@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. 
@@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -### Subgraph Manifest Definition +### سب گراف مینی فیسٹ کی تعریف The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: @@ -85,16 +85,16 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +قریبی ڈیٹا ذرائع دو قسم کے ہینڈلرز کی حمایت کرتے ہیں: - `blockHandlers`: run on every new NEAR block. No `source.account` is required. - `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). -### Schema Definition +### اسکیما کی تعریف Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). -### AssemblyScript Mappings +### اسمبلی اسکرپٹ سب میپنک The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). @@ -169,7 +169,7 @@ Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/g This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. 
-## Deploying a NEAR Subgraph +## NEAR سب گراف کی تعیناتی Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). @@ -191,14 +191,14 @@ $ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ ``` -### Local Graph Node (based on default configuration) +### مقامی گراف نوڈ (پہلے سے طے شدہ ترتیب پر مبنی) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 @@ -216,21 +216,21 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. You can } ``` -### Indexing NEAR with a Local Graph Node +### NEAR کو مقامی گراف نوڈ سے انڈیکس کرنا -Running a Graph Node that indexes NEAR has the following operational requirements: +ایک گراف نوڈ چلانا جو NEAR کو انڈیکس کرتا ہے اس کے لیے درج ذیل آپریشنل تقاضے ہوتے ہیں: -- NEAR Indexer Framework with Firehose instrumentation -- NEAR Firehose Component(s) -- Graph Node with Firehose endpoint configured +- فائر ہوز انسٹرومینٹیشن کے ساتھ NEAR انڈیکسر فریم ورک +- NEAR فائر ہوز اجزاء +- فائر ہوز اینڈ پوائنٹ کے ساتھ گراف نوڈ کنفیگر ہو گیا -We will provide more information on running the above components soon. +ہم جلد ہی مندرجہ بالا اجزاء کو چلانے کے بارے میں مزید معلومات فراہم کریں گے. -## Querying a NEAR Subgraph +## NEAR سب گراف کا کیوری کرنا The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. -## Example Subgraphs +## سب گراف کی مثال Here are some example Subgraphs for reference: @@ -240,7 +240,7 @@ Here are some example Subgraphs for reference: ## FAQ -### How does the beta work? +### بیٹا کیسے کام کرتا ہے؟ NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. 
Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! @@ -250,9 +250,9 @@ No, a Subgraph can only support data sources from one chain/network. ### Can Subgraphs react to more specific triggers? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +فی الحال، صرف بلاک اور رسید کے محرکات تعاون یافتہ ہیں۔ ہم ایک مخصوص اکاؤنٹ پر فنکشن کالز کے محرکات کی چھان بین کر رہے ہیں۔ ہم ایونٹ کے محرکات کو سپورٹ کرنے میں بھی دلچسپی رکھتے ہیں، ایک بار جب NEAR کو مقامی ایونٹ سپورٹ مل جائے. -### Will receipt handlers trigger for accounts and their sub-accounts? +### کیا رسید ہینڈلرز اکاؤنٹس اور ان کے سب اکاؤنٹس کو متحرک کریں گے؟ If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: @@ -264,11 +264,11 @@ accounts: ### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? -This is not supported. We are evaluating whether this functionality is required for indexing. +یہ تعاون یافتہ نہیں ہے۔ ہم اس بات کا جائزہ لے رہے ہیں کہ آیا یہ فعالیت انڈیکسنگ کے لیے درکار ہے. ### Can I use data source templates in my NEAR Subgraph? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +یہ فی الحال تعاون یافتہ نہیں ہے۔ ہم اس بات کا جائزہ لے رہے ہیں کہ آیا یہ فعالیت اشاریہ سازی کے لیے درکار ہے. ### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? @@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. 
In the interim, y If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. -## References +## حوالہ جات - [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/ur/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ur/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..2ef4a2ac97fd 100644 --- a/website/src/pages/ur/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/ur/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## جائزہ We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/src/pages/ur/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ur/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..c8b09c0c30fc --- /dev/null +++ b/website/src/pages/ur/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. 
+ +## تعارف + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. 
+ +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but entities composed on top of them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t use normal event handlers, call handlers, or block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs. + +## شروع کریں + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## اضافی وسائل + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
diff --git a/website/src/pages/ur/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ur/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..70889905808f 100644 --- a/website/src/pages/ur/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/ur/subgraphs/guides/subgraph-debug-forking.mdx @@ -1,22 +1,22 @@ --- -title: Quick and Easy Subgraph Debugging Using Forks +title: فورکس کا استعمال کرتے ہوۓ تیز اور آسان ڈیبگنگ --- As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! -## Ok, what is it? +## ٹھیک ہے، یہ ہے کیا؟ **Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. -## What?! How? +## کیا؟! کیسے؟ When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. -## Please, show me some code! 
+## براۓ مہربانی، مجہے کچھ کوڈ دکھائیں! To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. @@ -46,31 +46,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. -The usual way to attempt a fix is: +درست کرنے کی کوشش کرنے کا معمول کا طریقہ یہ ہے: -1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). +1. میپنگ کے ماخذ میں تبدیلی کریں، جس کے بارے میں آپ کو یقین ہے کہ مسئلہ حل ہو جائے گا (جبکہ میں جانتا ہوں کہ ایسا نہیں ہوگا). 2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +3. اس کے مطابقت پذیر ہونے کا انتظار کریں. +4. اگر یہ دوبارہ ٹوٹ جاتا ہے تو 1 پر واپس جائیں، ورنہ: ہورے! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue. +1. میپنگ کے ماخذ میں تبدیلی کریں، جس سے آپ کو یقین ہے کہ مسئلہ حل ہو جائے گا. 2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +3. اگر یہ دوبارہ ٹوٹ جاتا ہے، تو 1 پر واپس جائیں، ورنہ: ہورے! -Now, you may have 2 questions: +اب، آپ کے پاس 2 سوالات ہوسکتے ہیں: -1. fork-base what??? -2. Forking who?! +1. فورک بیس کیا؟؟؟ +2. فورکنگ کون؟! 
-And I answer: +اور میرا جواب: 1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +2. فورکنگ آسان ہے، پریشان ہونے کی ضرورت نہیں ہے: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 @@ -78,7 +78,7 @@ $ graph deploy --debug-fork --ipfs http://localhos Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! -So, here is what I do: +تو، یہاں میں کیا کرتا ہوں: 1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). diff --git a/website/src/pages/ur/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/ur/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..f75738833744 100644 --- a/website/src/pages/ur/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/ur/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,10 +1,10 @@ --- -title: Safe Subgraph Code Generator +title: محفوظ سب گراف کوڈ جنریٹر --- [Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. -## Why integrate with Subgraph Uncrashable? +## سب گراف ان کریش ایبل کے ساتھ کیوں ضم کیا جائے؟ - **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. 
Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. @@ -16,11 +16,11 @@ title: Safe Subgraph Code Generator - The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. +- فریم ورک میں ہستی متغیرات کے گروپس کے لیے حسب ضرورت، لیکن محفوظ، سیٹر فنکشنز بنانے کا ایک طریقہ (کنفگ فائل کے ذریعے) بھی شامل ہے۔ اس طرح صارف کے لیے کسی باسی گراف ہستی کو لوڈ/استعمال کرنا ناممکن ہے اور فنکشن کے لیے مطلوبہ متغیر کو محفوظ کرنا یا سیٹ کرنا بھولنا بھی ناممکن ہے. - Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. -Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. +گراف CLI کوڈجن کمانڈ کا استعمال کرتے ہوئے سب گراف ان کریش ایبل کو اختیاری پرچم کے طور پر چلایا جا سکتا ہے. ```sh graph codegen -u [options] [] diff --git a/website/src/pages/ur/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ur/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..028337a3cab5 100644 --- a/website/src/pages/ur/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/ur/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. 
[Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -31,7 +31,7 @@ You must have [Node.js](https://nodejs.org/) and a package manager of your choic On your local machine, run the following command: -Using [npm](https://www.npmjs.com/): +[npm](https://www.npmjs.com/) کا استعمال: ```sh npm install -g @graphprotocol/graph-cli@latest @@ -74,7 +74,7 @@ graph deploy --ipfs-hash You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. -#### Example +#### مثال [CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: @@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). -### Additional Resources +### اضافی وسائل - To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). - To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
diff --git a/website/src/pages/ur/subgraphs/querying/best-practices.mdx b/website/src/pages/ur/subgraphs/querying/best-practices.mdx index 3c367fdac0ee..428dbc57ae1f 100644 --- a/website/src/pages/ur/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/ur/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: بہترین طریقوں سے کیوری کرنا The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- کراس چین سب گراف ہینڈلنگ: ایک کیوری میں متعدد سب گرافس سے کیوری کرنا +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - مکمل طور پر ٹائپ شدہ نتیجہ @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ur/subgraphs/querying/from-an-application.mdx b/website/src/pages/ur/subgraphs/querying/from-an-application.mdx index 9e2b78e6e628..1d03ed97f144 100644 --- a/website/src/pages/ur/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ur/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: ایپلیکیشن سے کیوری +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. @@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- کراس چین سب گراف ہینڈلنگ: ایک کیوری میں متعدد سب گرافس سے کیوری کرنا +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - مکمل طور پر ٹائپ شدہ نتیجہ @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### پہلا قدم @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### پہلا قدم @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### پہلا قدم diff --git a/website/src/pages/ur/subgraphs/querying/graph-client/README.md b/website/src/pages/ur/subgraphs/querying/graph-client/README.md index 416cadc13c6f..3e01132a0a4d 100644 --- a/website/src/pages/ur/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ur/subgraphs/querying/graph-client/README.md @@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## شروع ہوا چاہتا ہے You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -62,7 +62,7 @@ sources: Now, create a runtime artifact by running The Graph Client CLI: ```sh -graphclient build +graphclient build ``` > Note: you need to run this with `yarn` prefix, or add that as a script in your `package.json`. @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### مثالیں You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/ur/subgraphs/querying/graph-client/live.md b/website/src/pages/ur/subgraphs/querying/graph-client/live.md index e6f726cb4352..1a4e7a89e956 100644 --- a/website/src/pages/ur/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/ur/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## شروع ہوا چاہتا ہے Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/ur/subgraphs/querying/graphql-api.mdx b/website/src/pages/ur/subgraphs/querying/graphql-api.mdx index d981496823bc..79ecfe95c599 100644 --- a/website/src/pages/ur/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ur/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). 
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -یہ کارآمد ہو سکتا ہے اگر آپ صرف ان ہستیوں کو لانے کے خواہاں ہیں جو تبدیل ہو چکی ہیں، مثال کے طور پر آخری بار جب آپ نے پول کیا تھا۔ یا متبادل طور پر یہ تحقیق کرنا یا ڈیبگ کرنا مفید ہو سکتا ہے کہ آپ کے سب گراف میں ہستی کیسے تبدیل ہو رہی ہیں (اگر بلاک فلٹر کے ساتھ ملایا جائے تو، آپ صرف ان ہستیوں کو الگ تھلگ کر سکتے ہیں جو ایک مخصوص بلاک میں تبدیل ہوئی ہیں). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### فل ٹیکسٹ تلاش کے کیوریز -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. 
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### سب گراف میٹا ڈیٹا -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. 
This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -اگر کوئی بلاک فراہم کیا جاتا ہے تو، میٹا ڈیٹا اس بلاک کا ہوتا ہے، اگر تازہ ترین انڈیکسڈ بلاک استعمال نہیں کیا جاتا ہے۔ اگر فراہم کیا گیا ہو، تو بلاک سب گراف کے اسٹارٹ بلاک کے بعد ہونا چاہیے، اور حال ہی میں انڈیکس کیے گئے بلاک سے کم یا اس کے برابر ہونا چاہیے. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s - ہیش: بلاک کی ہیش - نمبر: بلاک نمبر -- ٹائم اسٹیمپ: بلاک کا ٹائم اسٹیمپ، اگر دستیاب ہو (یہ فی الحال صرف ای وی ایم نیٹ ورکس کو انڈیکس کرنے والے سب گرافس کے لیے دستیاب ہے) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ur/subgraphs/querying/introduction.mdx b/website/src/pages/ur/subgraphs/querying/introduction.mdx index 338bd95d3782..1bebc072eacf 100644 --- a/website/src/pages/ur/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ur/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## جائزہ -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. 
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/ur/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ur/subgraphs/querying/managing-api-keys.mdx index 505edb314906..72e0694153f0 100644 --- a/website/src/pages/ur/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ur/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: اپنی API کلیدوں کا انتظام کرنا +title: Managing API keys --- ## جائزہ -API keys are needed to query subgraphs. 
They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - GRT کو تعداد جو خرچ ہوئ ہے 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - اپنی API کلید استعمال کرنے کے لیے مجاز ڈومین ناموں کو دیکھیں اور ان کا نظم کریں - - سب گراف تفویض کریں جن سے آپ کی API کلید سے کیوری کیا جا سکتا ہے + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ur/subgraphs/querying/python.mdx b/website/src/pages/ur/subgraphs/querying/python.mdx index b5abcce57b6d..2f9e2327b65e 100644 --- a/website/src/pages/ur/subgraphs/querying/python.mdx +++ b/website/src/pages/ur/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). 
It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). 
```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/ur/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ur/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 14128acd1789..3a51d42586d1 100644 --- a/website/src/pages/ur/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ur/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: سب گراف شناخت بمقابلہ تعیناتی شناخت --- -سب گراف کی شناخت سب گراف شناخت کے ذریعے کی جاتی ہے، اور سب گراف کے ہر ورژن کی شناخت ایک تعیناتی شناخت سے ہوتی ہے۔ +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## تعیناتی شناخت -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). 
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this requires updating the query code manually every time a new version of the Subgraph is published. مثالی اینڈ پوائنٹ جو سب گراف شناخت استعمال کرتا ہے: @@ -20,8 +20,8 @@ When queries are made using a subgraph's Deployment ID, we are specifying a vers ## سب گراف شناخت -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph.
It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/ur/subgraphs/quick-start.mdx b/website/src/pages/ur/subgraphs/quick-start.mdx index b9eba5966d71..280a19a8a3d9 100644 --- a/website/src/pages/ur/subgraphs/quick-start.mdx +++ b/website/src/pages/ur/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: فورا شروع کریں --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". 
It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. گراف CLI انسٹال کریں @@ -37,13 +37,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> آپ اپنے مخصوص سب گراف کے لیے سب گراف کے پیج پر [سب گراف سٹوڈیو](https://thegraph.com/studio/) میں کمانڈز تلاش کر سکتے ہیں. +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. 
+- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -اپنے سب گراف کو شروع کرتے وقت کیا توقع کی جائے اس کی مثال کے لیے درج ذیل اسکرین شاٹ دیکھیں: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. 
-ایک بار آپ کا سب گراف لکھا جائے، درج ذیل کمانڈز رن کریں: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![سب گراف لاگز](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. 
Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3.
A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! 
-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/ur/substreams/developing/dev-container.mdx b/website/src/pages/ur/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/ur/substreams/developing/dev-container.mdx +++ b/website/src/pages/ur/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. 
For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/ur/substreams/developing/sinks.mdx b/website/src/pages/ur/substreams/developing/sinks.mdx index d0ed6202bebf..dd0135dc1802 100644 --- a/website/src/pages/ur/substreams/developing/sinks.mdx +++ b/website/src/pages/ur/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. 
## Sinks diff --git a/website/src/pages/ur/substreams/developing/solana/account-changes.mdx b/website/src/pages/ur/substreams/developing/solana/account-changes.mdx index db9bc7009fb4..fa0239d92297 100644 --- a/website/src/pages/ur/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/ur/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/ur/substreams/developing/solana/transactions.mdx b/website/src/pages/ur/substreams/developing/solana/transactions.mdx index e26afa17c1e3..49ac17e21955 100644 --- a/website/src/pages/ur/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ur/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### سب گراف 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/ur/substreams/introduction.mdx b/website/src/pages/ur/substreams/introduction.mdx index 9b4e267cb162..3acb1f2b83c1 100644 --- a/website/src/pages/ur/substreams/introduction.mdx +++ b/website/src/pages/ur/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing speed with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/ur/substreams/publishing.mdx b/website/src/pages/ur/substreams/publishing.mdx index 56510e785365..4ce52f954ff3 100644 --- a/website/src/pages/ur/substreams/publishing.mdx +++ b/website/src/pages/ur/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry.
![success](/img/5_success.png) diff --git a/website/src/pages/ur/supported-networks.mdx b/website/src/pages/ur/supported-networks.mdx index fb2e141372fe..709ff2193c44 100644 --- a/website/src/pages/ur/supported-networks.mdx +++ b/website/src/pages/ur/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. 
Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/ur/token-api/_meta-titles.json b/website/src/pages/ur/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/ur/token-api/_meta-titles.json +++ b/website/src/pages/ur/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/ur/token-api/_meta.js b/website/src/pages/ur/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/ur/token-api/_meta.js +++ b/website/src/pages/ur/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/ur/token-api/faq.mdx b/website/src/pages/ur/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/ur/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. 
JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. 
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error.
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/ur/token-api/mcp/claude.mdx b/website/src/pages/ur/token-api/mcp/claude.mdx index 0da8f2be031d..8289b2947386 100644 --- a/website/src/pages/ur/token-api/mcp/claude.mdx +++ b/website/src/pages/ur/token-api/mcp/claude.mdx @@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop ![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) -## Configuration +## کنفیگریشن Create or edit your `claude_desktop_config.json` file. @@ -25,11 +25,11 @@
```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/ur/token-api/mcp/cline.mdx b/website/src/pages/ur/token-api/mcp/cline.mdx index ab54c0c8f6f0..2711235c8985 100644 --- a/website/src/pages/ur/token-api/mcp/cline.mdx +++ b/website/src/pages/ur/token-api/mcp/cline.mdx @@ -10,9 +10,9 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. - The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## کنفیگریشن Create or edit your `cline_mcp_settings.json` file. diff --git a/website/src/pages/ur/token-api/mcp/cursor.mdx b/website/src/pages/ur/token-api/mcp/cursor.mdx index 658108d1337b..fdab852890ac 100644 --- a/website/src/pages/ur/token-api/mcp/cursor.mdx +++ b/website/src/pages/ur/token-api/mcp/cursor.mdx @@ -12,7 +12,7 @@ sidebarTitle: Cursor ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## کنفیگریشن Create or edit your `~/.cursor/mcp.json` file. 
diff --git a/website/src/pages/ur/token-api/quick-start.mdx b/website/src/pages/ur/token-api/quick-start.mdx index 4653c3d41ac6..0efd314e3281 100644 --- a/website/src/pages/ur/token-api/quick-start.mdx +++ b/website/src/pages/ur/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: فورا شروع کریں --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/vi/about.mdx b/website/src/pages/vi/about.mdx index 917e7817b3a7..dbcf77b348c9 100644 --- a/website/src/pages/vi/about.mdx +++ b/website/src/pages/vi/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Quy trình thực hiện theo các bước sau: 1. A dapp adds data to Ethereum through a transaction on a smart contract. 2. Hợp đồng thông minh phát ra một hoặc nhiều sự kiện trong khi xử lý giao dịch. -3.
Graph Node liên tục quét Ethereum để tìm các khối mới và dữ liệu cho subgraph của bạn mà chúng có thể chứa. -4. Graph Node tìm các sự kiện Ethereum cho subgraph của bạn trong các khối này và chạy các trình xử lý ánh xạ mà bạn đã cung cấp. Ánh xạ là một mô-đun WASM tạo hoặc cập nhật các thực thể dữ liệu mà Graph Node lưu trữ để đáp ứng với các sự kiện Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## Bước tiếp theo -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. 
diff --git a/website/src/pages/vi/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/vi/archived/arbitrum/arbitrum-faq.mdx index 562824e64e95..d121f5a2d0f3 100644 --- a/website/src/pages/vi/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/vi/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -39,7 +39,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? 
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/vi/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/vi/archived/arbitrum/l2-transfer-tools-faq.mdx index cbc7b6346f33..2fec5116ce58 100644 --- a/website/src/pages/vi/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/vi/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con The L2 Transfer Tools use Arbitrum’s native mechanism to send messages from L1 to L2. This mechanism is called a “retryable ticket” and is used by all native token bridges, including the Arbitrum GRT bridge. You can read more about retryable tickets in the [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -When you transfer your assets (subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. 
The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. 
If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there help you. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## Subgraph Transfer -### How do I transfer my subgraph? +### How do I transfer my Subgraph? -To transfer your subgraph, you will need to complete the following steps: +To transfer your Subgraph, you will need to complete the following steps: 1. Initiate the transfer on Ethereum mainnet 2. Wait 20 minutes for confirmation -3. Confirm subgraph transfer on Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Finish publishing subgraph on Arbitrum +4. Finish publishing Subgraph on Arbitrum 5.
Update Query URL (recommended) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Where should I initiate my transfer from? -You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any subgraph details page. Click the "Transfer Subgraph" button in the subgraph details page to start the transfer. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### How long do I need to wait until my subgraph is transferred +### How long do I need to wait until my Subgraph is transferred The transfer time takes approximately 20 minutes. The Arbitrum bridge is working in the background to complete the bridge transfer automatically. In some cases, gas costs may spike and you will need to confirm the transaction again. -### Will my subgraph still be discoverable after I transfer it to L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Your subgraph will only be discoverable on the network it is published to. 
For example, if your subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 subgraph will appear as deprecated. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Does my subgraph need to be published to transfer it? +### Does my Subgraph need to be published to transfer it? -To take advantage of the subgraph transfer tool, your subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the subgraph. If your subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. 
-### What happens to the Ethereum mainnet version of my subgraph after I transfer to Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -After transferring your subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### After I transfer, do I also need to re-publish on Arbitrum? @@ -80,21 +80,21 @@ After the 20 minute transfer window, you will need to confirm the transfer with ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Is publishing and versioning the same on L2 as Ethereum Ethereum mainnet? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Will my subgraph's curation move with my subgraph? +### Will my Subgraph's curation move with my Subgraph? 
-If you've chosen auto-migrating signal, 100% of your own curation will move with your subgraph to Arbitrum One. All of the subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 subgraph. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Can I move my subgraph back to Ethereum mainnet after I transfer? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Once transferred, your Ethereum mainnet version of this subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Why do I need bridged ETH to complete my transfer? @@ -206,19 +206,19 @@ To transfer your curation, you will need to complete the following steps: \*If necessary - i.e. you are using a contract address. -### How will I know if the subgraph I curated has moved to L2? 
+### How will I know if the Subgraph I curated has moved to L2? -When viewing the subgraph details page, a banner will notify you that this subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the subgraph details page of any subgraph that has moved. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### What if I do not wish to move my curation to L2? -When a subgraph is deprecated you have the option to withdraw your signal. Similarly, if a subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### How do I know my curation successfully transferred? Signal details will be accessible via Explorer approximately 20 minutes after the L2 transfer tool is initiated. -### Can I transfer my curation on more than one subgraph at a time? +### Can I transfer my curation on more than one Subgraph at a time? There is no bulk transfer option at this time. @@ -266,7 +266,7 @@ It will take approximately 20 minutes for the L2 transfer tool to complete trans ### Do I have to index on Arbitrum before I transfer my stake? -You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to subgraphs on L2, index them, and present POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. 
### Can Delegators move their delegation before I move my indexing stake? diff --git a/website/src/pages/vi/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/vi/archived/arbitrum/l2-transfer-tools-guide.mdx index 78ec8c82a911..e0b5aa2214fa 100644 --- a/website/src/pages/vi/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/vi/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph has made it easy to move to L2 on Arbitrum One. For each protocol part Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## How to transfer your subgraph to Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefits of transferring your subgraphs +## Benefits of transferring your Subgraphs The Graph's community and core devs have [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security from Ethereum but provides drastically lower gas fees. -When you publish or upgrade your subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your subgraphs to Arbitrum, any future updates to your subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your subgraph, increasing the rewards for Indexers on your subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. 
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Understanding what happens with signal, your L1 subgraph and query URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -When you choose to transfer the subgraph, this will convert all of the subgraph's curation signal to GRT. This is equivalent to "deprecating" the subgraph on mainnet. 
The GRT corresponding to your curation will be sent to L2 together with the subgraph, where they will be used to mint signal on your behalf. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. If a subgraph owner does not transfer their subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -As soon as the subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the subgraph. However, there will be Indexers that will 1) keep serving transferred subgraphs for 24 hours, and 2) immediately start indexing the subgraph on L2. Since these Indexers already have the subgraph indexed, there should be no need to wait for the subgraph to sync, and it will be possible to query the L2 subgraph almost immediately. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. 
Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries to the L2 subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Chọn ví L2 của bạn -When you published your subgraph on mainnet, you used a connected wallet to create the subgraph, and this wallet owns the NFT that represents this subgraph and allows you to publish updates. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -When transferring the subgraph to Arbitrum, you can choose a different wallet that will own this subgraph NFT on L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Nếu bạn đang sử dụng ví "thông thường" như MetaMask (Tài khoản thuộc sở hữu bên ngoài hoặc EOA, tức là ví không phải là hợp đồng thông minh), thì đây là tùy chọn và bạn nên giữ cùng địa chỉ chủ sở hữu như trong L1. -If you're using a smart contract wallet, like a multisig (e.g. 
a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the subgraph will be lost and cannot be recovered.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparing for the transfer: bridging some ETH -Transferring the subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. 
If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (e.g. 0.01 ETH) for your transaction to be approved. 
-## Finding the subgraph Transfer Tool +## Finding the Subgraph Transfer Tool -You can find the L2 Transfer Tool when you're looking at your subgraph's page on Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![công cụ chuyển](/img/L2-transfer-tool1.png) -It is also available on Explorer if you're connected with the wallet that owns a subgraph and on that subgraph's page on Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Chuyển sang L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Nhấp vào nút Chuyển sang L2 sẽ mở công cụ chuyển nơi bạn có t ## Step 1: Starting the transfer -Before starting the transfer, you must decide which address will own the subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Also please note transferring the subgraph requires having a nonzero amount of signal on the subgraph with the same account that owns the subgraph; if you haven't signaled on the subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). 
-After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 subgraph (see "Understanding what happens with signal, your L1 subgraph and query URLs" above for more details on what goes on behind the scenes). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. 
![Start the transfer to L2](/img/startTransferL2.png) -## Step 2: Waiting for the subgraph to get to L2 +## Step 2: Waiting for the Subgraph to get to L2 -After you start the transfer, the message that sends your L1 subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. @@ -80,7 +80,7 @@ Once this wait time is over, Arbitrum will attempt to auto-execute the transfer ## Step 3: Confirming the transfer -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your subgraph to L2 will be pending and require a retry within 7 days. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. 
@@ -88,33 +88,33 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Step 4: Finishing the transfer on L2 -At this point, your subgraph and GRT have been received on Arbitrum, but the subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -This will publish the subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Bước 5: Cập nhật URL truy vấn -Your subgraph has been successfully transferred to Arbitrum! To query the subgraph, the new URL will be : +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note that the subgraph ID on Arbitrum will be a different than the one you had on mainnet, but you can always find it on Explorer or Studio. 
As mentioned above (see "Understanding what happens with signal, your L1 subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the subgraph has been synced on L2. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## How to transfer your curation to Arbitrum (L2) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph. 
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Chọn ví L2 của bạn @@ -130,9 +130,9 @@ Nếu bạn đang sử dụng ví hợp đồng thông minh, chẳng hạn như Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. 
You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Withdrawing your curation on L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
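The query-URL change described in the transfer guide above amounts to swapping in the Arbitrum gateway host and the new L2 Subgraph ID. A minimal sketch, assuming the URL template quoted in the guide; `buildL2QueryUrl` and the placeholder key/ID values are illustrative helpers for this example, not part of any Graph SDK:

```javascript
// Sketch: construct the post-transfer query URL from the template in the guide.
// The API key and Subgraph ID below are hypothetical placeholders.
function buildL2QueryUrl(apiKey, l2SubgraphId) {
  return `https://arbitrum-gateway.thegraph.com/api/${apiKey}/subgraphs/id/${l2SubgraphId}`;
}

const url = buildL2QueryUrl("my-api-key", "QmExampleL2SubgraphId");
console.log(url);
```

You would then POST a GraphQL query to this URL with any HTTP client (e.g. `fetch(url, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ query }) })`), exactly as you did against the L1 endpoint.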
diff --git a/website/src/pages/vi/archived/sunrise.mdx b/website/src/pages/vi/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/vi/archived/sunrise.mdx +++ b/website/src/pages/vi/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? 
+### Why were Subgraphs published to Arbitrum? Did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). 
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. 
As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
diff --git a/website/src/pages/vi/global.json b/website/src/pages/vi/global.json index f0bd80d9715b..ff0ee29303ff 100644 --- a/website/src/pages/vi/global.json +++ b/website/src/pages/vi/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Miêu tả", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Miêu tả", + "liveResponse": "Live Response", + "example": "Ví dụ" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/vi/index.json b/website/src/pages/vi/index.json index 34303f6b8cc3..2793483b60d9 100644 --- a/website/src/pages/vi/index.json +++ b/website/src/pages/vi/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -39,12 +39,12 @@ "title": "Mạng lưới được hỗ trợ", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "Loại", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "Tài liệu tham khảo", "shortName": "Short Name", "guides": "Guides", "search": "Search networks", @@ -156,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." 
+ "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/vi/indexing/chain-integration-overview.mdx b/website/src/pages/vi/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/vi/indexing/chain-integration-overview.mdx +++ b/website/src/pages/vi/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/vi/indexing/new-chain-integration.mdx b/website/src/pages/vi/indexing/new-chain-integration.mdx index e45c4b411010..670e06c752c3 100644 --- a/website/src/pages/vi/indexing/new-chain-integration.mdx +++ b/website/src/pages/vi/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. 
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. 
(It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools make it possible to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). 
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/vi/indexing/overview.mdx b/website/src/pages/vi/indexing/overview.mdx index e09a783ede2a..034b74623eaf 100644 --- a/website/src/pages/vi/indexing/overview.mdx +++ b/website/src/pages/vi/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network. -Indexer chọn các subgraph để index dựa trên tín hiệu curation của subgraph, trong đó Curator stake GRT để chỉ ra subgraph nào có chất lượng cao và cần được ưu tiên. Bên tiêu dùng (ví dụ: ứng dụng) cũng có thể đặt các tham số (parameter) mà Indexer xử lý các truy vấn cho các subgraph của họ và đặt các tùy chọn cho việc định giá phí truy vấn. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. 
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. 
A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. 
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. 
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] <deployment-id>` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] <deployment-id>` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] <deployment-id>` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. 
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
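As a sketch of how such a threshold rule might be set through the Indexer CLI commands listed earlier (the exact flags should be checked against the indexer repository documentation, and the deployment ID below is a placeholder):

```
# Hypothetical example: index any deployment with more than 5 GRT of allocated stake
graph indexer rules set global minStake 5

# Hypothetical example: always index one specific deployment, regardless of thresholds
graph indexer rules set Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa decisionBasis always
```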
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/vi/indexing/supported-network-requirements.mdx b/website/src/pages/vi/indexing/supported-network-requirements.mdx index 50cd5e88b459..8d11d83d4d40 100644 --- a/website/src/pages/vi/indexing/supported-network-requirements.mdx +++ b/website/src/pages/vi/indexing/supported-network-requirements.mdx @@ -6,7 +6,7 @@ title: Supported Network Requirements | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | | Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | | Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | diff --git a/website/src/pages/vi/indexing/tap.mdx b/website/src/pages/vi/indexing/tap.mdx index eccf6efc1d41..a754e67819a3 100644 --- a/website/src/pages/vi/indexing/tap.mdx +++ b/website/src/pages/vi/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Tổng quan -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can query it via The Graph Network or host it yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/vi/indexing/tooling/graph-node.mdx b/website/src/pages/vi/indexing/tooling/graph-node.mdx index 0250f14a3d08..edde8a157fd3 100644 --- a/website/src/pages/vi/indexing/tooling/graph-node.mdx +++ b/website/src/pages/vi/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically, Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
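Since the trace module requirement above is easy to miss, one way to check a provider is to send it a `trace_filter` call directly. A minimal JSON-RPC request body might look like the following (the block range is an arbitrary example):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "trace_filter",
  "params": [{ "fromBlock": "0x12A05F0", "toBlock": "0x12A05F1" }]
}
```

A provider without the trace module will typically respond with a 'method not found' error rather than a list of traces.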
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -79,8 +79,8 @@ When it is running Graph Node exposes the following ports: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | | 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | | 8040 | Prometheus metrics | /metrics | \--metrics-port | - | @@ -89,7 +89,7 @@ When it is running Graph Node exposes the following ports: ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. 
When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). 
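Such a multi-provider chain section in `config.toml` might be sketched roughly as follows (node names, labels, and URLs are placeholders; see the Graph Node configuration docs for the full set of options):

```toml
[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "vip"
provider = [
  # Archive node with trace support, preferred for workloads that need it
  { label = "mainnet-archive", url = "http://127.0.0.1:8545", features = ["archive", "traces"] },
  # Cheaper full node for everything else
  { label = "mainnet-full", url = "http://127.0.0.1:8546", features = [] },
]
```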
@@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. 
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
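When diagnosing slowness, the indexing status API on port 8030/graphql can be queried for a quick health check; the field names here follow the schema linked above:

```graphql
{
  indexingStatuses {
    subgraph
    health
    synced
    fatalError { message }
    chains {
      chainHeadBlock { number }
      latestBlock { number }
    }
  }
}
```

Comparing `latestBlock` with `chainHeadBlock` shows how far behind the chain head a given Subgraph is.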
-#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. 
If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
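Taken together, the `graphman` operations discussed in this section might be used as follows. This is a sketch only: the `config.toml` path, the `sgd123.pair` table, and the deployment hash are placeholders, and exact flags may vary by Graph Node version.

```bash
# Enable the account-like optimization for a table (placeholder schema/table)
graphman --config config.toml stats account-like sgd123.pair

# Disable it again if queries for that table become slower
graphman --config config.toml stats account-like --clear sgd123.pair

# Remove a deployment and all its indexed data (placeholder IPFS hash)
graphman --config config.toml drop QmPlaceholderDeploymentHash
```

Remember that query nodes can take up to 5 minutes to pick up an account-like change, and that `drop` is irreversible.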
diff --git a/website/src/pages/vi/indexing/tooling/graphcast.mdx b/website/src/pages/vi/indexing/tooling/graphcast.mdx index 2c523a014098..2b541a818654 100644 --- a/website/src/pages/vi/indexing/tooling/graphcast.mdx +++ b/website/src/pages/vi/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Learn More diff --git a/website/src/pages/vi/resources/benefits.mdx b/website/src/pages/vi/resources/benefits.mdx index fa0a84626503..6e22015b8fdb 100644 --- a/website/src/pages/vi/resources/benefits.mdx +++ b/website/src/pages/vi/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/vi/resources/glossary.mdx b/website/src/pages/vi/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/vi/resources/glossary.mdx +++ b/website/src/pages/vi/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. 
The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. 
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
@@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. 
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/vi/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/vi/resources/migration-guides/assemblyscript-migration-guide.mdx index 20f0fcfaf8e8..dbaa3b162345 100644 --- a/website/src/pages/vi/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/vi/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: Hướng dẫn Di chuyển AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Điều đó sẽ cho phép các nhà phát triển subgrap sử dụng các tính năng mới hơn của ngôn ngữ AS và thư viện chuẩn. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Các đặc điểm @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## Làm thế nào để nâng cấp? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -Nếu bạn không chắc nên chọn cái nào, chúng tôi khuyên bạn nên luôn sử dụng phiên bản an toàn. Nếu giá trị không tồn tại, bạn có thể chỉ muốn thực hiện câu lệnh if sớm với trả về trong trình xử lý subgraph của bạn. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Variable Shadowing (Che khuất Biến) @@ -132,7 +132,7 @@ Bạn sẽ cần đổi tên các biến trùng lặp của mình nếu bạn c ### So sánh Null -Bằng cách thực hiện nâng cấp trên subgraph của bạn, đôi khi bạn có thể gặp các lỗi như sau: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -Chúng tôi đã giải quyết vấn đề trên trình biên dịch AssemblyScript cho vấn đề này, nhưng hiện tại nếu bạn thực hiện các loại hoạt động này trong ánh xạ subgraph của mình, bạn nên thay đổi chúng để thực hiện kiểm tra rỗng trước nó.
+We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check beforehand. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Nó sẽ biên dịch nhưng bị hỏng trong thời gian chạy, điều đó xảy ra vì giá trị chưa được khởi tạo, vì vậy hãy đảm bảo rằng subgraph của bạn đã khởi tạo các giá trị của chúng, như sau: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/vi/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/vi/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/vi/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/vi/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. -> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
+> Not all Subgraphs will need to be migrated: if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Migration CLI tool diff --git a/website/src/pages/vi/resources/roles/curating.mdx b/website/src/pages/vi/resources/roles/curating.mdx index e1633707faf3..06aa7b62b93f 100644 --- a/website/src/pages/vi/resources/roles/curating.mdx +++ b/website/src/pages/vi/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index.
When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). 
Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Làm thế nào để phát tín hiệu -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Curator có thể chọn phát tín hiệu trên một phiên bản subgraph cụ thể hoặc họ có thể chọn để tín hiệu của họ tự động chuyển sang bản dựng sản xuất mới nhất của subgraph đó. Cả hai đều là những chiến lược hợp lệ và đi kèm với những ưu và nhược điểm của riêng chúng. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. 
Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
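The fee schedule described above — a 1% curation tax on each signal and a 0.5% tax on each auto-migrated share — can be sketched as simple arithmetic. The GRT amounts below are hypothetical; only the two rates come from this documentation:

```python
# Hypothetical sketch of the curation taxes described in the docs:
# 1% tax burned on each curation, 0.5% tax on each auto-migration.
CURATION_TAX = 0.01
MIGRATION_TAX = 0.005

def signal(grt: float) -> float:
    """GRT actually signaled after the 1% curation tax is burned."""
    return grt * (1 - CURATION_TAX)

def auto_migrate(signaled_grt: float) -> float:
    """Signal remaining after one 0.5% auto-migration tax."""
    return signaled_grt * (1 - MIGRATION_TAX)

remaining = signal(10_000)           # ~9,900 GRT signaled; ~100 GRT burned
remaining = auto_migrate(remaining)  # ~9,850.5 GRT after one version migration
print(remaining)
```

Frequent version publishes compound the 0.5% migration tax, which is why developers are discouraged from publishing too often.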
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Những rủi ro 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Một subgraph có thể thất bại do một lỗi. Một subgraph thất bại không tích lũy phí truy vấn. Do đó, bạn sẽ phải đợi cho đến khi nhà phát triển sửa lỗi và triển khai phiên bản mới. - - Nếu bạn đã đăng ký phiên bản mới nhất của một subgraph, các cổ phần của bạn sẽ tự động chuyển sang phiên bản mới đó. Điều này sẽ phát sinh một khoản thuế curation 0.5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. 
Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Câu hỏi thường gặp về Curation ### 1. Curator kiếm được bao nhiêu % phí truy vấn? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Làm cách nào để tôi quyết định xem các subgraph nào có chất lượng cao để báo hiệu? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. 
A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. 
+Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Tôi có thể bán cổ phần curation của mình không? diff --git a/website/src/pages/vi/resources/subgraph-studio-faq.mdx b/website/src/pages/vi/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/vi/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/vi/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
+Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries like any other on the network.
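As a rough sketch of the flow just described — taking a query URL from Graph Explorer and substituting your API key — a query could be sent as below. The gateway URL format, API key, and Subgraph ID are placeholders/assumptions here, and `_meta` is used purely as an example field:

```python
import json
import urllib.request

# Hypothetical values -- substitute your own API key and Subgraph ID.
API_KEY = "YOUR_API_KEY"
SUBGRAPH_ID = "YOUR_SUBGRAPH_ID"

# Gateway-style query URL as shown in Graph Explorer (format assumed here).
url = f"https://gateway.thegraph.com/api/{API_KEY}/subgraphs/id/{SUBGRAPH_ID}"

# Any valid GraphQL query against the Subgraph's schema works; this example
# asks for the latest indexed block number.
payload = json.dumps({"query": "{ _meta { block { number } } }"}).encode()

request = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(request)  # sends the (paid) query
```

Each such request is a paid query, metered against the API key's billing balance.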
diff --git a/website/src/pages/vi/resources/tokenomics.mdx b/website/src/pages/vi/resources/tokenomics.mdx index 4b1d2516879a..b7e29f27647b 100644 --- a/website/src/pages/vi/resources/tokenomics.mdx +++ b/website/src/pages/vi/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Tổng quan -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Backbone of blockchain data @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. 
This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. 
These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. 
This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/vi/sps/introduction.mdx b/website/src/pages/vi/sps/introduction.mdx index 421b05c245a2..bd0bb34b8342 100644 --- a/website/src/pages/vi/sps/introduction.mdx +++ b/website/src/pages/vi/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Giới thiệu --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Tổng quan -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/vi/sps/sps-faq.mdx b/website/src/pages/vi/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/vi/sps/sps-faq.mdx +++ b/website/src/pages/vi/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs?
+## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. 
-## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. 
A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules, linking them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/vi/sps/triggers.mdx b/website/src/pages/vi/sps/triggers.mdx index ce6d650c35b9..41b53829a5e7 100644 --- a/website/src/pages/vi/sps/triggers.mdx +++ b/website/src/pages/vi/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL.
## Tổng quan -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). 
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Additional Resources diff --git a/website/src/pages/vi/sps/tutorial.mdx b/website/src/pages/vi/sps/tutorial.mdx index abba70ec412a..05036a1b24ae 100644 --- a/website/src/pages/vi/sps/tutorial.mdx +++ b/website/src/pages/vi/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Bắt đầu @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/vi/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/vi/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/vi/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/vi/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing, as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events.
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal, as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/vi/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/vi/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/vi/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/vi/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
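As a sketch of the reverse lookup the diff above describes — assuming the `Post`/`Comment` schema with `comments: [Comment!]! @derivedFrom(field: "post")`, and using The Graph's auto-generated plural collection field (`posts` here is illustrative):

```graphql
{
  posts(first: 5) {
    id
    # `comments` is resolved at query time via @derivedFrom;
    # no array is actually stored on the Post entity
    comments {
      id
    }
  }
}
```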
diff --git a/website/src/pages/vi/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/vi/subgraphs/best-practices/grafting-hotfix.mdx index 6d941bcf9432..79ac8203aaef 100644 --- a/website/src/pages/vi/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/vi/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Tổng quan -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.

 ## Additional Resources

 - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting
 - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID.

-By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
+By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.

 ## Subgraph Best Practices 1-6
diff --git a/website/src/pages/vi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/vi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
index 6ff60ec9ab34..3a633244e0f2 100644
--- a/website/src/pages/vi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
+++ b/website/src/pages/vi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
@@ -1,6 +1,6 @@
 ---
 title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs
-sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs'
+sidebarTitle: Immutable Entities and Bytes as IDs
 ---

 ## TLDR
@@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend

 ### Reasons to Not Use Bytes as IDs

 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used.
-2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
+2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
 3. Indexing and querying performance improvements are not desired.

 ### Concatenating With Bytes as IDs

-It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance.
+It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance.

 Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant.
@@ -172,7 +172,7 @@ Query Response:

 ## Conclusion

-Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.
+Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.

 Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/).
diff --git a/website/src/pages/vi/subgraphs/best-practices/pruning.mdx b/website/src/pages/vi/subgraphs/best-practices/pruning.mdx
index 1b51dde8894f..2d4f9ad803e0 100644
--- a/website/src/pages/vi/subgraphs/best-practices/pruning.mdx
+++ b/website/src/pages/vi/subgraphs/best-practices/pruning.mdx
@@ -1,11 +1,11 @@
 ---
 title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning
-sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints'
+sidebarTitle: Pruning with indexerHints
 ---

 ## TLDR

-[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph.
+[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph.

 ## How to Prune a Subgraph With `indexerHints`

@@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest.

 `indexerHints` has three `prune` options:

-- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0.
+- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0.
 - `prune: `: Sets a custom limit on the number of historical blocks to retain.
 - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.

 `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired.

-We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`:
+We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`:

 ```yaml
-specVersion: 1.0.0
+specVersion: 1.3.0
 schema:
   file: ./schema.graphql
 indexerHints:
@@ -39,7 +39,7 @@ dataSources:

 ## Conclusion

-Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements.
+Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements.

 ## Subgraph Best Practices 1-6
diff --git a/website/src/pages/vi/subgraphs/best-practices/timeseries.mdx b/website/src/pages/vi/subgraphs/best-practices/timeseries.mdx
index f1b15a258169..11c1440ee00f 100644
--- a/website/src/pages/vi/subgraphs/best-practices/timeseries.mdx
+++ b/website/src/pages/vi/subgraphs/best-practices/timeseries.mdx
@@ -1,11 +1,11 @@
 ---
 title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations
-sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations'
+sidebarTitle: Timeseries and Aggregations
 ---

 ## TLDR

-Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance.
+Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance.

 ## Tổng quan

@@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri

 ## How to Implement Timeseries and Aggregations

+### Prerequisites
+
+You need `spec version 1.1.0` for this feature.
+
 ### Defining Timeseries Entities

 A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation.

 Key requirements:

@@ -51,7 +55,7 @@ Example:
 ```graphql
 type Data @entity(timeseries: true) {
   id: Int8!
   timestamp: Timestamp!
-  price: BigDecimal!
+  amount: BigDecimal!
 }
 ```

@@ -68,11 +72,11 @@ Example:
 ```graphql
 type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
   id: Int8!
   timestamp: Timestamp!
-  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+  sum: BigDecimal! @aggregate(fn: "sum", arg: "amount")
 }
 ```

-In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum.
+In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum.

 ### Querying Aggregated Data

@@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar

 ### Conclusion

-Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach:

 - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
 - Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
 - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.

-By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs.

 ## Subgraph Best Practices 1-6
diff --git a/website/src/pages/vi/subgraphs/billing.mdx b/website/src/pages/vi/subgraphs/billing.mdx
index c9f380bb022c..ec654ca63f55 100644
--- a/website/src/pages/vi/subgraphs/billing.mdx
+++ b/website/src/pages/vi/subgraphs/billing.mdx
@@ -4,12 +4,14 @@ title: Billing

 ## Querying Plans

-There are two plans to use when querying subgraphs on The Graph Network.
+There are two plans to use when querying Subgraphs on The Graph Network.

 - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp.
 - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases.

+Learn more about pricing [here](https://thegraph.com/studio-pricing/).
+
 ## Query Payments with credit card

diff --git a/website/src/pages/vi/subgraphs/developing/creating/advanced.mdx b/website/src/pages/vi/subgraphs/developing/creating/advanced.mdx
index 82d7dd120a70..15e48ee9a6a8 100644
--- a/website/src/pages/vi/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/vi/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features

 ## Tổng quan

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.

-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:

 | Feature                                              | Name             |
 | ---------------------------------------------------- | ---------------- |
@@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar
 | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
 | [Grafting](#grafting-onto-existing-subgraphs)        | `grafting`       |

-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:

 ```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
 description: Gravatar for Ethereum
 features:
   - fullTextSearch
@@ -25,7 +25,7 @@ features:
 dataSources: ...
 ```

-> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used.

 ## Timeseries and Aggregations

@@ -33,9 +33,9 @@ Prerequisites:

 - Subgraph specVersion must be ≥1.1.0.

-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more.
+Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more.

-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
+This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.

 ### Example Schema

@@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified

 ## Lỗi không nghiêm trọng

-Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.
+Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.

-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio.

-Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest:
+Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest:

 ```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
 description: Gravatar for Ethereum
 features:
   - nonFatalErrors
 ...
 ```

-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example:

 ```graphql
 foos(first: 100, subgraphError: allow) {
@@ -123,7 +123,7 @@ _meta {
 }
 ```

-If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:
+If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:

 ```graphql
 "data": {
@@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a

 ## IPFS/Arweave File Data Sources

-File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.

 > This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data.

@@ -221,7 +221,7 @@ templates:
   - name: TokenMetadata
     kind: file/ipfs
     mapping:
-      apiVersion: 0.0.7
+      apiVersion: 0.0.9
       language: wasm/assemblyscript
       file: ./src/mapping.ts
      handler: handleMetadata
@@ -290,7 +290,7 @@ Example:
 import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

 const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
-//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
+//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.

 export function handleTransfer(event: TransferEvent): void {
   let token = Token.load(event.params.tokenId.toString())
@@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured

 This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity.

-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file
+> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file

 Congratulations, you are using file data sources!

-#### Deploying your subgraphs
+#### Deploying your Subgraphs

-You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.
+You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0.

 #### Limitations

-File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
+File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:

 - Entities created by File Data Sources are immutable, and cannot be updated
 - File Data Source handlers cannot access entities from other file data sources
 - Entities associated with File Data Sources cannot be accessed by chain-based handlers

-> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph!
+> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph!

 Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future.

@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra

 > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`

-Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
+Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.

-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data.

-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.

 ### How Topic Filters Work

-When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.
+When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments.

 - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.

@@ -401,7 +401,7 @@ In this example:

 #### Configuration in Subgraphs

-Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:
+Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured:

 ```yaml
 eventHandlers:
@@ -436,7 +436,7 @@ In this configuration:

 - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
 - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.

 #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses

@@ -452,17 +452,17 @@ In this configuration:

 - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
 - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.
-- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.

 ## Declared eth_call

 > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node.

-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.

 This feature does the following:

-- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.
 - Allows faster data fetching, resulting in quicker query responses and a better user experience.
 - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
@@ -474,7 +474,7 @@ This feature does the following:

 #### Scenario without Declarative `eth_calls`

-Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
+Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.

 Traditionally, these calls might be made sequentially:

@@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds

 #### How it Works

-1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
+1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
-3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.
+3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing.

 #### Example Configuration in Subgraph Manifest

 Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.

-`Subgraph.yaml` using `event.address`:
+`subgraph.yaml` using `event.address`:

 ```yaml
 eventHandlers:
@@ -524,7 +524,7 @@ Details for the example above:

 - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`
 - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.

-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`

 ```yaml
 calls:
@@ -535,22 +535,22 @@ calls:

 > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).

-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed.

-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:

 ```yaml
 description: ...
 graft:
-  base: Qm... # Subgraph ID of base subgraph
+  base: Qm... # Subgraph ID of base Subgraph
   block: 7345624 # Block number
 ```

-When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph.

-Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.

-The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways:
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:

 - It adds or removes entity types
 - It removes attributes from entity types
@@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o
 - It adds or removes interfaces
 - Nó thay đổi đối với loại thực thể nào mà một giao diện được triển khai

-> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest.
+> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest.
diff --git a/website/src/pages/vi/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/vi/subgraphs/developing/creating/assemblyscript-mappings.mdx
index 8a1d491a50fd..87b694b86828 100644
--- a/website/src/pages/vi/subgraphs/developing/creating/assemblyscript-mappings.mdx
+++ b/website/src/pages/vi/subgraphs/developing/creating/assemblyscript-mappings.mdx
@@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t

 For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.

-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:

 ```javascript
 import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
@@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil

 ## Tạo mã

-In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources.
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources.

 This is done with

@@ -80,7 +80,7 @@ This is done with
 graph codegen [--output-dir ] []
 ```

-but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:

 ```sh
 # Yarn
@@ -90,7 +90,7 @@ yarn codegen
 npm run codegen
 ```

-This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.

 ```javascript
 import {
@@ -102,12 +102,12 @@ import {
 } from '../generated/Gravity/Gravity'
 ```

-In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
+In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with

 ```javascript
 import { Gravatar } from '../generated/schema'
 ```

-> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph.

-Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
diff --git a/website/src/pages/vi/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/vi/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5d90888ac378..5f964d3cbb78 100644
--- a/website/src/pages/vi/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/vi/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
 # @graphprotocol/graph-ts

+## 0.38.0
+
+### Minor Changes
+
+- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings
+
 ## 0.37.0

 ### Minor Changes
diff --git a/website/src/pages/vi/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/vi/subgraphs/developing/creating/graph-ts/api.mdx
index 7fea4f954429..bb95b05932cc 100644
--- a/website/src/pages/vi/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/vi/subgraphs/developing/creating/graph-ts/api.mdx
@@ -2,12 +2,12 @@
 title: AssemblyScript API
 ---

-> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
+> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).

-Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box:
+Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box:

 - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`)
-- Code generated from subgraph files by `graph codegen`
+- Code generated from Subgraph files by `graph codegen`

 You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).

@@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs:

 ### Các phiên bản

-The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.

 | Phiên bản | Ghi chú phát hành |
 | :-: | --- |
@@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts'

 The `store` API allows to load, save and remove entities from and to the Graph Node store.

-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema.
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Tạo các thực thể @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Hỗ trợ các loại Ethereum -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. 
Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Quyền truy cập vào Trạng thái Hợp đồng Thông minh -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. 
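The contract-binding and reverted-call behavior described in this file can be sketched outside of graph-ts. This is plain TypeScript that only models the `reverted`/`value` shape of the `try_` call results the generated bindings return; `trySymbol` and its `GRT` return value are hypothetical, not the real library API:

```typescript
// Sketch of the `try_` call pattern, in plain TypeScript. The real
// generated bindings return an ethereum.CallResult; this stub only
// models the `reverted` / `value` contract for illustration.
class CallResult<T> {
  private constructor(
    public readonly reverted: boolean,
    private readonly _value: T | null,
  ) {}

  static fromValue<T>(value: T): CallResult<T> {
    return new CallResult<T>(false, value)
  }

  static asReverted<T>(): CallResult<T> {
    return new CallResult<T>(true, null)
  }

  get value(): T {
    // Accessing the value of a reverted call is a programming error.
    if (this.reverted) throw new Error('accessed value of reverted call')
    return this._value as T
  }
}

// A hypothetical bound contract whose read-only `symbol()` may revert.
function trySymbol(shouldRevert: boolean): CallResult<string> {
  return shouldRevert ? CallResult.asReverted<string>() : CallResult.fromValue('GRT')
}

// Handler-style usage: always check `reverted` before touching `value`.
function readSymbol(shouldRevert: boolean): string {
  const result = trySymbol(shouldRevert)
  return result.reverted ? 'unknown' : result.value
}
```

The point of the pattern is that a reverted `eth_call` becomes a value you inspect, rather than an exception that aborts the handler.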
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Xử lý các lệnh gọi được hoàn nguyên @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. 
The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
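The `{}` placeholder substitution described for the `log` API can be sketched in a few lines. This is an illustrative plain-TypeScript model of the documented behavior (first `{}` takes the first array value, and so on), not the actual graph-ts implementation:

```typescript
// Each `{}` in the format string is replaced by the next value in `args`.
// Extra placeholders beyond the supplied values are left as-is.
function formatLog(fmt: string, args: string[]): string {
  let i = 0
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : '{}'))
}

const msg = formatLog('Block {}: gravatar {} updated', ['42', '0xabc'])
```

Usage in a mapping would be `log.info('Block {}: gravatar {} updated', [...])`; the sketch only shows how the placeholders pair up with the argument array.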
diff --git a/website/src/pages/vi/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/vi/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..65e8e3d4a8a3 100644 --- a/website/src/pages/vi/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/vi/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
diff --git a/website/src/pages/vi/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/vi/subgraphs/developing/creating/install-the-cli.mdx index ab11aa3306cb..f9573b198f89 100644 --- a/website/src/pages/vi/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/vi/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Cài đặt Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Tổng quan -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Tạo một Subgraph ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is (Các) tệp ABI phải khớp với (các) hợp đồng của bạn. Có một số cách để lấy tệp ABI: - Nếu bạn đang xây dựng dự án của riêng mình, bạn có thể sẽ có quyền truy cập vào các ABI mới nhất của mình. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Phiên bản | Ghi chú phát hành | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/vi/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/vi/subgraphs/developing/creating/ql-schema.mdx index e0b62e6f5e8d..7e28150b4e4a 100644 --- a/website/src/pages/vi/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/vi/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Tổng quan -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
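The one-to-many advice above can be sketched with an in-memory model: only the 'many' side stores the relationship, and the 'one' side derives its list by lookup, which is what `@derivedFrom` expresses in the schema. This is plain TypeScript with made-up ids, not the real store API:

```typescript
// The 'many' side: each TokenBalance stores the id of its token.
interface TokenBalance {
  id: string
  token: string
  amount: number
}

const balances: TokenBalance[] = [
  { id: 'b1', token: 'DAI', amount: 100 },
  { id: 'b2', token: 'DAI', amount: 50 },
  { id: 'b3', token: 'GRT', amount: 7 },
]

// The 'one' side never stores an array of balances; the list is derived
// on demand, mirroring `balances: [TokenBalance!]! @derivedFrom(field: "token")`.
function derivedBalances(tokenId: string): TokenBalance[] {
  return balances.filter((b) => b.token === tokenId)
}
```

Because the array is never materialized on the token entity, adding a new balance touches one row instead of rewriting a growing list, which is where the indexing-performance win comes from.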
#### Ví dụ @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Các ngôn ngữ được hỗ trợ diff --git a/website/src/pages/vi/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/vi/subgraphs/developing/creating/starting-your-subgraph.mdx index f7427e79c81a..84a5ebbc5d34 100644 --- a/website/src/pages/vi/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/vi/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Tổng quan -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| Phiên bản | Ghi chú phát hành | +| :-: | --- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/vi/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/vi/subgraphs/developing/creating/subgraph-manifest.mdx index 01ca69dbcd4b..3edc8a28180f 100644 --- a/website/src/pages/vi/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/vi/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Tổng quan -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
Các mục nhập quan trọng cần cập nhật cho tệp kê khai là: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. 
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Trình xử lý lệnh gọi -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Xác định một Trình xử lý lệnh gọi @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Chức năng Ánh xạ -Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Trình xử lý Khối -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Bộ lọc được hỗ trợ @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
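The call filter discussed above can be sketched as follows; only the `filter` block distinguishes it from an unfiltered block handler (the handler name is illustrative):

```yaml
# Sketch of a block handler with a call filter: the handler runs only for
# blocks containing at least one call to the data source contract.
# Requires a network whose tracing API supports call filters.
blockHandlers:
  - handler: handleBlockWithCallToContract
    filter:
      kind: call
```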
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Chức năng Ánh xạ -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
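For the signature-and-topic matching described above, manifests at `specVersion` `1.2.0` or later can additionally filter on indexed event arguments (topics). A hedged sketch, where the event, handler name, and address are placeholders:

```yaml
# Hypothetical topic filter: the handler fires only for Transfer events
# whose first indexed argument (topic1) matches one of the listed values.
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleTransfer
    topic1:
      - '0x0000000000000000000000000000000000000001'
```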
Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Khối Bắt đầu -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Phiên bản | Ghi chú phát hành | +| :-: | --- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/vi/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/vi/subgraphs/developing/creating/unit-testing-framework.mdx index 10a1078a2eb5..720f612265cd 100644 --- a/website/src/pages/vi/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/vi/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 
👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/vi/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/vi/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/vi/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/vi/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
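As a sketch of the Mustache approach described above, the manifest source could live in a hypothetical `subgraph.template.yaml` whose placeholders are filled in from the per-network config files; the file name and placeholder names are illustrative assumptions:

```yaml
# subgraph.template.yaml (hypothetical file name); {{network}} and
# {{address}} are Mustache placeholders filled from the per-network
# JSON config files before deployment.
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: '{{network}}'
    source:
      address: '{{address}}'
      abi: Gravity
```

Rendering this template with the mainnet config produces a plain `subgraph.yaml` with the concrete network name and contract address substituted in.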
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever.
However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/vi/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/vi/subgraphs/developing/deploying/using-subgraph-studio.mdx index 98602d583746..8e89b2999d96 100644 --- a/website/src/pages/vi/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/vi/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Must not use any of the following features: - - ipfs.cat & ipfs.map - - Lỗi không nghiêm trọng - - Ghép +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
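The version label that `graph deploy` prompts for is conventionally a `vX.Y.Z` string. As an illustration only (this helper is hypothetical and not part of `graph-cli`), teams that script their deploys can bump such a label automatically:

```python
import re

def bump_version_label(label: str, part: str = "patch") -> str:
    """Bump a Studio-style version label such as 'v0.0.1' (hypothetical helper)."""
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", label)
    if not m:
        raise ValueError(f"not a vX.Y.Z label: {label!r}")
    major, minor, patch = map(int, m.groups())
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    else:  # default: patch bump
        patch += 1
    return f"v{major}.{minor}.{patch}"

print(bump_version_label("v0.0.1"))           # v0.0.2
print(bump_version_label("v0.3.4", "minor"))  # v0.4.0
```

The new label would then be passed to the CLI when deploying the next version to Studio.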
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/vi/subgraphs/developing/developer-faq.mdx b/website/src/pages/vi/subgraphs/developing/developer-faq.mdx index 867c704194ab..66fbae4a568e 100644 --- a/website/src/pages/vi/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/vi/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Bạn phải triển khai lại subgraph, nhưng nếu ID subgraph (mã băm IPFS) không thay đổi, nó sẽ không phải đồng bộ hóa từ đầu. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Trong một subgraph, các sự kiện luôn được xử lý theo thứ tự chúng xuất hiện trong các khối, bất kể điều đó có qua nhiều hợp đồng hay không. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? -Có! Hãy thử lệnh sau, thay thế "organization/subgraphName" bằng tổ chức dưới nó được xuất bản và tên của subgraph của bạn: +Yes! Try the following command, substituting "organization/subgraphName" with the organization it is published under and the name of your Subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/vi/subgraphs/developing/introduction.mdx b/website/src/pages/vi/subgraphs/developing/introduction.mdx index ea7cc276b1d2..7e1039b57a36 100644 --- a/website/src/pages/vi/subgraphs/developing/introduction.mdx +++ b/website/src/pages/vi/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1.
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
diff --git a/website/src/pages/vi/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/vi/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/vi/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/vi/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/vi/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/vi/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/vi/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/vi/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/vi/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/vi/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/vi/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/vi/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/vi/subgraphs/developing/subgraphs.mdx b/website/src/pages/vi/subgraphs/developing/subgraphs.mdx index 951ec74234d1..b5a75a88e94f 100644 --- a/website/src/pages/vi/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/vi/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/vi/subgraphs/explorer.mdx b/website/src/pages/vi/subgraphs/explorer.mdx index 8a962fb85217..2a5747208705 100644 --- a/website/src/pages/vi/subgraphs/explorer.mdx +++ b/website/src/pages/vi/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Trình khám phá Graph --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Tổng quan -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Báo hiệu / Hủy báo hiệu trên subgraph +- Signal/Un-signal on Subgraphs - Xem thêm chi tiết như biểu đồ, ID triển khai hiện tại và siêu dữ liệu khác -- Chuyển đổi giữa các phiên bản để khám phá các lần bản trước đây của subgraph -- Truy vấn subgraph qua GraphQL -- Thử subgraph trong playground -- Xem các Indexers đang lập chỉ mục trên một subgraph nhất định +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Thống kê Subgraph (phân bổ, Curators, v.v.) -- Xem pháp nhân đã xuất bản subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Tab Subgraphs -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Tab Indexing -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Phần này cũng sẽ bao gồm thông tin chi tiết về phần thưởng Indexer ròng của bạn và phí truy vấn ròng. 
Bạn sẽ thấy các số liệu sau: @@ -223,13 +223,13 @@ Lưu ý rằng biểu đồ này có thể cuộn theo chiều ngang, vì vậy ### Tab Curating -Trong tab Curation, bạn sẽ tìm thấy tất cả các subgraph mà bạn đang báo hiệu (do đó cho phép bạn nhận phí truy vấn). Báo hiệu cho phép Curator đánh dấu cho Indexer biết những subgraph nào có giá trị và đáng tin cậy, do đó báo hiệu rằng chúng cần được lập chỉ mục. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. Trong tab này, bạn sẽ tìm thấy tổng quan về: -- Tất cả các subgraph bạn đang quản lý với các chi tiết về tín hiệu -- Tổng cổ phần trên mỗi subgraph -- Phần thưởng truy vấn cho mỗi subgraph +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Chi tiết ngày được cập nhật ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/vi/subgraphs/guides/_meta.js b/website/src/pages/vi/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/vi/subgraphs/guides/_meta.js +++ b/website/src/pages/vi/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/vi/subgraphs/guides/arweave.mdx b/website/src/pages/vi/subgraphs/guides/arweave.mdx index 08e6c4257268..e59abffa383f 100644 --- a/website/src/pages/vi/subgraphs/guides/arweave.mdx +++ b/website/src/pages/vi/subgraphs/guides/arweave.mdx @@ -92,9 +92,9 @@ Arweave data sources support two types of handlers: - `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. 
Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` > The source.owner can be the owner's address, or their Public Key. - +> > Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. - +> > Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. ## Schema Definition diff --git a/website/src/pages/vi/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/vi/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..bdeb7b9249a0 100644 --- a/website/src/pages/vi/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/vi/subgraphs/guides/contract-analyzer.mdx @@ -2,11 +2,15 @@ title: Smart Contract Analysis with Cana CLI --- -# Cana CLI: Quick & Efficient Contract Analysis +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## Tổng quan -## 📌 Key Features +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: - Detect deployment blocks - Verify source code @@ -14,47 +18,59 @@ title: Smart Contract Analysis with Cana CLI - Identify proxy and implementation contracts - Support multiple chains -## 🚀 Installation & Setup +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup -Install Cana globally using npm: +1. 
Install Cana CLI + +Use npm to install it globally: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. Configure Cana CLI + +Set up a blockchain environment for analysis: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. -## 🍳 Usage +### Steps: Using Cana CLI for Smart Contract Analysis -### 🔹 Chain Selection +#### 1. Select a Chain -Cana supports multiple EVM-compatible chains. +Cana CLI supports multiple EVM-compatible chains. -List chains added with: +For a list of added chains, run this command: ```bash cana chains ``` -Then select a chain with: +Then select a chain with this command: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +Once a chain is selected, all subsequent contract analyses will continue on that chain. -### 🔹 Basic Contract Analysis +#### 2. Basic Contract Analysis -Analyze a contract with: +Run the following command to analyze a contract: ```bash cana analyze 0xContractAddress @@ -66,11 +82,11 @@ or cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +This command fetches and displays essential contract information in the terminal using a clear, organized format. -### 🔹 Understanding Output +#### 3.
Understanding the Output -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management Add and manage chains: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting +### Troubleshooting -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. -## ✅ Requirements - -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### Conclusion -Keep your contract analyses efficient and well-organized. 🚀 +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. 
diff --git a/website/src/pages/vi/subgraphs/guides/grafting.mdx b/website/src/pages/vi/subgraphs/guides/grafting.mdx index d9abe0e70d2a..96df280d9198 100644 --- a/website/src/pages/vi/subgraphs/guides/grafting.mdx +++ b/website/src/pages/vi/subgraphs/guides/grafting.mdx @@ -16,7 +16,7 @@ The grafted Subgraph can use a GraphQL schema that is not identical to the one o - It turns non-nullable attributes into nullable attributes - It adds values to enums - It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Nó thay đổi đối với loại thực thể nào mà một giao diện được triển khai For more information, you can check: diff --git a/website/src/pages/vi/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/vi/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..4940d09d815a 100644 --- a/website/src/pages/vi/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/vi/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## Tổng quan We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/src/pages/vi/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/vi/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..65aeb9f4ac09 --- /dev/null +++ b/website/src/pages/vi/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. 
Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Giới thiệu + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in source Subgraphs, but only immutable entities can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **data sources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot apply further aggregations directly
+- Developers cannot compose an onchain data source with a Subgraph data source (i.e., you can’t use regular event, call, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Bắt đầu
+
+The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is focused on one specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g., gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from the three source Subgraphs, providing a comprehensive view of block statistics and enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability and simplifies both development and maintenance.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
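+To make the Step 4 data model concrete, here is an illustrative schema sketch for the composed Subgraph — the entity and field names are assumptions for illustration, not the exact code from the example repository:
+
+```graphql
+# Illustrative composed-Subgraph entity consolidating the three source datasets.
+type BlockStats @entity(immutable: true) {
+  id: Bytes!
+  number: BigInt!
+  timestamp: BigInt! # from the block time source Subgraph
+  cost: BigDecimal! # from the block cost source Subgraph
+  size: BigInt! # from the block size source Subgraph
+}
+```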
diff --git a/website/src/pages/vi/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/vi/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..533ed1c52155 100644 --- a/website/src/pages/vi/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/vi/subgraphs/guides/transfer-to-the-graph.mdx @@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n ## Upgrade Your Subgraph to The Graph in 3 Easy Steps -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) ## 1. Set Up Your Studio Environment @@ -74,7 +74,7 @@ graph deploy --ipfs-hash You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. 
-#### Example +#### Ví dụ [CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: diff --git a/website/src/pages/vi/subgraphs/querying/best-practices.mdx b/website/src/pages/vi/subgraphs/querying/best-practices.mdx index ff5f381e2993..ab02b27cbc03 100644 --- a/website/src/pages/vi/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/vi/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/vi/subgraphs/querying/from-an-application.mdx b/website/src/pages/vi/subgraphs/querying/from-an-application.mdx index af623acbabbe..bac7467a2443 100644 --- a/website/src/pages/vi/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/vi/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Truy vấn từ một ứng dụng +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. @@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Step 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Step 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Step 1 diff --git a/website/src/pages/vi/subgraphs/querying/graph-client/README.md b/website/src/pages/vi/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/vi/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/vi/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/vi/subgraphs/querying/graphql-api.mdx b/website/src/pages/vi/subgraphs/querying/graphql-api.mdx index 3056a573e67f..65547da41195 100644 --- a/website/src/pages/vi/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/vi/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
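+For example, for a hypothetical `Token` entity (substitute a type from your own schema), both generated fields can be used in one query:
+
+```graphql
+{
+  # Singular field: fetch a single entity by ID
+  token(id: "0x...") {
+    id
+  }
+  # Plural field: fetch a filtered, paginated list
+  tokens(first: 5) {
+    id
+  }
+}
+```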
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. 
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. 
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/vi/subgraphs/querying/introduction.mdx b/website/src/pages/vi/subgraphs/querying/introduction.mdx index c3ca53a89f97..2867eb642cb2 100644 --- a/website/src/pages/vi/subgraphs/querying/introduction.mdx +++ b/website/src/pages/vi/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Tổng quan -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/vi/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/vi/subgraphs/querying/managing-api-keys.mdx index 7475b0910885..259a727ed2df 100644 --- a/website/src/pages/vi/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/vi/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Tổng quan -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/vi/subgraphs/querying/python.mdx b/website/src/pages/vi/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/vi/subgraphs/querying/python.mdx +++ b/website/src/pages/vi/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/vi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/vi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/vi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/vi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/vi/subgraphs/quick-start.mdx b/website/src/pages/vi/subgraphs/quick-start.mdx index 91b673bde83e..0af59e0b4c46 100644 --- a/website/src/pages/vi/subgraphs/quick-start.mdx +++ b/website/src/pages/vi/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Bắt đầu nhanh --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. 
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
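If you prefer to skip the interactive prompts, the same information can be passed as flags in a single invocation. This is a hedged sketch — the flag names below reflect recent `graph-cli` releases and may differ in your version (verify with `graph init --help`), and the contract address and slug are purely illustrative:

```sh
# Scaffold a Subgraph non-interactively from a deployed, verified contract.
# When the contract is verified on the network's block explorer, the ABI
# (and often the start block) is fetched automatically.
graph init \
  --protocol ethereum \
  --network mainnet \
  --from-contract 0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d \
  --index-events \
  my-subgraph-mainnet ./my-subgraph-mainnet
```

Passing `--index-events` corresponds to answering "true" to the "Index contract events as entities" prompt, generating a mapping for every emitted event.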
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-See the following screenshot for an example for what to expect when initializing your subgraph: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network.
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/vi/substreams/developing/dev-container.mdx b/website/src/pages/vi/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/vi/substreams/developing/dev-container.mdx +++ b/website/src/pages/vi/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/vi/substreams/developing/sinks.mdx b/website/src/pages/vi/substreams/developing/sinks.mdx index daf46cbcbb79..cda0fb403117 100644 --- a/website/src/pages/vi/substreams/developing/sinks.mdx +++ b/website/src/pages/vi/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
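As a minimal sketch, the two consumption paths described above boil down to one codegen command each. Run these from inside your Substreams project directory; the subcommand names are the ones referenced in the Substreams sink docs linked above:

```sh
# Generate a Subgraph sink: scaffolds a basic schema.graphql and mappings.ts
substreams codegen subgraph

# Or generate an SQL sink for database-backed queries
substreams codegen sql
```

Either command produces a starting point you can customize to the entities or tables your project needs.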
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/vi/substreams/developing/solana/account-changes.mdx b/website/src/pages/vi/substreams/developing/solana/account-changes.mdx index 6c1348ca28ef..f0b510c0d768 100644 --- a/website/src/pages/vi/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/vi/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. diff --git a/website/src/pages/vi/substreams/developing/solana/transactions.mdx b/website/src/pages/vi/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/vi/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/vi/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. 
### SQL diff --git a/website/src/pages/vi/substreams/introduction.mdx b/website/src/pages/vi/substreams/introduction.mdx index c51a4ffa7cf6..d0fad4821fe3 100644 --- a/website/src/pages/vi/substreams/introduction.mdx +++ b/website/src/pages/vi/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/vi/substreams/publishing.mdx b/website/src/pages/vi/substreams/publishing.mdx index 4fee8dc4facb..cefc8592d0c6 100644 --- a/website/src/pages/vi/substreams/publishing.mdx +++ b/website/src/pages/vi/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! 
You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/vi/supported-networks.mdx b/website/src/pages/vi/supported-networks.mdx index 63242b802eaf..f2af01f61c81 100644 --- a/website/src/pages/vi/supported-networks.mdx +++ b/website/src/pages/vi/supported-networks.mdx @@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. 
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/vi/token-api/_meta-titles.json b/website/src/pages/vi/token-api/_meta-titles.json index 692cec84bd58..7ed31e0af95d 100644 --- a/website/src/pages/vi/token-api/_meta-titles.json +++ b/website/src/pages/vi/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" } diff --git a/website/src/pages/vi/token-api/_meta.js b/website/src/pages/vi/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/vi/token-api/_meta.js +++ b/website/src/pages/vi/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/vi/token-api/faq.mdx b/website/src/pages/vi/token-api/faq.mdx new file mode 100644 index 000000000000..b2dcec14d671 --- /dev/null +++ b/website/src/pages/vi/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Khái quát + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? 
+ +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. 
Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error.
This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). 
"Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/vi/token-api/mcp/claude.mdx b/website/src/pages/vi/token-api/mcp/claude.mdx index 0da8f2be031d..12a036b6fc24 100644 --- a/website/src/pages/vi/token-api/mcp/claude.mdx +++ b/website/src/pages/vi/token-api/mcp/claude.mdx @@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } diff --git a/website/src/pages/vi/token-api/mcp/cline.mdx b/website/src/pages/vi/token-api/mcp/cline.mdx index ab54c0c8f6f0..ef98e45939fe 100644 --- a/website/src/pages/vi/token-api/mcp/cline.mdx +++ b/website/src/pages/vi/token-api/mcp/cline.mdx @@ -10,7 +10,7 @@ sidebarTitle: Cline - [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) ## Configuration diff --git a/website/src/pages/vi/token-api/quick-start.mdx b/website/src/pages/vi/token-api/quick-start.mdx index 4653c3d41ac6..4a426052097d 100644 --- a/website/src/pages/vi/token-api/quick-start.mdx +++ b/website/src/pages/vi/token-api/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Token API Quick Start -sidebarTitle: Quick Start +sidebarTitle: Bắt đầu nhanh --- ![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) diff --git a/website/src/pages/zh/about.mdx b/website/src/pages/zh/about.mdx index 81c40b3d9f61..df1778ca8c0b 100644 --- a/website/src/pages/zh/about.mdx +++ b/website/src/pages/zh/about.mdx @@ -1,56 +1,56 @@ --- -title: 关于 Graph +title: 关于 The Graph --- -## 什么是Graph? +## 什么是 The Graph? -The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. 
+The Graph是一个强大的去中心化协议,可以无缝地查询区块链数据并将其索引。 它简化了查询区块链数据的复杂过程,使 dapp 开发更快、更容易。 -## Understanding the Basics +## 了解基础知识 -Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. +像 [Uniswap](https://uniswap.org/)这样具有复杂智能合约的项目,以及像 [Bored Ape Yacht Club](https://boredapeyachtclub.com/) 这样的 NFT 项目,都在以太坊区块链上存储数据,因此,除了基本数据之外,很难直接从区块链上读取任何其他内容。 -### Challenges Without The Graph +### 没有The Graph的挑战 -In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. +在上面列出的例子中,Bored Ape Yacht Club,您可以在 [合约](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code)上进行基本的读取操作。 您可以读取某个Ape的所有者、根据其ID读取Ape的内容URI,或者读取总供应量。 -- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. +- 之所以能够做到这一点,是因为这些读取操作被直接编入智能合约本身。 然而,更高级、更具体、更贴近现实的查询和操作,如聚合、搜索、关系和复杂过滤, **是不可能的**。 -- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself.
+- 例如,如果您想要查询某个特定地址所拥有的Ape,并根据某个特定特征筛选您的搜索结果,你是无法通过与合约本身直接交互来获取这种信息的。 -- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. +- 为了获得这些数据,你必须处理曾经发出的每一个 [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) 事件,使用Token ID和IPFS哈希值从IPFS读取元数据,然后将其汇总。 -### Why is this a problem? +### 为什么这是个问题? -It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +在浏览器中运行的去中心化应用程序(dapp)需要**几小时甚至几天**才能得到这些简单问题的答案。 -Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +你也可以建立你自己的服务器,在那里处理交易,把它们保存到数据库,并在其上建立一个 API 端点,以便查询数据。 然而,这种选择是[资源密集型的](/resources/benefits/),需要维护,会出现单点故障,并破坏了去中心化所需的重要安全属性。 -Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. +区块链的属性,如最终性、链重组或叔块,使这一过程进一步复杂化,使得从区块链数据中检索出准确的查询结果不仅耗时,而且在概念上也很困难。 -## The Graph Provides a Solution +## The Graph提供了一个解决办法 -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed “subgraphs”) can then be queried with a standard GraphQL API.
+The Graph通过一个去中心化的协议解决了这个挑战,这个协议可以对区块链数据进行索引,并实现高效、高性能的查询。 然后,这些API(索引的 “Subgraphs”)可以用标准的 GraphQL API查询。 -Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. +今天有一个去中心化的协议,它得到了 [Graph Node](https://github.com/graphprotocol/graph-node) 开放源代码实现的支持。 -### How The Graph Functions +### The Graph 如何运作 -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +索引区块链数据是非常困难的,但是The Graph使它变得容易。The Graph通过子图学习如何索引以太坊数据。 子图是基于区块链数据构建的自定义 API,它从区块链中提取数据、进行处理并存储,以便可以通过 GraphQL 无缝查询。 -#### Specifics +#### 详情 -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph使用子图描述,即子图内部的子图清单。 -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- 子图描述定义了子图所关注的智能合约,这些合约中需要关注的事件,以及如何将事件数据映射到The Graph将存储在其数据库中的数据。 -- When creating a subgraph, you need to write a subgraph manifest. +- 创建子图时,您需要编写子图清单。 -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- 一旦编写了`子图清单`,就可以使用Graph CLI将定义存储在IPFS中,并告诉索引人开始为该子图的数据编制索引。 -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions.
+下图更详细地展示了部署子图清单后,以太坊交易的数据流。 -![一图解释Graph如何使用Graph节点向数据消费者提供查询的图形](/img/graph-dataflow.png) +![解释The Graph如何使用Graph节点向数据消费者提供查询的图形。](/img/graph-dataflow.png) 流程遵循这些步骤: @@ -62,6 +62,6 @@ The diagram below provides more detailed information about the flow of data afte ## 下一步 -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +以下各节更深入地介绍了子图、子图的部署以及数据查询。 -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +在编写自己的子图之前,建议您探索[Graph Explorer](https://thegraph.com/explorer)并查看一些已经部署的子图。每个子图的页面都包含一个GraphQL游乐场,允许您查询其数据。 diff --git a/website/src/pages/zh/archived/_meta-titles.json b/website/src/pages/zh/archived/_meta-titles.json index 9501304a4305..b39f6c46ac4e 100644 --- a/website/src/pages/zh/archived/_meta-titles.json +++ b/website/src/pages/zh/archived/_meta-titles.json @@ -1,3 +1,3 @@ { - "arbitrum": "Scaling with Arbitrum" + "arbitrum": "用Arbitrum扩展" } diff --git a/website/src/pages/zh/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/zh/archived/arbitrum/arbitrum-faq.mdx index cc912a21a269..1394b869f02e 100644 --- a/website/src/pages/zh/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/zh/archived/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum网络常见问题解答 如果您想跳到Arbitrum计费常见问题解答,请单击[here](#billing-on-arbitrum-faqs)。 -## Why did The Graph implement an L2 Solution? +## 为什么 The Graph 实施L2解决方案? -By scaling The Graph on L2, network participants can now benefit from: +通过在L2上扩展The Graph,网络参与者现在可以受益于: - 燃气费节省26倍以上
For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +将协议智能合约扩展到L2,允许网络参与者以较低的费用进行更频繁的交互。例如,索引人可以更频繁地打开和关闭分配,以索引更多的子图。开发人员可以更容易地部署和更新子图,委托人可以更频繁地委托GRT。策展人可以在更多的子图中添加或删除信号——由于费用的原因,以前认为这些操作成本过高,无法频繁执行。 去年,Graph社区在[GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) 讨论的结果之后,决定推进Arbitrum。 -## 我需要做什么才能在L2上使用Graph? +## 我需要做什么才能在L2上使用The Graph? -The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. +Graph的计费系统在Arbitrum上接受GRT,用户将需要Arbitrum的ETH来支付他们的费用。虽然Graph协议始于以太坊主网,但所有活动,包括计费合约,现在都在Arbitrum One上。 -Consequently, to pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this: +因此,要为查询付费,您需要Arbitrum上的GRT。以下是实现这一目标的几种不同方法: -- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges: +- 如果你已经在以太坊上有了GRT,你可以把它桥接到Arbitrum。您可以通过Subgraph Studio中提供的GRT桥接选项或使用以下桥接器之一来完成此操作: - - [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) + - [Arbitrum大桥](https://bridge.arbitrum.io/?l2ChainId=42161) - [TransferTo](https://transferto.xyz/swap) -- If you have other assets on Arbitrum, you can swap them for GRT through a swapping protocol like Uniswap. +- 如果你在Arbitrum上有其他资产,你可以通过Uniswap等交换协议将它们交换为GRT。 -- Alternatively, you can acquire GRT directly on Arbitrum through a decentralized exchange. +- 或者,您可以通过去中心化交易所直接在Arbitrum上获得GRT。 -Once you have GRT on Arbitrum, you can add it to your billing balance.
+一旦您在Arbitrum上获得了GRT,您就可以将其添加到您的账单余额中。 要使用L2上的Graph,请使用此下拉开关在链之间切换。 -![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) +![下拉式切换器,用于切换Arbitrum](/img/arbitrum-screenshot-toggle.png) ## 作为子图开发人员、数据消费者、索引人、策展人或委托人,我现在需要做什么? -Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. +网络参与者必须转移到Arbitrum,才能继续参与Graph Network。请参阅[L2传输工具指南](/archived/arbitrum/l2-transfer-tools-guide/) 以获取更多支持。 -All indexing rewards are now entirely on Arbitrum. +所有索引奖励现在都完全在Arbitrum上。 -## Were there any risks associated with scaling the network to L2? +## 将网络扩展到L2是否存在一些风险? -All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). +所有智能合约都经过了彻底[审核](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf)。 所有事项已经经过了彻底测试,并制定了应急计划,以确保安全和无缝过渡。详细信息可以在 [这里] (https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20)找到。 +所有事项已经经过了彻底测试,并制定了应急计划,以确保安全和无缝过渡。详细信息可以在[这里](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20)找到。 -## Are existing subgraphs on Ethereum working? +## 以太坊上现有的子图是否有效? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +所有子图现在都在Arbitrum上。请参阅 [L2传输工具指南](/archived/arbitrum/l2-transfer-tools-guide/) ,以确保您的子图无缝运行。 -## Does GRT have a new smart contract deployed on Arbitrum? +## GRT会在Arbitrum上部署新的智能合约吗? 是的,GRT在Arbitrum上有一个额外的[智能合约](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7)。然而,以太坊主网上的 [GRT合约](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7)将继续保持运营。 @@ -77,4 +77,4 @@ All subgraphs are now on Arbitrum.
Please refer to [L2 Transfer Tool Guide](/arc 您可以通过在[Subgraph Studio](https://thegraph.com/studio/)中进行一键体验,将GRT添加到您的Arbitrum计费余额中。您将能够通过一笔交易轻松将您的GRT桥接到Arbitrum并填写您的API密钥。 -Visit the [Billing page](/subgraphs/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT. +请访问[计费页面](/subgraphs/billing/),获取更详细的有关添加、提取或获取GRT的说明。 diff --git a/website/src/pages/zh/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/zh/archived/arbitrum/l2-transfer-tools-faq.mdx index 5ee091bbc5a3..76d4cec14156 100644 --- a/website/src/pages/zh/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/zh/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -14,7 +14,7 @@ title: L2转移工具常见问题解答 ### 我可以使用与以太坊主网上相同的钱包吗? -If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#types-of-account) wallet you can use the same address. If your Ethereum mainnet wallet is a contract (e.g. a multisig) then you must specify an [Arbitrum wallet address](/archived/arbitrum/arbitrum-faq/#what-do-i-need-to-do-to-use-the-graph-on-l2) where your transfer will be sent. Please check the address carefully as any transfers to an incorrect address can result in permanent loss. If you'd like to use a multisig on L2, make sure you deploy a multisig contract on Arbitrum One. 
+如果您使用[EOA](https://ethereum.org/en/developers/docs/accounts/#types-of-account)钱包,则可以使用相同的地址。如果您的以太坊主网钱包是一个合约(例如多重签名),那么您必须指定一个[Arbitrum钱包地址](/archived/arbitrum/arbitrum-faq/#what-do-i-need-to-do-to-use-the-graph-on-l2),您的转账将发送到该地址。请仔细检查地址,因为任何转移到错误地址的操作都可能导致永久性损失。如果你想在L2上使用多重签名,请确保在Arbitrum One上部署多重签名合约。 在像以太坊和 Arbitrum 这样的 EVM 区块链上,钱包是一对密钥(公钥和私钥),您可以在不需要与区块链进行任何交互的情况下创建。因此,任何在以太坊上创建的钱包也将在 Arbitrum 上运作,而无需采取其他任何行动。 @@ -24,7 +24,7 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type L2 传输工具使用 Arbitrum 的原生机制将信息从 L1 发送至 L2。这种机制被称为 "retryable ticket",所有本地令牌网桥都使用这种机制,包括Arbitrum GRT网桥。您可以在[Arbitrum文档](https://docs.arbitrum.io/arbos/l1-to-l2-messaging)中阅读更多关于retryable ticket的信息。 -当您将您的资产(子图、股权、委托)转移到 L2 时,会通过 Arbitrum GRT 桥接器发送一条信息,该桥接器会在 L2 中创建一个可retryable ticket。转移工具在交易中包含一些 ETH ,用于:1)支付创建票据的费用;2)支付在 L2 中执行票据的气体费用。但是,在票据准备好在 L2 中执行之前,gas价格可能会发生变化,因此自动执行尝试可能会失败。当这种情况发生时,Arbitrum 桥接器会将retryable ticket保留最多 7 天,任何人都可以重试 "赎回 "票据(这需要一个与 Arbitrum 桥接了一些 ETH 的钱包)。 +当您将您的资产(子图、股权、委托)转移到 L2 时,会通过 Arbitrum GRT 桥接器发送一条信息,该桥接器会在 L2 中创建一个 retryable ticket。转移工具在交易中包含一些 ETH ,用于:1)支付创建票据的费用;2)支付在 L2 中执行票据的燃气费用。但是,在票据准备好在 L2 中执行之前,燃气价格可能会发生变化,因此自动执行尝试可能会失败。当这种情况发生时,Arbitrum 桥接器会将该可重试票据保留最多 7 天,任何人都可以重试 "赎回 "票据(这需要一个与 Arbitrum 桥接了一些 ETH 的钱包)。 这就是我们在所有传输工具中所说的 "确认 "步骤--在大多数情况下,它会自动运行,因为自动执行通常都会成功,但重要的是,您要回过头来检查,以确保它成功了。如果没有成功,并且在 7 天内没有成功的重试,Arbitrum 桥接器将丢弃该票据,您的资产(子图、股权、委托或管理)将丢失且无法恢复。The Graph核心开发人员有一个监控系统来检测这些情况,并尝试在为时已晚之前赎回门票,但确保您的转让及时完成最终还是您的责任。如果您在确认交易时遇到困难,请使用[此表单](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms)联系我们,核心开发人员将为您提供帮助。 @@ -70,7 +70,7 @@ L2 传输工具使用 Arbitrum 的原生机制将信息从 L1 发送至 L2。这 要使用子图转移工具,您的子图必须已经发布到以太坊主网上,并且拥有子图的钱包必须拥有一定的策划信号。如果您的子图尚未发布,建议您直接在Arbitrum One上进行发布-相关的燃气费用将大大降低。如果您想转移已发布的子图,但拥有该子图的所有者账户尚未对其进行任何策划信号的策展,您可以从该账户中发送一小笔金额(例如1 GRT)进行信号,确保选择“自动迁移”信号。 -### 我将我的子图转移到Arbitrum后,以太坊主网版本的子图会发生什么? +### 我将子图转移到Arbitrum后,以太坊主网版本的子图会发生什么?
将子图转移到Arbitrum后,以太坊主网版本的子图将被弃用。我们建议您在48小时内更新查询URL。但是,我们已经设置了一个宽限期使您的主网URL继续可用,以便更新任何第三方dapp的支持。 @@ -274,7 +274,7 @@ L2 转移工具将始终将您的委托转移到您先前委托的同一索引 ### 如果我使用 GRT 解锁合约(GRT vesting contract)/令牌锁定钱包,我可以转移质押吗? -可以!由于解锁合约(vesting contracts)无法转发用于支付 L2 交易费用的 ETH,所以流程略有不同,你需要事先存入所需的 ETH。如果你的解锁合约尚未完全解锁,你还需要在 L2 上先初始化一个对应的解锁合约,并且只能将质押转移到此 L2 解锁合约。Explorer 上的用户界面可以指导你在使用解锁钱包(vesting lock wallet)连接到 Explorer 时完成这个过程。 +可以!由于解锁合约无法转发用于支付 L2 交易费用的 ETH,所以流程略有不同,你需要事先存入所需的 ETH。如果你的解锁合约尚未完全解锁,你还需要在 L2 上先初始化一个对应的解锁合约,并且只能将质押转移到此 L2 解锁合约。Explorer 上的用户界面可以指导你在使用解锁钱包连接到 Explorer 时完成这个过程。 ### 我已经在L2有质押。当我第一次使用转移工具时,是否仍需要发送 100,000 GRT? diff --git a/website/src/pages/zh/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/zh/archived/arbitrum/l2-transfer-tools-guide.mdx index da4756a834dd..8dcdbcca1d4e 100644 --- a/website/src/pages/zh/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/zh/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -1,10 +1,10 @@ --- -title: L2 Transfer Tools Guide +title: L2转移工具指南 --- -The Graph has made it easy to move to L2 on Arbitrum One. For each protocol participant, there are a set of L2 Transfer Tools to make transferring to L2 seamless for all network participants. These tools will require you to follow a specific set of steps depending on what you are transferring. +Graph 使迁移到 Arbitrum One(L2) 上变得非常容易。对于每个协议参与者,都有一组 L2 转账工具,使所有网络参与者无缝地迁移到 L2。根据你要转移的内容,这些工具会要求你按照特定的步骤操作。 -Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
+关于这些工具的一些常见问题在 [L2转移工具FAQ](/archived/arbitrum/l2-transfer-tools-faq/) 中有详细解答。FAQ 中深入解释了如何使用这些工具、它们的工作原理以及在使用过程中需要注意的事项。 ## 如何将你的子图转移到 Arbitrum (L2) @@ -14,7 +14,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools 过去一年里,Graph社区和核心开发人员一直在为迁移到 Arbitrum [做准备](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) 。Arbitrum 是一种二层网络或“L2”区块链,继承了以太坊的安全性,但提供了大幅降低的燃气费用。 -当您将子图发布或升级到Graph网络时,您将与协议上的智能合约进行交互,这需要使用以太币(ETH)支付燃气费用。通过将您的子图迁移到Arbitrum,将来对您的子图进行的任何更新将需要更低的燃气费用。较低的费用以及L2网络上平滑的曲线,使其他策展人更容易在您的子图上进行策展,从而增加了在您的子图上的索引人的奖励。这种较低成本的环境还使得索引人更便宜地对您的子图进行索引和服务。在接下来的几个月里,Arbitrum上的索引奖励将增加,而以太坊主网上的索引奖励将减少,因此越来越多的索引器将会将他们的质押迁移到L2网络并在该网络上设置运营。 +当您将子图发布或升级到Graph网络时,您将与协议上的智能合约进行交互,这需要使用以太币(ETH)支付燃气费用。通过将您的子图迁移到Arbitrum,将来对您的子图进行的任何更新将需要更低的燃气费用。较低的费用以及L2网络上平滑的曲线,使其他策展人更容易在您的子图上进行策展,从而增加了在您的子图上的索引人的奖励。这种较低成本的环境还使得索引人更便宜地对您的子图进行索引和服务。在接下来的几个月里,Arbitrum上的索引奖励将增加,而以太坊主网上的索引奖励将减少,因此越来越多的索引人将会将他们的质押迁移到L2网络并在该网络上设置运营。 ## 理解信号、你的 L1 子图和查询 URL 的变化 @@ -24,7 +24,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools 其他策展人可以选择是否提取他们所占份额的 GRT,或者将其转移到 L2 上的同一子图上,以铸造新的策展信号。如果一个子图所有者不将他们的子图转移到 L2 并通过合约调用手动废弃它,那么策展人将收到通知并可以提取他们的策展。 -一旦子图转移完成,由于所有策展都转换为 GRT,索引器将不再因索引子图而获得奖励。但是,有些索引器会保持对转移的子图进行 24 小时的服务,并立即开始在 L2 上进行子图索引。由于这些索引人已经对子图进行了索引,所以无需等待子图同步,几乎可以立即查询 L2 子图。 +一旦子图转移完成,由于所有策展都转换为 GRT,索引人将不再因索引子图而获得奖励。但是,有些索引人会保持对转移的子图进行 24 小时的服务,并立即开始在 L2 上进行子图索引。由于这些索引人已经对子图进行了索引,所以无需等待子图同步,几乎可以立即查询 L2 子图。 对 L2 子图的查询需要使用不同的 URL(在 `arbitrum-gateway.thegraph.com` 上),但 L1 URL 将继续工作至少 48 小时。之后,L1 网关将把查询转发到 L2 网关(一段时间内),但这会增加延迟,因此建议尽快将所有查询切换到新的 URL。 @@ -42,7 +42,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## 为转移做准备:转移一些 ETH -转移子图涉及通过跨链桥发送一个交易,然后在 Arbitrum 上执行另一个交易。第一个交易使用主网上的 ETH,并包含一些 ETH 用于接收 L2 上的消息时支付燃气费用。然而,如果这个燃气费用不足,你将不得不重试交易,并直接在 L2 上支付燃气费用(这是下面的“第 3 步:确认转移”)。这一步必须在开始转移后的 7 天内执行。此外,第二个交易(“第 4 步:在 L2 上完成转移”)将直接在 Arbitrum 上执行。因此,你需要在 Arbitrum 钱包中拥有一些 ETH。如果你使用的是多签或智能合约账户,则 ETH
必须在你用于执行交易的常规(EOA)钱包中,而不是多签钱包本身。 +转移子图涉及通过跨链桥发送一个交易,然后在 Arbitrum 上执行另一个交易。第一个交易使用主网上的 ETH,并包含一些 ETH 用于接收 L2 上的消息时支付燃气费用。然而,如果这个燃气费用不足,你将不得不重试交易,并直接在 L2 上支付燃气费用(这是下面的“第 3 步:确认转移”)。这一步**必须在开始转移后的 7 天内执行**。此外,第二个交易(“第 4 步:在 L2 上完成转移”)将直接在 Arbitrum 上执行。因此,你需要在 Arbitrum 钱包中拥有一些 ETH。如果你使用的是多签或智能合约账户,则 ETH 必须在你用于执行交易的常规(EOA)钱包中,而不是多签钱包本身。 你可以在一些交易所购买 ETH,并直接将其提取到 Arbitrum,或者你可以使用 Arbitrum 跨链桥将 ETH 从主网钱包发送到 L2:[bridge.arbitrum.io](http://bridge.arbitrum.io)。由于 Arbitrum 上的燃气费用较低,你只需要一小笔资金即可。建议你设置一个较低的阈值(例如 0.01 ETH),以便你的交易得到批准。 @@ -60,13 +60,13 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## 第 1 步:开始转移 -在开始转移之前,你必须决定哪个地址将在 L2 上拥有这个子图(参见上面的“选择你的 L2 钱包”),并且强烈建议提前转移一些 ETH到 Arbitrum (参见上面的“为转移做准备:转移一些 ETH”) +在开始转移之前,你必须决定哪个地址将在 L2 上拥有这个子图(参见上面的“选择你的 L2 钱包”),并且强烈建议提前转移一些 ETH到 Arbitrum (参见上面的“为转移做准备:转移一些 ETH”)。 另外,请注意转移子图需要在拥有与子图相同账户的非零信号 GRT 的情况下进行;如果你没有对子图发出信号,你将需要添加一点策展(添加少量,如 1 GRT 就足够)。 -在打开转移工具后,你将能够在“接收钱包地址”字段中输入 L2 钱包地址-请确保你在这里输入的地址是正确的。点击 "Transfer Subgraph" 将提示你在钱包上执行交易(注意,其中包含一定数量的 ETH,用于支付 L2 燃气费用);这将启动转移并废弃你的 L1 子图(关于背后发生的详细信息,请参见上面的“理解信号、你的 L1 子图和查询 URL 的变化”)。 +在打开转移工具后,你将能够在“接收钱包地址”字段中输入 L2 钱包地址-**请确保你在这里输入的地址是正确的**。点击 "Transfer Subgraph" 将提示你在钱包上执行交易(注意,其中包含一定数量的 ETH,用于支付 L2 燃气费用);这将启动转移并废弃你的 L1 子图(关于背后发生的详细信息,请参见上面的“理解信号、你的 L1 子图和查询 URL 的变化”)。 -如果你执行了此步骤,确保在 7 天内完成第 3 步,否则子图和你的信号 GRT 将会丢失。这是由于 L1-L2 消息在 Arbitrum 上的工作方式:通过跨链桥发送的消息是“可重试的票据”,必须在 7 天内执行。如果 Arbitrum 上的燃气价格飙升,初始执行可能需要重试。 +如果你执行了此步骤,**确保在 7 天内完成第 3 步,否则子图和你的信号 GRT 将会丢失**。这是由于 L1-L2 消息在 Arbitrum 上的工作方式:通过跨链桥发送的消息是“可重试的票据”,必须在 7 天内执行。如果 Arbitrum 上的燃气价格飙升,初始执行可能需要重试。 ![Start the transfer to L2](/img/startTransferL2.png) @@ -92,7 +92,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png)
这将发布子图,使在 Arbitrum 上运行的索引人可以开始提供服务。它还将使用从 L1 转移过来的 GRT 铸造策展信号。 diff --git a/website/src/pages/zh/archived/sunrise.mdx b/website/src/pages/zh/archived/sunrise.mdx index a768ee33d016..79096b600ce6 100644 --- a/website/src/pages/zh/archived/sunrise.mdx +++ b/website/src/pages/zh/archived/sunrise.mdx @@ -1,80 +1,80 @@ --- -title: 黎明后+升级到Graph网络常见问题 -sidebarTitle: Post-Sunrise Upgrade FAQ +title: Post-Sunrise+升级到The Graph网络常见问题 +sidebarTitle: Post-Sunrise升级常见问题 --- -> Note: The Sunrise of Decentralized Data ended June 12th, 2024. +> 注:去中心化数据的Sunrise结束于2024年6月12日。 ## 去中心化数据的黎明是什么? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +去中心化数据的Sunrise是由Edge&Node牵头的一项倡议。这一举措使子图开发人员能够无缝升级到The Graph的去中心化网络。 -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +该计划借鉴了The Graph生态系统之前的发展,包括升级索引人,为新发布的子图提供查询服务。 -### What happened to the hosted service? +### 托管服务怎么了? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +托管服务查询终结点不再可用,开发人员无法在托管服务上部署新的子图。 -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +在升级过程中,托管服务子图的所有者可以将其子图升级到The Graph网络。此外,开发人员还可以声明自动升级的子图。 -### Was Subgraph Studio impacted by this upgrade? +### Subgraph Studio是否受到此次升级的影响? -No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. +不,Subgraph Studio没有受到Sunrise的影响。子图立即可用于查询,由升级索引人提供支持,该索引人使用与托管服务相同的基础架构。 -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### 为什么子图被发布到Arbitrum,它是否开始索引不同的网络? 
-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph网络最初部署在以太坊主网上,但后来被转移到Arbitrum One,以降低所有用户的燃气成本。因此,所有新子图都会发布到Arbitrum上的The Graph网络,以便索引人可以支持它们。Arbitrum是子图发布到的网络,但子图可以索引任何[支持的网络](/supported-networks/)。 -## About the Upgrade Indexer +## 关于升级索引人 -> The upgrade Indexer is currently active. +> 升级索引人当前处于活动状态。 -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +“升级索引人” 旨在改善从托管服务迁移到The Graph网络的子图体验,以及支持尚未被索引的现有子图的新版本。 -### What does the upgrade Indexer do? +### 升级索引人是做什么的? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. -- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- 升级索引人旨在启动尚未在The Graph网络上具有索引奖励的链,并确保在子图发布后,索引人可以尽快为查询提供服务。 +- 支持以前仅在托管服务上可用的链。在[此处](/supported-networks/)查找支持的链的完整列表。 +- 操作升级索引人的索引人将其作为一项公共服务来支持新的子图和在The Graph委员会批准之前缺乏索引奖励的附加链。 -### 为什么 Edge & Node 运行升级索引器? +### 为什么 Edge & Node 运行升级索引人? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +在历史上,Edge & Node一直维护托管服务,因此已同步了托管服务子图的数据。 -### What does the upgrade indexer mean for existing Indexers?
+### 这对现有的索引人意味着什么? -Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. +以前仅在托管服务上支持的链最初在没有索引奖励的情况下提供给The Graph网络上的开发人员。 -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +然而,此操作为任何感兴趣的索引人解锁了查询费用,并增加了在The Graph网络上发布的子图数量。因此,即使在为链启用索引奖励之前,索引人也有更多机会为这些子图进行索引和服务,以换取查询费用。 -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +升级索引人还向索引人社区提供关于The Graph网络上潜在的子图需求和新链的信息。 -### What does this mean for Delegators? +### 这对于委托人来说意味着什么? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +升级索引人为代币委托人提供了强大的机会。随着越来越多的子图从托管服务迁移到The Graph网络,委托人将从增加的网络活动中获益。 -### Did the upgrade Indexer compete with existing Indexers for rewards? +### 升级索引人会与现有的索引人竞争奖励吗? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +不,升级索引人只会为每个子图分配最低金额,并不会收集索引奖励。 -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +它按“按需”方式运行,在网络中至少有三个其他索引人为相应的链和子图达到足够的服务质量之前,充当后备。 -### How does this affect subgraph developers? +### 这将如何影响子图开发者? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing.
Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +从托管服务升级或从[Subgraph Studio发布](/subgraphs/developing/publishing/publishing-a-subgraph/)后,子图开发人员几乎可以立即在The Graph网络上查询他们的子图,因为索引不需要提前时间。请注意,[创建子图](/developing/creating-a-subgraph/)不受此升级的影响。 -### How does the upgrade Indexer benefit data consumers? +### 升级索引人如何使数据消费者受益? -The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. +升级索引人使以前仅在托管服务上受支持的链能够在网络上使用,从而扩大了可以在网络上查询的数据范围和可用性。 -### How does the upgrade Indexer price queries? +### 升级的索引人将如何定价查询? -The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. +升级的索引人将按市场价格定价查询,避免影响查询费用市场。 -### When will the upgrade Indexer stop supporting a subgraph? +### 升级索引人何时停止支持子图? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +升级索引人支持子图,直到至少有三个其他索引人成功并一致地为对其进行的查询提供服务。 -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +此外,如果一个子图在过去的三十天内没有被查询,升级的索引人将停止支持该子图。
+其他索引人会被激励以支持有持续查询量的子图。因此,升级的索引人的查询量应该趋近于零,因为该索引人的分配容量较小,其他索引人将在升级的索引人之前被选中进行查询。 diff --git a/website/src/pages/zh/contracts.json b/website/src/pages/zh/contracts.json index ff6baee6ceb7..a5cbcfcb8872 100644 --- a/website/src/pages/zh/contracts.json +++ b/website/src/pages/zh/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "合约", "address": "地址" } diff --git a/website/src/pages/zh/contracts.mdx b/website/src/pages/zh/contracts.mdx index 3938844149c1..dee438e37b8a 100644 --- a/website/src/pages/zh/contracts.mdx +++ b/website/src/pages/zh/contracts.mdx @@ -1,26 +1,26 @@ --- -title: Protocol Contracts +title: 协议合约 --- import { ProtocolContractsTable } from '@/contracts' -Below are the deployed contracts which power The Graph Network. Visit the official [contracts repository](https://github.com/graphprotocol/contracts) to learn more. +以下是为The Graph网络提供动力的已部署合约。访问官方[合约存储库](https://github.com/graphprotocol/contracts)了解更多信息。 ## Arbitrum -This is the principal deployment of The Graph Network. +这是The Graph网络的主要部署。 -## Mainnet +## 主网 -This was the original deployment of The Graph Network. [Learn more](/archived/arbitrum/arbitrum-faq/) about The Graph's scaling with Arbitrum. +这是The Graph网络的原始部署。[了解更多](/archived/arbitrum/arbitrum-faq/)关于The Graph使用Arbitrum进行缩放的信息。 ## Arbitrum Sepolia -This is the primary testnet for The Graph Network. Testnet is predominantly used by core developers and ecosystem participants for testing purposes. There are no guarantees of service or availability on The Graph's testnets. 
+这是The Graph网络的主要测试网。测试网主要由核心开发人员和生态系统参与者用于测试目的。The Graph的测试网络无法保证服务或可用性。 diff --git a/website/src/pages/zh/global.json b/website/src/pages/zh/global.json index 63c04e346008..7925897a8f30 100644 --- a/website/src/pages/zh/global.json +++ b/website/src/pages/zh/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "主导航", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "显示导航", + "hide": "隐藏导航", "subgraphs": "子图", "substreams": "子流", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "sps": "Substreams驱动的子图", + "tokenApi": "代币 API", + "indexing": "索引", + "resources": "资源", + "archived": "存档" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "最近更新", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "阅读时间", + "minutes": "分钟" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "上一页", + "next": "下一页", + "edit": "在GitHub上编辑", + "onThisPage": "在此页面上", + "tableOfContents": "目录", + "linkToThisSection": "链接到本节" }, "content": { - "note": "Note", - "video": "Video" + "callout": { + "note": "注意", + "tip": "小提示", + "important": "重要信息", + "warning": "警告", + "caution": "小心" + }, + "video": "视频" }, "openApi": { "parameters": { "pathParameters": "路径参数", "queryParameters": "查询参数", "headerParameters": "标头参数", "cookieParameters": "Cookie 参数", "parameter": "参数", "description": "描述", "value": "值", "required": "必填", "deprecated": "已废弃", "defaultValue": "默认值", "minimumValue": "最小值", "maximumValue": "最大值", "acceptedValues": "可选值", "acceptedPattern": "接受的模式", "format": "格式", "serializationFormat": "序列化格式" }, "request": { "label": "测试此端点", "noCredentialsRequired": "无需凭据", "send": "发送请求" }, "responses": { "potentialResponses": "可能的响应", "status":
"状态", + "description": "描述", + "liveResponse": "实时响应", + "example": "示例" + }, + "errors": { + "invalidApi": "无法获取 API {0}。", + "invalidOperation": "无法在 API {1} 中检索操作 {0}。" + } }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "哦!这个页面迷失在太空中了……", + "subtitle": "检查您是否使用正确地址或通过单击以下链接浏览网站。", + "back": "返回主页" } } diff --git a/website/src/pages/zh/index.json b/website/src/pages/zh/index.json index 5183e7025de8..73a3a54b8106 100644 --- a/website/src/pages/zh/index.json +++ b/website/src/pages/zh/index.json @@ -1,52 +1,52 @@ { "title": "主页", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "The Graph 文档", + "description": "使用提取、转换和加载区块链数据的工具启动您的 web3 项目。", + "cta1": "The Graph 是如何工作的", + "cta2": "创建你的第一个子图" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph 的产品", + "description": "选择一个适合您需求的解决方案,以您自己的方式与区块链数据交互。", "subgraphs": { "title": "子图", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "使用开放的 API 提取、处理和查询区块链数据。", + "cta": "开发子图" }, "substreams": { "title": "子流", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "通过并行执行获取和使用区块链数据。", + "cta": "开发子流" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "Substreams驱动的子图", + "description": "使用 Substreams 提升子图的效率和可扩展性。", + "cta": "设置一个Substreams驱动的子图" }, "graphNode": { "title": "Graph 节点", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "description": "索引区块链数据并通过 GraphQL 查询提供服务。", + "cta": "设置本地Graph节点" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "将区块链数据提取为平面文件,以缩短同步时间并增强流式传输能力。", + "cta": "开始使用Firehose" } }, "supportedNetworks": { "title": "支持的网络", "details": "Network Details", "services": "Services", - "type": "Type", + "type": "类型", "protocol": "Protocol", "identifier": "Identifier", "chainId": "Chain ID", "nativeCurrency": "Native Currency", - "docs": "Docs", + "docs": "相关文档", "shortName": "Short Name", - "guides": "Guides", + "guides": "指南", "search": "Search networks", "showTestnets": "Show Testnets", "loading": "Loading...", @@ -54,9 +54,9 @@ "infoText": "Boost your developer experience by enabling The Graph's indexing network.", "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph 支持 {0}。要添加新网络,{1}", + "networks": "网络", + "completeThisForm": "完成此表单" }, "emptySearch": { "title": "No networks found", @@ -65,12 +65,12 @@ "showTestnets": "Show testnets" }, "tableHeaders": { - "name": "Name", + "name": "名称", "id": "ID", - "subgraphs": "Subgraphs", - "substreams": "Substreams", + "subgraphs": "子图", + "substreams": "子流", "firehose": "Firehose", - "tokenapi": "Token API" + "tokenapi": "代币 API" } }, "networkGuides": { @@ -80,7 +80,7 @@ "description": "Kickstart your journey into subgraph development." }, "substreams": { - "title": "Substreams", + "title": "子流", "description": "Stream high-speed data for real-time indexing."
}, "timeseries": { @@ -92,7 +92,7 @@ "description": "Leverage features like custom data sources, event handlers, and topic filters." }, "billing": { - "title": "Billing", + "title": "计费", "description": "Optimize costs and manage billing efficiently." } }, @@ -120,56 +120,56 @@ } }, "guides": { - "title": "Guides", + "title": "指南", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "在 Graph Explorer 中查找数据", + "description": "为现有区块链数据利用数百个公共子图。" }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "发布子图", + "description": "添加子图到去中心化网络。" }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "发布子流", + "description": "将您的子流包发布到子流注册表。" }, "queryingBestPractices": { "title": "查询最佳实践", - "description": "Optimize your subgraph queries for faster, better results." + "description": "优化您的子图查询以获得更快更好的结果。" }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "优化的时间序列和聚合", + "description": "简化您的子图以提高效率。" }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "API密钥管理", + "description": "轻松为你的子图创建、管理和保护 API 密钥。" }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "迁移到The Graph", + "description": "从任何平台无缝升级你的子图。" } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "视频教程", + "watchOnYouTube": "在 YouTube 上观看", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? 
Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "1分钟了解The Graph", + "description": "什么是The Graph?它是如何工作的?为什么它对web3开发者如此重要?通过这段简短的非技术性视频,了解The Graph如何以及为何成为web3的支柱。" }, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "title": "什么是委托?", + "description": "委托人是通过将其GRT代币质押给索引人来帮助保护The Graph的关键参与者。这个视频解释了在委托之前需要理解的关键概念。" }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "如何使用子流驱动的子图索引Solana", + "description": "如果您熟悉子图,了解子流如何为关键用例提供不同的方法。这个视频将引导您完成构建第一个子流驱动的子图的过程。" } }, "time": { - "reading": "Reading time", - "duration": "Duration", - "minutes": "min" + "reading": "阅读时间", + "duration": "持续时间", + "minutes": "分钟" } } diff --git a/website/src/pages/zh/indexing/_meta-titles.json b/website/src/pages/zh/indexing/_meta-titles.json index 42f4de188fd4..b1be70e6e798 100644 --- a/website/src/pages/zh/indexing/_meta-titles.json +++ b/website/src/pages/zh/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "索引人工具" } diff --git a/website/src/pages/zh/indexing/chain-integration-overview.mdx b/website/src/pages/zh/indexing/chain-integration-overview.mdx index 425fdaced82a..8f7d7b4dbdc3 100644 --- a/website/src/pages/zh/indexing/chain-integration-overview.mdx +++ b/website/src/pages/zh/indexing/chain-integration-overview.mdx @@ -6,12 +6,12 @@ title: 链集成过程概述 ## 阶段1:技术集成 -- Please visit [New Chain Integration](/indexing/new-chain-integration/) for information on `graph-node` support for new chains.
+- 请访问[新链集成](/indexing/new-chain-integration/),了解新链的`graph-node`支持信息。 - 团队通过在[此处](https://forum.thegraph.com/c/governance-gips/new-chain-support/71)(治理与GIPs下的新数据源子类别)创建一个论坛帖子来启动协议集成过程。强制使用默认的论坛模板。 ## 阶段2:集成验证 -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- 团队与核心开发人员、Graph基金会以及GUI和网络网关(如[Subgraph Studio](https://thegraph.com/studio/))的运营商合作,以确保顺利的集成过程。这涉及提供必要的后端基础设施,例如集成链的JSON-RPC、Firehose或Substreams端点。想要避免自托管此类基础设施的团队可以利用Graph的节点运营商(索引人)社区来实现这一点,基金会可以提供帮助。 - Graph索引人在Graph的测试网上测试集成。 - 核心开发者和索引人监控稳定性、性能和数据确定性。 @@ -38,7 +38,7 @@ title: 链集成过程概述 这只会影响 Substreams 驱动的子图上的索引奖励的协议支持。新的 Firehose 实现需要在测试网上进行测试,遵循了本 GIP 中第二阶段所概述的方法论。同样地,假设实现是高性能且可靠的,那么需要在 [特征支持矩阵](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) 上提出 PR(`Substreams 数据源` 子图特性),以及一个新的 GIP 来支持索引奖励的协议。任何人都可以创建这个 PR 和 GIP;基金会将协助获得理事会的批准。 -### 3. How much time will the process of reaching full protocol support take? +### 3. 获得全面协议支持的过程需要多长时间? 主网上线预计还有数周时间,具体取决于集成开发的时间、是否需要额外的研究、测试和漏洞修复,以及始终如一地需要社区反馈的治理过程的时间。 @@ -46,4 +46,4 @@ title: 链集成过程概述 ### 4. 如何处理优先事项?
+与第3点类似,这将取决于整体准备情况和相关利益相关者的带宽。例如,具有全新Firehose实现的新链可能需要比已经久经考验或在治理过程中走得更远的集成更长的时间。 diff --git a/website/src/pages/zh/indexing/new-chain-integration.mdx b/website/src/pages/zh/indexing/new-chain-integration.mdx index cb717c36d646..3083fd48bdbf 100644 --- a/website/src/pages/zh/indexing/new-chain-integration.mdx +++ b/website/src/pages/zh/indexing/new-chain-integration.mdx @@ -1,70 +1,70 @@ --- -title: New Chain Integration +title: 新链整合 --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +链可以通过启动新的`graph-node`集成为其生态系统带来子图支持。子图是一个强大的索引工具,为开发人员打开了一个充满可能性的世界。Graph节点已经对此处列出的链中的数据进行了索引。如果您对新的集成感兴趣,有两种集成策略: 1. **EVM JSON-RPC** -2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. +2. **Firehose**: 所有Firehose集成解决方案都包括Substreams,这是一个基于Firehose的大规模流式引擎,原生支持`graph-node`,允许并行转换。 -> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. +> 请注意,虽然推荐的方法是为所有新链开发新的Firehose,但只有非EVM链才需要。 -## Integration Strategies +## 整合策略 ### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain.
+如果区块链与EVM等效,并且客户端/节点公开标准的EVM JSON-RPC API,Graph节点应该能够索引新的链。 -#### Testing an EVM JSON-RPC +#### 测试EVM JSON-RPC -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: +为了使Graph节点能够从EVM链中获取数据,RPC节点必须公开以下EVM JSON-RPC方法: - `eth_getLogs` -- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_call`(对于历史区块,使用EIP-1898,需要归档节点) - `eth_getBlockByNumber` - `eth_getBlockByHash` - `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `eth_getTransactionReceipt`, 在JSON-RPC批量请求中 -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` *(有限跟踪,Graph节点可选要求)* -### 2. Firehose Integration +### 2. Firehose整合 -[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview)是下一代提取层。它以平面文件的形式收集历史记录,并实时流式传输。Firehose技术利用推送模型将数据更快地发送到索引节点,用数据流取代了那些轮询API调用。这有助于提高同步和索引的速度。 -> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be.
+> 注意:StreamingFast团队完成的所有集成都包括将Firehose复制协议维护到链的代码库中。StreamingFast会跟踪任何更改,并在您更改代码和StreamingFast更改代码时发布二进制文件。这包括为协议发布Firehose/Substreams二进制文件,为链的块模型维护Substreams模块,并在必要时为区块链节点发布带有插桩的二进制文件。 -#### Integration for Non-EVM chains +#### 非EVM链的集成 -The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. +将Firehose集成到链中的主要方法是使用RPC轮询策略。我们的轮询算法将预测新块何时到达,并提高在该时间附近检查新块的速率,使其成为一种非常低延迟和高效的解决方案。有关Firehose集成和维护的帮助,请联系[StreamingFast团队](https://www.streamingfast.io/firehose-integration-program)。新链及其集成商将欣赏Firehose和Substreams为其生态系统带来的[分叉意识](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) 和大规模并行索引功能。 -#### Specific Instrumentation for EVM (`geth`) chains +#### EVM(`geth`)链的特定插桩
This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. +对于EVM链,可以通过`geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0)实现更深层次的数据。该工具是Go-Ethereum与StreamingFast合作构建的高吞吐量、内容丰富的交易跟踪系统。Live Tracer是最全面的解决方案,可提供 [扩展](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) 的块细节。这启用了新的索引范式,例如基于状态更改、调用、父调用树的事件模式匹配,或基于智能合约中实际变量的更改触发事件。 -![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) +![基础块与扩展块](/img/extended-vs-base-substreams-blocks.png) -> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. +> 注意:此项对Firehose的改进要求链使用`geth version 1.13.0`及更高版本的EVM引擎。 -## EVM considerations - Difference between JSON-RPC & Firehose +## EVM注意事项-JSON-RPC和Firehose之间的区别 -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +虽然JSON-RPC和Firehose都适用于子图,但想要使用[Substreams](https://substreams.streamingfast.io)构建的开发人员总是需要Firehose。支持子流允许开发人员为新链构建[子流驱动的子图](/subgraphs/cookbook/substreams-powered-subgraphs/),并有可能提高子图的性能。此外,Firehose作为`graph-node`的JSON-RPC提取层的直接替代品,将常规索引所需的RPC调用数量减少了90%。
+- 所有这些`getLogs`调用和往返都被一个直达`graph-node`核心的单一数据流所取代:它为所处理的所有子图提供单一的块模型。 -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> 注意:基于Firehose的EVM链集成仍然需要索引人运行链的归档RPC节点来正确索引子图。这是由于Firehose无法提供通常可通过 `eth_call` RPC方法访问的智能合约状态。(值得提醒的是,`eth_call`对开发人员来说不是一种好的做法) ## Graph节点配置 -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +配置Graph节点就像准备本地环境一样简单。设置好本地环境后,您可以通过在本地部署子图来测试集成。 1. [克隆Graph节点](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC or Firehose compliant URL +2. 修改 [此行](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22),包括新网络名称和符合EVM JSON-RPC或Firehose的URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. + > 不要更改环境变量名称本身。它必须保持为 `ethereum` ,即使网络名称不同也是如此。   -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ +3. 运行IPFS节点,或使用The Graph使用的IPFS节点: https://api.thegraph.com/ipfs/ ## Substreams驱动的子图 -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/).
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +对于由StreamingFast主导的Firehose/Substreams集成,包括对基础Substreams模块(如解码交易、日志和智能合约事件)和Substreams代码生成工具的基本支持。这些工具允许启用[Substreams驱动的子图](/substreams/sps/introduction/)。按照 [操作指南](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) 运行`substreams codegen subgraph`,亲身体验codegen工具。 diff --git a/website/src/pages/zh/indexing/overview.mdx b/website/src/pages/zh/indexing/overview.mdx index 33c864e1dc69..a20d5cb185d4 100644 --- a/website/src/pages/zh/indexing/overview.mdx +++ b/website/src/pages/zh/indexing/overview.mdx @@ -9,39 +9,39 @@ sidebarTitle: 概述 索引人根据子图的策展信号选择要索引的子图,其中策展人质押 GRT 以指示哪些子图是高质量的并应优先考虑。 消费者(例如应用程序)还可以设置索引人处理其子图查询的参数,并设置查询费用定价的偏好。 -## FAQ +## 常见问题 -### What is the minimum stake required to be an Indexer on the network? +### 成为网络索引人所需的最低质押量是多少? -The minimum stake for an Indexer is currently set to 100K GRT. +索引人的最低质押量目前设置为 10 万 GRT。 -### What are the revenue streams for an Indexer? +### 索引人的收入来源是什么? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**查询费返利** - 为网络上的查询服务支付的费用。这些支付通过索引人和网关之间的状态通道进行调解。来自网关的每个查询请求都包含一笔支付,而相应的响应则包含查询结果有效性的证明。 -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**索引奖励** - 通过 3% 的年度协议范围通货膨胀生成,索引奖励分配给为网络索引子图部署的索引人。 -### How are indexing rewards distributed? +### 索引奖励如何分配? -Indexing rewards come from protocol inflation which is set to 3% annual issuance.
They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +索引奖励来自协议通胀,每年发行量设定为 3%。 它们根据每个子图上所有策展信号的比例分布在子图上,然后根据他们在该子图上分配的份额按比例分配给索引人。 **一项分配必须以符合仲裁章程规定的标准的有效索引证明(POI)来结束,才有资格获得奖励。** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +社区创建了许多用于计算奖励的工具,您会在[社区指南集合](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c)中找到它们的集合。 您还可以在[Discord 服务器](https://discord.gg/graphprotocol)上的 #Delegators 和 #Indexers 频道中找到最新的工具列表。在这里,我们链接一个[推荐的分配优化器](https://github.com/graphprotocol/allocation-optimizer) 与索引人软件栈集成。 -### What is a proof of indexing (POI)? +### 什么是索引证明 (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +网络中使用 POI 来验证索引人是否正在索引它们分配的子图。 关闭分配时,必须提交当前时期第一个区块的 POI,该分配才有资格获得索引奖励。 区块的 POI 是特定子图部署的所有实体存储交易的摘要,直到并包括该块。 -### When are indexing rewards distributed? +### 索引奖励什么时候分配? 
-Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +分配在活跃期间会持续累积奖励,最长 28 个时期。奖励由索引人收集,并在其分配关闭时分发。 这要么在索引人想要强制关闭分配时手动发生,要么在 28 个时期后由委托人代为关闭索引人的分配,但后者不会产生奖励。28 个时期是最大分配生命周期(现在,一个时期持续约 24 小时)。 -### Can pending indexing rewards be monitored? +### 可以监控待处理的索引人奖励吗? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +RewardsManager合约有一个只读的 [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) 函数,可用于检查特定分配的待处理奖励。 -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +许多社区制作的仪表板包含待处理的奖励值,通过以下步骤可以很容易地手动检查这些值: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. 查询[主网子图](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one)以获取所有活动分配的ID: ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +使用 Etherscan 调用 `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. 
- - Enter the **allocationID** in the input. - - Click the **Query** button. +- 导航到[Etherscan界面以获取奖励合约](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- 调用`getRewards()`: + - 展开 **9. getRewards** 下拉菜单。 + - 在输入框中输入 **allocationID**。 + - 点击**查询**按钮。 -### What are disputes and where can I view them? +### 争议是什么? 在哪里可以查看? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +在争议期间,索引人的查询和分配都可以在The Graph上进行争论。 争议期限因争议类型而异。 查询/证明有 7 个时期的争议窗口,而分配有 56 个时期。 在这些期限过后,不能对分配或查询提出争议。 当争议开始时,Fishermen需要至少 10000 GRT 的押金,押金将被锁定,直到争议结束并给出解决方案。 Fishermen是任何引发争议的网络参与者。 -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +争议有**三种**可能的结果,Fishermen的存款也是如此。 -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- 如果争议被驳回,Fishermen存入的 GRT 将被销毁,争议的索引人将不会被削减。 +- 如果以平局方式解决争议,Fishermen的押金将被退还,并且争议的索引人不会被削减。 +- 如果争议被接受,Fishermen存入的 GRT 将被退回,有争议的索引人将被削减,Fishermen将获得被削减的 GRT的50%。 -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. 
+争议可以在用户界面中的 `争议`标签下的索引人档案页中查看。 -### What are query fee rebates and when are they distributed? +### 什么是查询费返利? 何时分配? -Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. +查询费用由网关收取,并根据指数回扣函数(请参阅[此处](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)的GIP)分配给索引人。指数回扣函数被提出,作为一种通过忠实地服务查询来确保索引人获得最佳结果的方法。它的工作原理是激励索引人相对于他们可能收取的查询费用分配大量质押(在提供查询时可能会因出错而被削减)。 -Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +一旦分配关闭,索引人即可领取回扣。 领取时,查询费用回扣将根据查询费用削减和指数回扣函数分配给索引人及其委托人。 -### What is query fee cut and indexing reward cut? +### 什么是查询费减免和索引奖励减免? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. +`queryFeeCut` 和 `indexingRewardCut` 值是委托参数,索引人可以连同 cooldownBlocks 一起设置,以控制 GRT 在索引人及其委托人之间的分配。 有关设置委托参数的说明,请参阅[协议中的质押](/indexing/overview/#stake-in-the-protocol)的最后步骤。 -- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. 
+- **查询费用削减** - 将分配给索引人的查询费用回扣的百分比。 如果将其设置为 95%,则在分配关闭时,索引人将获得所赚取查询费用的 95%,另外 5% 将分配给委托人。 -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. +- **索引奖励削减** - 将分配给索引人的索引奖励的百分比。 如果将其设置为 95%,则当分配关闭时,索引人将获得索引奖励的 95%,而委托人将分配其他 5%。 -### How do Indexers know which subgraphs to index? +### 索引人如何知道要索引哪些子图? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +索引人可以通过应用高级技术来进行子图索引决策,从而使自己与众不同,但为了给出一个大致的概念,我们将讨论几个用于评估网络中子图的关键指标: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **策展信号** - 应用于特定子图的网络策展信号的比例是对该子图兴趣的一个很好的指标,尤其是在引导阶段,当查询量不断上升时。 -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **收取的查询费** - 特定子图收取的查询费的历史数据是未来需求的良好指标。 -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **质押量** - 监控其他索引人的行为或查看分配给特定子图的总质押量的比例,可以让索引人监控子图查询的供应方,以确定网络显示出信心的子图或可能显示出需要更多供应的子图。 -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. 
+- **没有索引奖励的子图** - 一些子图不会产生索引奖励,主要是因为它们使用了不受支持的功能,如 IPFS,或者因为它们正在查询主网之外的另一个网络。 如果子图未生成索引奖励,您将在子图上看到一条消息。 -### What are the hardware requirements? +### 对硬件有什么要求? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **小型** - 足以开始索引几个子图,可能需要扩展。 +- **标准** - 默认设置,这是在 k8s/terraform 部署清单示例中使用的。 +- **中型** - 生产型索引人支持 100 个子图和每秒 200-500 个请求。 +- **大型** -准备对当前使用的所有子图进行索引,并为相关流量的请求提供服务。 -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| 设置 | Postgres
(CPUs) | Postgres
(内存 GBs) | Postgres
(硬盘TBs) | VMs
(CPUs) | VMs
(内存 GBs) | | --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| 小型 | 4 | 8 | 1 | 4 | 16 | +| 标准 | 8 | 30 | 1 | 12 | 48 | +| 中型 | 16 | 64 | 2 | 32 | 64 | +| 大型 | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an Indexer should take? +### 索引人应该采取哪些基本的安全防范措施? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions. +- **运营商钱包** - 设置运营商钱包是一项重要的预防措施,因为它允许索引人在控制质押的密钥和控制日常操作的密钥之间保持分离。有关说明,请参阅[Stake in Protocol](/indexing/overview/#stake-in-the-protocol)。 -- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **防火墙** - 只有索引人服务需要公开,尤其要注意锁定管理端口和数据库访问:Graph 节点 JSON-RPC 端点(默认端口:8030)、索引人管理 API 端点(默认端口:18000)和 Postgres 数据库端点(默认端口:5432)不应暴露。 -## Infrastructure +## 基础设施 -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. 
+索引人基础设施的中心是Graph节点,监控索引网络,根据子图定义提取和加载数据,并将其作为[GraphQL API](/about/#how-the-graph-works)提供。Graph节点需要连接到一个端点,该端点暴露来自每个索引网络的数据;用于源数据的IPFS节点;用于其存储的PostgreSQL数据库;以及促进其与网络交互的索引人组件。 -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL 数据库** - Graph节点的主要存储,这是存储子图数据的地方。 索引人服务和代理也使用数据库来存储状态通道数据、成本模型、索引规则以及分配操作。 -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **数据端点**-对于兼容EVM的网络,Graph节点需要连接到一个公开兼容EVM JSON-RPC API的端点。这可以采取单个客户端的形式,也可以是跨多个客户端进行负载平衡的更复杂的设置。需要注意的是,某些子图将需要特定的客户端功能,如存档模式和/或奇偶校验跟踪API。 -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS 节点(版本小于 5)** - 子图部署元数据存储在 IPFS 网络上。 Graph节点在子图部署期间主要访问 IPFS 节点,以获取子图清单和所有链接文件。 网络索引人不需要托管自己的 IPFS 节点,网络的 IPFS 节点是托管在https://ipfs.network.thegraph.com。 -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. 
+- **索引人服务** -处理所有网络必要的外部通信。 共享成本模型和索引状态,将来自网关的查询请求传递给一个Graph节点,并通过状态通道与网关管理查询支付。 -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **索引人代理** - 促进索引人在链上的交互,包括在网络上注册,管理子图部署到其Graph节点,以及管理分配。 -- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. +- **Prometheus 指标服务器** - Graph节点 和索引人组件将其指标记录到指标服务器。 -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +注意:为了支持敏捷扩展,建议在不同的节点集之间分开查询和索引问题:查询节点和索引节点。 -### Ports overview +### 端口概述 -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below. +> **重要**: 公开暴露端口时要小心 - **管理端口** 应保持锁定。 这包括下面详述的Graph节点 JSON-RPC 和索引人管理端点。 #### Graph 节点 -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | GraphQL HTTP server
(用于子图查询) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(用于子图订阅) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(用于管理部署) | / | \--admin-port | - | +| 8030 | 子图索引状态 API | /graphql | \--index-node-port | - | +| 8040 | Prometheus 指标 | /metrics | \--metrics-port | - | -#### Indexer Service +#### 索引人服务 -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| 7600 | GraphQL HTTP 服务器
(用于付费子图查询) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus 指标 | /metrics | \--metrics-port | - | -#### Indexer Agent +#### 索引人代理 -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| 端口 | 目的 | 路径 | CLI 参数 | 环境 变量 | +| ---- | -------------- | ---- | ------------------- | --------------------------------------- | +| 8000 | 索引人管理 API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### 在谷歌云上使用 Terraform 建立服务器基础设施 -> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. +> 注意:索引人也可以选择使用 AWS、Microsoft Azure 或 Alibaba。 -#### Install prerequisites +#### 安装先决条件 -- Google Cloud SDK -- Kubectl command line tool +- 谷歌云 SDK +- Kubectl 命令行工具 - Terraform -#### Create a Google Cloud Project +#### 创建一个谷歌云项目 -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- 克隆或导航到[索引人存储库](https://github.com/graphprotocol/indexer)。 -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- 导航到 `./terraform` 目录,这是所有命令应该执行的地方。 ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- 通过谷歌云认证并创建一个新项目。 ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- 使用 Google Cloud Console 的计费页面为新项目启用计费。 -- Create a Google Cloud configuration. +- 创建谷歌云配置。 ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs.
+- 启用所需的 Google Cloud API。 ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- 创建一个服务账户。 ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- 启用将在下一步中创建的数据库和 Kubernetes 集群之间的对等连接。 ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- 创建最小的 terraform 配置文件(根据需要更新)。 ```sh indexer= @@ -260,11 +260,11 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### 使用 Terraform 创建基础设施 -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +在运行任何命令之前,先阅读 [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) 并在这个目录下创建一个文件`terraform.tfvars`(或者修改我们在上一步创建的文件)。 对于每一个想要覆盖默认值的变量,或者需要设置值的变量,在 `terraform.tfvars`中输入一个设置。 -- Run the following commands to create the infrastructure. +- 运行以下命令来创建基础设施。 ```sh # Install required plugins @@ -277,7 +277,7 @@ terraform plan terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. 
+将新集群的凭据下载到`~/.kube/config`中,并将其设置为默认上下文。 ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the Indexer +#### 为索引人创建 Kubernetes 组件 -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- 将目录 `k8s/overlays` 复制到新目录 `$dir`,并调整 `$dir/kustomization.yaml` 中的 `bases` 条目,使其指向目录 `k8s/base`。 -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- 通读 `$dir` 中的所有文件,并按照注释中的指示调整任何值。 -Deploy all resources with `kubectl apply -k $dir`. +用`kubectl apply -k $dir`部署所有资源。 ### Graph 节点 -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph节点](https://github.com/graphprotocol/graph-node) 是一个开源的 Rust 实现,它将以太坊区块链事件源化,以确定地更新一个数据存储,可以通过 GraphQL 端点进行查询。 开发者使用子图来定义他们的模式,以及一组用于转换区块链来源数据的映射,Graph 节点处理同步整个链,监控新的区块,并通过 GraphQL 端点提供服务。 -#### Getting started from source +#### 从源代码开始 -#### Install prerequisites +#### 安装先决条件 - **Rust** @@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Ubuntu 用户的附加要求** - 要在 Ubuntu 上运行 Graph 节点,可能需要一些附加的软件包。 ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Setup +#### 设置 -1. Start a PostgreSQL database server +1.
启动 PostgreSQL 数据库服务器 ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. 克隆[Graph 节点](https://github.com/graphprotocol/graph-node)repo,并通过运行 `cargo build`来构建源代码。 -3. Now that all the dependencies are setup, start the Graph Node: +3. 现在,所有的依赖关系都已设置完毕,启动 Graph节点: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### 使用 Docker -#### Prerequisites +#### 先决条件 -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Ethereum 节点** - 默认情况下,docker 编译设置将使用 mainnet:[http://host.docker.internal:8545](http://host.docker.internal:8545) 连接到主机上的以太坊节点。 你可以通过更新 `docker-compose.yaml`来替换这个网络名和 url。 -#### Setup +#### 设置 -1. Clone Graph Node and navigate to the Docker directory: +1. 克隆Graph节点并导航到Docker目录: ```sh git clone https://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml `using the included script: +2. 仅适用于linux用户 - 在`docker-compose.yaml`中使用主机IP地址代替 `host.docker.internal`并使用附带的脚本: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. 启动一个本地Graph节点,它将连接到你的以太坊端点: ```sh docker-compose up ``` -### Indexer components +### 索引人组件 -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. 
There are three Indexer components: +要成功地参与网络,需要几乎持续的监控和互动,所以我们建立了一套 Typescript 应用程序,以方便索引人的网络参与。 有三个索引人组件: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **索引人代理** - 代理监控网络和索引人自身的基础设施,并管理哪些子图部署被索引和分配到链上,以及分配到每个子图的数量。 -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **索引人服务** - 唯一需要对外暴露的组件,该服务将子图查询传递给 Graph 节点,管理查询支付的状态通道,将重要的决策信息分享给网关等客户端。 -- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. +- **索引人 CLI** - 用于管理索引人代理的命令行界面。 它允许索引人管理成本模型、手动分配、操作队列和索引规则。 -#### Getting started +#### 开始 -The Indexer agent and Indexer service should be co-located with your Graph Node infrastructure. There are many ways to set up virtual execution environments for your Indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://discord.gg/graphprotocol)! Remember to [stake in the protocol](/indexing/overview/#stake-in-the-protocol) before starting up your Indexer components! +索引人代理和索引人服务应该与你的 Graph节点基础设施共同定位。 有很多方法可以为你的索引人组件设置虚拟执行环境,这里我们将解释如何使用 NPM 包或源码在裸机上运行它们,或者通过谷歌云 Kubernetes 引擎上的 kubernetes 和 docker 运行。 如果这些设置示例不能很好地转化为你的基础设施,很可能会有一个社区指南供参考,请到 [Discord](https://discord.gg/graphprotocol) 打个招呼! 在启动您的 Indexer 组件之前,请记住[stake in the protocol](/indexing/overview/#stake-in-the-protocol)!
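作为补充,下面用一小段 Python 示意各组件默认暴露的本地端点(端口与路径取自上文的端口表;主机名 `localhost` 与这段脚本本身只是演示用的假设,并非官方工具):

```python
# 演示用的示意脚本:根据上文端口表拼出各组件的默认本地端点。
# 端口与路径来自本文的端口表;localhost 与脚本本身均为假设。
ENDPOINTS = {
    "indexer-service-status": ("localhost", 7600, "/status"),
    "indexer-service-metrics": ("localhost", 7300, "/metrics"),
    "graph-node-status-api": ("localhost", 8030, "/graphql"),
    "graph-node-metrics": ("localhost", 8040, "/metrics"),
}

def endpoint_url(name: str) -> str:
    """根据组件名返回完整的 HTTP 端点 URL。"""
    host, port, path = ENDPOINTS[name]
    return f"http://{host}:{port}{path}"

for name in ENDPOINTS:
    print(f"{name}: {endpoint_url(name)}")
```

在检查防火墙规则时,可以对照这类端点清单确认只有索引人服务被公开。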
-#### From NPM packages +#### 来自 NPM 包 ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### 从源代码 ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ... ``` -#### Using docker +#### 使用 docker -- Pull images from the registry +- 从注册表中拉取镜像 ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +或从源代码本地构建镜像 ```sh # Indexer service @@ -442,24 +442,24 @@ docker build \ -t indexer-agent:latest \ ``` -- Run the components +- 运行组件 ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). +**注意**:启动容器后,索引人服务应该可以在 [http://localhost:7600](http://localhost:7600) 访问,而索引人代理应该在 [http://localhost:18000/](http://localhost:18000/) 暴露索引人管理 API。 -#### Using K8s and Terraform +#### 使用 K8s 和 Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section +请参阅[在Google Cloud上使用Terraform设置服务器基础设施](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud)一节。 -#### Usage +#### 使用方法 -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **注意**: 所有的运行时配置变量可以在启动时作为参数应用到命令中,也可以使用格式为 `COMPONENT_NAME_VARIABLE_NAME`(例如
`INDEXER_AGENT_ETHEREUM`) 的环境变量。 -#### Indexer agent +#### 索引人代理 ```sh graph-indexer-agent start \ @@ -488,7 +488,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### 索引人服务 ```sh SERVER_HOST=localhost \ @@ -514,58 +514,58 @@ graph-indexer-service start \ | pino-pretty ``` -#### Indexer CLI +#### 索引人 CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +Indexer CLI 是一个可以在终端访问`graph indexer`的插件,地址是[`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli)。 ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### 使用Indexer CLI 管理索引人 -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. 
+与**Indexer Management API**交互的建议工具是 **Indexer CLI**,它是 **Graph CLI** 的扩展。Indexer 代理需要来自 Indexer 的输入,以便代表 Indexer 与网络进行自主交互。定义 Indexer 代理行为的机制是**分配管理**模式和**索引规则**。在自动模式下,Indexer 可以使用**索引规则**应用它们的特定策略,来选择要索引并为之服务查询的子图。规则通过代理提供的 GraphQL API 进行管理,称为 Indexer Management API。在手动模式下,索引人可以使用**操作队列**创建分配操作,并在操作队列执行之前显式批准它们。在监督模式下,**索引规则**用于填充**操作队列**,并且还需要执行的显式批准。 -#### Usage +#### 使用方法 -The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +**Indexer CLI**连接到索引人代理,通常是通过端口转发,因此 CLI 不需要运行在同一服务器或集群上。 为了帮助你入门,并提供一些背景,这里将简要介绍 CLI。 -- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - 连接到索引人管理 API。 通常情况下,与服务器的连接是通过端口转发打开的,所以 CLI 可以很容易地进行远程操作。 (例如: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent. +- `graph indexer rules get [options] [ ...]` - 获取一个或多个索引规则,使用 `all` 作为`` 来获取所有规则,或使用 `global` 来获取全局默认规则。 可以使用额外的参数 `--merged` 来指定将特定部署规则与全局规则合并。 这就是它们在索引人代理中的应用方式。 -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - 设置一个或多个索引规则。 -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - 开始索引子图部署(如果可用),并将其`decisionBasis`设置为`always`, 这样索引人代理将始终选择对其进行索引。 如果全局规则被设置为 `always`,那么网络上所有可用的子图都将被索引。 -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - 停止对某个部署进行索引,并将其 `decisionBasis`设置为 `never`, 这样它在决定要索引的部署时就会跳过这个部署。 -- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — 将部署的 `decisionBasis`设置为 `rules`, 这样索引人代理将使用索引规则来决定是否对这个部署进行索引。 -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - 使用 `all` 获取一个或多个操作,或者将 `action-id` 保持为空以获取所有操作。附加参数 `--status` 可以用来打印出某个状态的所有操作。 -- `graph indexer action queue allocate ` - Queue allocation action +- `graph indexer action queue allocate ` - 队列分配操作 -- `graph indexer action queue reallocate ` - Queue reallocate action +- `graph indexer action queue reallocate ` - 队列重新分配操作 -- `graph indexer action queue unallocate ` - Queue unallocate action +- `graph indexer action queue unallocate ` - 队列取消分配操作 -- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator +- `graph indexer actions cancel [ ...]` - 如果未指定 id,则取消队列中的所有操作,否则取消以空格作为分隔符的 id 数组 -- `graph indexer actions approve [ ...]` - Approve multiple actions for execution +- `graph indexer actions approve [ ...]` - 批准执行多个操作 -- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately +- `graph indexer actions execute approve` - 强制工作进程立即执行已批准的操作 -All commands which display rules in the output can choose between
the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +所有在输出中显示规则的命令都可以使用 `-output`参数在支持的输出格式(`table`、`yaml` 和 `json`)之间进行选择。 -#### Indexing rules +#### 索引规则 -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +索引规则可以作为全局默认值应用,也可以使用它们的 ID 应用于特定的子图部署。`deployment` 和 `decisionBasis` 字段是强制性的,而所有其他字段都是可选的。当索引规则以 `rules` 作为 `decisionBasis` 时,索引人代理将比较该规则上的非空阈值与从网络获取的用于相应部署的值。如果子图部署的值高于(或低于) 任何阈值,则将选择它进行索引。 -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +例如,如果全局规则的`minStake` 值为**5** (GRT), 则分配给它的份额超过 5 (GRT) 的任何子图部署都将被编入索引。 阈值规则包括`maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, 和 `minAverageQueryFees`。 -Data model: +数据模型: ```graphql type IndexingRule { @@ -599,7 +599,7 @@ IndexingDecisionBasis { } ``` -Example usage of indexing rule: +索引规则用法示例: ``` graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK @@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK ``` -#### Actions queue CLI +#### 操作队列 CLI -The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue.
+indexer-cli 提供了一个 `actions` 模块,用于手动处理操作队列。它使用由索引人管理服务器托管的 **Graphql API** 与操作队列进行交互。 -The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like: +操作执行工作进程只会从队列中取出 `ActionStatus = approved` 的操作来执行。在推荐的流程中,操作以 `ActionStatus = queued` 添加到队列,因此必须先被批准才能在链上执行。一般流程如下: -- Action added to the queue by the 3rd party optimizer tool or indexer-cli user -- Indexer can use the `indexer-cli` to view all queued actions -- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input. -- The execution worker regularly polls the queue for approved actions. It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`. -- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode. -- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken.
+- 第三方优化器工具或indexer-cli用户添加到队列的操作。 +- 索引人可以使用`indexer-cli`查看所有排队的操作。 +- 索引人(或其他软件)可以使用`indexer-cli`批准或取消队列中的操作。批准和取消命令将一组操作ID作为输入。 +- 执行工作进程定期轮询队列以获得已批准的操作。它将从队列中获取 `approved` 的操作,尝试执行它们,并根据执行状态将数据库中的值更新为 `success` 或 `failed`。 +- 如果操作成功,工作进程将确保存在索引规则,告诉代理后续如何管理该分配,这在代理处于 `auto` 或 `oversight` 模式时进行手动操作非常有用。 +- 索引人可以监视操作队列以查看操作执行的历史记录,如果需要,可以在操作项执行失败时重新批准和更新操作项。操作队列提供排队和执行的所有操作的历史记录。 -Data model: +数据模型: ```graphql Type ActionInput { @@ -657,7 +657,7 @@ ActionType { } ``` -Example usage from source: +从源代码运行的用法示例: ```bash graph indexer actions get all @@ -677,44 +677,44 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +请注意,分配管理支持的操作类型有不同的输入要求: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - 将份额分配给特定的子图部署 - - required action params: - - deploymentID - - amount + - 所需的操作参数: + - deploymentID + - amount -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `Unallocate` - 结束分配,腾出份额重新分配到其他地方 - - required action params: - - allocationID - - deploymentID - - optional action params: + - 所需的操作参数: + - allocationID + - deploymentID + - 可选操作参数: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force(强制使用所提供的 POI,即使它与 graph-node 提供的不匹配) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - 原子地关闭分配并为相同的子图部署打开新的分配 - - required action params: - - allocationID - - deploymentID - - amount - - optional action params: + - 所需的操作参数: + - allocationID + - deploymentID + - amount + - 可选操作参数: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force(强制使用所提供的 POI,即使它与 graph-node 提供的不匹配) -#### Cost models +#### 成本模型 -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries.
The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +成本模型根据市场和查询属性为查询提供动态定价。索引人服务与网关共享一个成本模型,用于它们打算响应查询的每个子图。反过来,网关使用成本模型对每个查询进行索引人选择决策,并与选定的索引人协商付款。 #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +Agora 语言提供了一种灵活的格式来声明查询的成本模型。 Agora 价格模型是一系列的语句,它们按照 GraphQL 查询中每个顶层查询的顺序执行。 对于每个顶层查询,第一个与其匹配的语句决定了该查询的价格。 -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +语句由一个用于匹配 GraphQL 查询的谓词和一个成本表达式组成,该表达式在评估时输出一个以十进制 GRT 表示的成本。 查询的命名参数位置中的值可以在谓词中捕获并在表达式中使用。 也可以设置全局变量,用于替换表达式中的占位符。 -Example cost model: +成本模型示例: ``` # This statement captures the skip value, @@ -727,91 +727,91 @@ query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTE default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +使用上述模型的查询成本计算示例: -| Query | Price | +| 查询 | 价格 | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### 应用成本模型 -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
+成本模型是通过索引人 CLI 应用的,CLI 将它们传递给索引人代理的索引人管理 API,以便存储在数据库中。 然后,索引人服务将接收这些模型,并在网关要求时将成本模型提供给它们。 ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## 与网络的交互 -### Stake in the protocol +### 在协议中进行质押 -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. +作为索引人参与网络的第一步是批准协议、质押资金,以及(可选)设置一个操作员地址以进行日常协议交互。 -> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> 注意: 在这些说明中,Remix 将用于合约交互,但请随意使用您选择的工具([OneClickDapp](https://oneclickdapp.com/)、[ABItopic](https://abitopic.io/) 和 [MyCrypto](https://www.mycrypto.com/account) 是其他一些已知的工具)。 -Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network. +一旦索引人将GRT置于协议中,[索引人组件](/indexing/overview/#indexer-components)就可以启动并开始与网络交互。 -#### Approve tokens +#### 批准代币 -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. 在浏览器中打开 [Remix app](https://remix.ethereum.org/)。 -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. 使用 [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json) 在 `File Explorer` 中创建一个名为 **GraphToken.abi** 的文件。 -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. 在编辑器中选择并打开 `GraphToken.abi` 后,切换到 Remix 界面中的 `Deploy and run transactions` 部分。 -4.
Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. 在环境中选择 `Injected Web3`,并在 `Account` 下选择你的索引人地址。 -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. 设置 GraphToken 合约地址 - 将 GraphToken 合约地址(`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`)粘贴到 `At Address` 旁边,并单击 `At address` 按钮以应用。 -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. 调用`approve(spender, amount)`函数以批准 Staking 合约。 用质押合约地址(`0xF55041E37E12cD407ad00CE2910B8269B01263b9`)填写 `spender`,并在 `amount` 中填写要质押的代币数量(以 wei 为单位)。 -#### Stake tokens +#### 质押代币 -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. 在浏览器中打开[Remix应用程序](https://remix.ethereum.org/)。 -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. 使用 staking ABI 在 `File Explorer` 中创建一个名为 **Staking.abi** 的文件。 -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. 在编辑器中选择并打开 `Staking.abi` 后,切换到 Remix 界面中的 `Deploy and run transactions` 部分。 -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. 在环境中选择 `Injected Web3`,并在 `Account` 下选择你的索引人地址。 -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. 设置质押合约地址 - 将质押合约地址(`0xF55041E37E12cD407ad00CE2910B8269B01263b9`)粘贴到 `At Address` 旁边,并单击 `At address` 按钮以应用。 -6. Call `stake()` to stake GRT in the protocol. +6. 调用 `stake()` 质押协议中的 GRT。 -7.
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (可选)索引人可以批准另一个地址作为其索引人基础设施的操作员,以便将控制资金的密钥与执行日常操作,例如在子图上分配和服务(付费)查询的密钥分开。 用操作员地址调用`setOperator()` 设置操作员。 -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (可选)为了控制奖励的分配和战略性地吸引委托人,索引人可以通过更新他们的`索引人奖励削减`(百万分之一)、`查询费用削减`(百万分之一)和`冷却周期区块`(区块数)来更新他们的委托参数。 要实现这一目的需要调用 `setDelegationParameters()`。 以下示例设置`查询费用削减`将 95% 的查询返利分配给索引人,5% 给委托人,设置`索引人奖励削减`将 60% 的索引奖励分配给索引人,将 40% 分配给委托人,并将`冷却周期区块`设置为 500 个。 ``` setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### 设置委托参数 -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. 
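As a sanity check on the parts-per-million arithmetic used by `setDelegationParameters()`, the following sketch (plain Python, no contract calls; the 1000 GRT pool is an arbitrary illustration) computes the Indexer/Delegator split implied by the example cuts above:

```python
PPM = 1_000_000  # setDelegationParameters() expresses cuts in parts per million

def split_rewards(total: float, cut_ppm: int) -> tuple[float, float]:
    """Return (indexer_share, delegator_share) for a cut given in ppm."""
    indexer_share = total * cut_ppm / PPM
    return indexer_share, total - indexer_share

# Mirroring setDelegationParameters(950000, 600000, 500) from the example:
# a 950000 ppm queryFeeCut keeps 95% of query rebates for the Indexer,
# a 600000 ppm indexingRewardCut keeps 60% of indexing rewards.
print(split_rewards(1000.0, 950_000))  # (950.0, 50.0)
print(split_rewards(1000.0, 600_000))  # (600.0, 400.0)
```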
+[质押合约](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) 中的`setDelegationParameters()`函数对索引人至关重要,允许他们设置参数来定义他们与委托人的交互,从而影响他们的奖励共享和委托能力。 -### How to set delegation parameters +### 如何设置委托参数 -To set the delegation parameters using Graph Explorer interface, follow these steps: +要使用Graph Explorer界面设置委派参数,请执行以下步骤: -1. Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. Connect the wallet you have as a signer. -4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. 导航到 [Graph Explorer](https://thegraph.com/explorer/)。 +2. 连接你的钱包。选择multisig(如Gnosis Safe),然后选择主网。注意:您需要对Arbitrum One重复此过程。 +3. 连接您作为签名者的钱包。 +4. '设置'部分,然后选择'委托参数'。这些参数应配置为在所需范围内实现有效切割。在提供的输入字段中输入值后,界面将自动计算有效切割。根据需要调整这些值,以达到所需的有效切割百分比。 +5. 将交易提交到网络。 -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> 注意:此交易需要由多重签名钱包签名者确认。 -### The life of an allocation +### 分配的生命周期 -After being created by an Indexer a healthy allocation goes through two states. +在被索引人创建之后,健康的分配会经历两个状态。 -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. 
The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - 一旦在链上创建了分配 ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) ,它就被认为是**active**的。索引人自己和/或委托的一部分份额分配给子图部署,这允许他们申请索引奖励并为该子图部署提供查询。索引人代理根据索引人规则管理创建分配。 -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). +- **Closed** - 一旦经过1个时期,索引人就可以自由关闭分配([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)),或者他们的索引人代理将在**maxAllocationEpochs**(目前为28天)后自动关闭分配。当分配以有效的索引证明(POI)关闭时,其索引奖励将分配给索引人及其委托人([了解更多](/indexing/overview/#how-are-indexing-rewards-distributed))。 -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+建议索引人在链上创建分配之前,利用链外同步功能将子图部署同步到链头。对于可能需要超过28个时期才能同步或有一些无法确定失败的机会的子图,此功能特别有用。 diff --git a/website/src/pages/zh/indexing/supported-network-requirements.mdx b/website/src/pages/zh/indexing/supported-network-requirements.mdx index 31ca8ba7ecf4..ee6a45f9c2dd 100644 --- a/website/src/pages/zh/indexing/supported-network-requirements.mdx +++ b/website/src/pages/zh/indexing/supported-network-requirements.mdx @@ -1,18 +1,18 @@ --- -title: Supported Network Requirements +title: 支持的网络要求 --- -| Network | Guides | System Requirements | Indexing Rewards | +| 网络 | 指南 | 系统需求 | 索引奖励 | | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 核心/8 线程 CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_最后更新于 2023年8月_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ 核心CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME 首选)
_最后更新2024年5月14日_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 核心 / 16线程 CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_最新更新于2024年6月22日_ | ✅ | | Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| 以太坊 | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| 以太坊 | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | 时钟速度高于内核数
Ubuntu 22.04
16GB+RAM
>=3TB(建议使用NVMe)
_最后更新于2023年8月_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 核心/8 线程 CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_最后更新于 2023年8月_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 核心/12 线程 CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_最后更新于 2023年8月_ | ✅ | | Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 核心/8 线程 CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_最后更新于2023年8月_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_最新更新于2023年8月_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_最后更新于2024年4月3日_ | ✅ | diff --git a/website/src/pages/zh/indexing/tap.mdx b/website/src/pages/zh/indexing/tap.mdx index de09d72fa74a..8a3aa1a43d1a 100644 --- a/website/src/pages/zh/indexing/tap.mdx +++ b/website/src/pages/zh/indexing/tap.mdx @@ -1,102 +1,102 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +了解The Graph的新支付系统**GraphTally**[(先前的时间线聚合协议,TAP)](https://docs.rs/tap_core/latest/tap_core/index.html)。这个系统提供了快速、高效的微交易,并最大限度地减少了信任需求。 ## 概述 -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally是目前Scalar支付系统的直接替代品。它提供以下关键功能:
You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. +对于每个查询,网关将向您发送一个存储在您数据库中的`signed receipt`(签名收据)。然后,这些收据将由`tap-agent`通过请求进行聚合。之后,您将收到一个RAV。您可以通过发送带有较新收据的RAV来更新它,这将生成一个价值增加的新RAV。 -### RAV Details +### RAV详情 -- It’s money that is waiting to be sent to the blockchain. +- 这是等待发送到区块链的钱。 -- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. +- 它将继续发送汇总请求,并确保非汇总收据的总价值不超过`amount willing to lose`。 -- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. +- 每个RAV在合约中只能兑换一次,这就是为什么它们在分配关闭后才被发送的原因。 -### Redeeming RAV +### 兑换RAV -As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: +只要您运行`tap-agent`和`indexer-agent`,所有操作都将自动执行。以下提供了该过程的详细分解: -1. An Indexer closes allocation. +1. 索引人关闭分配。 -2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. +2. 在``期间,`tap-agent`会获取该特定分配的所有待处理收据,并请求将其聚合到一个RAV中,将其标记为`last`。 -3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. +3. `indexer-agent`获取所有最后的RAV,并向区块链发送兑换请求,区块链将更新`redeem_at`的值。 -4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. +4. 在``期间,`indexer-agent`监控区块链是否发生了回退交易的重组。 - - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`.
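A rough model of the receipt-to-RAV flow described above (the function name and the integer receipt values are hypothetical; the real `tap-agent` aggregates signed receipts through the sender's aggregator endpoint):

```python
def aggregate(rav_value: int, receipts: list[int], max_willing_to_lose: int) -> int:
    """Fold pending receipts into the RAV.

    The unaggregated (at-risk) value must never exceed what the
    Indexer is willing to lose; a newer RAV always has a higher value.
    """
    pending = sum(receipts)
    if pending > max_willing_to_lose:
        raise RuntimeError("too much unaggregated value at risk")
    return rav_value + pending

rav = aggregate(0, [5, 7, 3], max_willing_to_lose=20)
print(rav)  # 15
rav = aggregate(rav, [4], max_willing_to_lose=20)
print(rav)  # 19
```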
+ - 如果交易被回退,RAV将重新发送到区块链。如果没有被回退,它将被标记为`final`。 -## Blockchain Addresses +## 区块链地址 -### Contracts +### 合约 -| Contract | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | +| 合约 | Arbitrum 主网 (42161) | Arbitrum Sepolia (421614) | | ------------------- | -------------------------------------------- | -------------------------------------------- | -| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| TAPVerifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | | AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | | Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | -### Gateway +### 网关 -| Component | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | -| ---------- | --------------------------------------------- | --------------------------------------------- | -| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | -| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | -| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | +| 组件 | Edge and Node主网(Arbitrum 主网) | Edge and Node测试网(Arbitrum Sepolia) | +| ------ | --------------------------------------------- | --------------------------------------------- | +| 发送人 | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| 签字人 | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| 聚合器 | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### 要求 +### 先决条件 -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates.
You can use The Graph Network to query or host yourself on your `graph-node`. +除了运行索引人的典型要求外,您还需要一个`tap-escrow-subgraph`端点来查询TAP更新。您可以使用The Graph网络查询,也可以在自己的`graph-node`上托管。 -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (适用The Graph测试网)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (适用The Graph主网)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. +> 注意:`indexer-agent`目前不能像处理网络子图部署那样处理此子图的索引。因此,您必须手动对其进行索引。 -## Migration Guide +## 迁移指南 -### Software versions +### 软件版本 -The required software version can be found [here](https://github.com/graphprotocol/indexer/blob/main/docs/networks/arbitrum-one.md#latest-releases). +所需的软件版本可以在[此处](https://github.com/graphprotocol/indexer/blob/main/docs/networks/arbitrum-one.md#latest-releases)找到。 -### Steps +### 步骤 -1. **Indexer Agent** - - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. +1. **索引人代理** - 遵循[相同的过程](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components)。 - 提供新参数`--tap-subgraph-endpoint`以激活新的GraphTally代码路径并启用RAV兑换。 -2.
**索引人服务** - - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). - - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + - 用[新的索引人服务](https://github.com/graphprotocol/indexer-rs)完全替换当前的配置。建议您使用[容器映像](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs)。 + - 与旧版本一样,您可以轻松地水平扩展索引人服务。它仍然是无状态的。 -3. **TAP Agent** +3. **TAP代理** - - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - 运行 _一个_ [TAP代理](https://github.com/graphprotocol/indexer-rs)实例。建议您使用[容器映像](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs)。 -4. **Configure Indexer Service and TAP Agent** +4. **配置索引人服务和TAP代理** - Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + 配置是`索引人服务`和`tap代理`之间共享的TOML文件,帶上参数 `--config /path/to/config.toml`。 - Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + 查看完整[配置](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml)和[默认值](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) -For minimal configuration, use the following template: +对于最小配置,请使用以下模板: ```bash # You will have to change *all* the values below to match your setup. @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. 
query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" @@ -170,24 +170,24 @@ max_amount_willing_to_lose_grt = 20 注意: -- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/indexing/tap/#gateway). -- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/indexing/tap/#contracts) using the appropriate chain id. +- `tap.sender_aggregator_endpoints`的值可以在[网关部分](/indexing/tap/#gateway)找到。 +- `blockchain.receipts_verifier_address`的值必须根据[区块链地址部分](/indexing/tap/#contracts),使用相应的链id进行设置。 -**Log Level** +**日志级别** -- You can set the log level by using the `RUST_LOG` environment variable. -- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. +- 您可以使用`RUST_LOG`环境变量设置日志级别。 +- 建议您将其设置为 `RUST_LOG=indexer_tap_agent=debug,info`。 -## Monitoring +## 监测 ### Metrics -All components expose the port 7300 to be queried by prometheus. +所有组件都将端口7300暴露给prometheus进行查询。 ### Grafana Dashboard -You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import.
+您可以下载[Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) 并导入。 ### Launchpad -Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/main/charts/graph-network-indexer) +目前,有一个WIP版本的 `indexer-rs`和`tap-agent`可以在[这里](https://github.com/graphops/launchpad-charts/tree/main/charts/graph-network-indexer)找到。 diff --git a/website/src/pages/zh/indexing/tooling/firehose.mdx b/website/src/pages/zh/indexing/tooling/firehose.mdx index 6e8c9b46a23c..80c8abe71e82 100644 --- a/website/src/pages/zh/indexing/tooling/firehose.mdx +++ b/website/src/pages/zh/indexing/tooling/firehose.mdx @@ -2,23 +2,23 @@ title: Firehose --- -![Firehose Logo](/img/firehose-logo.png) +![Firehose 标志](/img/firehose-logo.png) -Firehose is a new technology developed by StreamingFast working with The Graph Foundation. The product provides **previously unseen capabilities and speeds for indexing blockchain data** using a files-based and streaming-first approach. +Firehose是StreamingFast与The Graph基金会合作开发的一项新技术。该产品采用基于文件和流式优先的方法,为区块链数据索引提供了**前所未有的能力和速度**。 -The Graph merges into Go Ethereum/geth with the adoption of [Live Tracer with v1.14.0 release](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0). +随着[Live Tracer v1.14.0版本](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0)的采用,The Graph并入了Go Ethereum/geth。 -Firehose extracts, transforms and saves blockchain data in a highly performant file-based strategy. Blockchain developers can then access data extracted by Firehose through binary data streams. Firehose is intended to stand as a replacement for The Graph’s original blockchain data extraction layer.
+Firehose以高性能的基于文件的策略提取、转换和保存区块链数据。然后,区块链开发人员可以通过二进制数据流访问Firehose提取的数据。Firehose旨在替代Graph的原始区块链数据提取层。 -## Firehose Documentation +## Firehose文档 -The Firehose documentation is currently maintained by the StreamingFast team [on the StreamingFast website](https://firehose.streamingfast.io/). +目前,Firehose文档由StreamingFast团队[在StreamingFast网站上](https://firehose.streamingfast.io/)维护。 ### 开始 -- Read this [Firehose introduction](https://firehose.streamingfast.io/introduction/firehose-overview) to get an overview of what it is and why it was built. -- Learn about the [Prerequisites](https://firehose.streamingfast.io/introduction/prerequisites) to install and deploy Firehose. +- 阅读此[Firehose介绍](https://firehose.streamingfast.io/introduction/firehose-overview),了解它是什么以及为什么建造它。 +- 了解安装和部署Firehose的[先决条件](https://firehose.streamingfast.io/introduction/prerequisites)。 ### 知识拓展 -- Learn about the different [Firehose components](https://firehose.streamingfast.io/architecture/components) available. +- 了解可用的不同[Firehose组件](https://firehose.streamingfast.io/architecture/components)。 diff --git a/website/src/pages/zh/indexing/tooling/graph-node.mdx b/website/src/pages/zh/indexing/tooling/graph-node.mdx index c8e88c9a82b9..b151102f4324 100644 --- a/website/src/pages/zh/indexing/tooling/graph-node.mdx +++ b/website/src/pages/zh/indexing/tooling/graph-node.mdx @@ -2,15 +2,15 @@ title: Graph 节点 --- -Graph节点是索引子图的组件,并使生成的数据可通过GraphQL API进行查询。因此,它是索引器堆栈的中心,Graph节点的正确运作对于运行成功的索引器至关重要。 +Graph节点是索引子图的组件,并使生成的数据可通过GraphQL API进行查询。因此,它是索引人堆栈的中心,Graph节点的正确运作对于运行成功的索引人至关重要。 -This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). 
+这提供了Graph 节点的上下文概述,以及索引人可用的一些更高级的选项。详细的文档和说明可以在[Graph节点存储库](https://github.com/graphprotocol/graph-node)中找到。 -## Graph 节点 +## Graph节点 -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node)是The Graph网络上索引子图、连接到区块链客户端、索引子图并使索引数据可供查询的参考实现。 -Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). +Graph节点(以及整个索引人堆栈)可以在裸机上运行,也可以在云环境中运行。中央索引组件的这种灵活性对于Graph协议的健壮性至关重要。同样,Graph节点可以从[源代码构建](https://github.com/graphprotocol/graph-node),或者索引人可以使用[提供的Docker镜像](https://hub.docker.com/r/graphprotocol/graph-node)之一。 ### PostgreSQL 数据库 @@ -20,9 +20,9 @@ Graph节点的主存储区,这是存储子图数据、子图元数据以及子 为了索引网络,Graph节点需要通过以太坊兼容的JSON-RPC访问网络客户端。此RPC可能连接到单个以太坊客户端,也可能是跨多个客户端进行负载平衡的更复杂的设置。 -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
+虽然有些子图可能只需要一个完整的节点,但有些子图的索引功能可能需要额外的RPC功能。特别是,将`eth_calls`作为索引的一部分的子图需要一个支持[EIP-1898](https://eips.ethereum.org/EIPS/eip-1898)的归档节点,而带有`callHandlers`或带有`调用`筛选器的`blockHandlers`的子图则需要`trace_filter`支持[(请参阅此处的跟踪模块文档)](https://openethereum.github.io/JSONRPC-trace-module)。 -**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). +**Network Firehose**-Firehose是一种gRPC服务,提供有序但具有分叉意识的块流,由Graph的核心开发人员开发,以更好地支持大规模的高性能索引。这目前不是索引人的要求,但鼓励索引人在完全网络支持之前熟悉该技术。点击[此处](https://firehose.streamingfast.io/)了解更多关于Firehose的信息。 ### IPFS节点 @@ -32,9 +32,9 @@ While some subgraphs may just require a full node, some may have indexing featur 为了实现监控和报告,Graph节点可以选择将指标记录到Prometheus指标服务器。 -### Getting started from source +### 从来源开始 -#### Install prerequisites +#### 安装先决条件 - **Rust** @@ -42,15 +42,15 @@ While some subgraphs may just require a full node, some may have indexing featur - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Ubuntu 用户的附加要求** - 要在 Ubuntu 上运行 Graph 节点,可能需要一些附加的软件包。 ```sh sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### 设置 -1. Start a PostgreSQL database server +1. 启动 PostgreSQL 数据库服务器 ```sh initdb -D .postgres @@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. 克隆[Graph节点](https://github.com/graphprotocol/graph-node)仓库,通过运行`cargo build`构建源代码。 -3. Now that all the dependencies are setup, start the Graph Node: +3. 
现在,所有的依赖关系都已设置完毕,启动 Graph节点。 ```sh cargo run -p graph-node --release -- \ @@ -71,35 +71,35 @@ cargo run -p graph-node --release -- \ ### Kubernetes入门 -A complete Kubernetes example configuration can be found in the [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s). +完整的Kubernetes示例配置可以在[索引人存储库](https://github.com/graphprotocol/indexer/tree/main/k8s)中找到。 ### 端口 当运行Graph Node时,会暴露以下端口: -| Port | Purpose | Routes | CLI Argument | Environment Variable | +| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(用于管理部署) | / | \--admin-port | - | +| 8030 | 子图索引状态 API | /graphql | \--index-node-port | - | +| 8040 | Prometheus 指标 | /metrics | \--metrics-port | - | -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. +> **重要**: 公开暴露端口时要小心 - **管理端口** 应保持锁定。 这包括下面详述的 Graph节点 JSON-RPC。 ## 高级 Graph 节点配置 最简单的是,Graph节点可以使用Graph节点的单个实例、单个PostgreSQL数据库、IPFS节点和要索引的子图所需的网络客户端来操作。 -This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. +通过添加多个Graph节点和多个数据库以支持这些Graph节点,可以水平扩展此设置。高级用户可能希望通过`config.toml`文件和Graph节点的环境变量,利用Graph节点的一些水平扩展功能以及一些更高级的配置选项。 ### `config.toml` -A [TOML](https://toml.io/en/) configuration file can be used to set more complex configurations than those exposed in the CLI. The location of the file is passed with the --config command line switch. +一个[TOML](https://toml.io/en/)配置文件可用于设置比CLI中公开的配置更复杂的配置。文件的位置通过--config命令行开关传递。 > 使用配置文件时,不能使用选项--postgres-url、--postgres-secondary-hosts和--postgres-host-weights。 -A minimal `config.toml` file can be provided; the following file is equivalent to using the --postgres-url command line option: +可以提供最小的`config.toml`文件;以下文件等效于使用--postgres-url命令行选项: ```toml [store] @@ -110,17 +110,17 @@ connection="<.. postgres-url argument ..>" indexers = [ "<.. list of all indexing nodes ..>" ] ``` -Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). 
+`config.toml` 的完整文档可以在[Graph节点文档](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md)中找到。 #### 多个 Graph 节点 -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph节点索引可以水平扩展,运行Graph节点的多个实例,将索引和查询拆分到不同的节点上。这可以通过在启动时运行配置了不同`node_id`的Graph节点来完成(例如在Docker Compose文件中),然后可以在`config.toml`文件中使用它来指定[专用查询节点](#dedicated-query-nodes)、[区块摄取器](#dedicated-block-ingestion),并使用[部署规则](#deployment-rules)在节点之间拆分子图。 > 请注意,可以将多个Graph节点配置为使用同一个数据库,该数据库本身可以通过分片进行水平扩展。 #### 部署规则 -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision.
+给定多个Graph节点,有必要管理新子图的部署,以便同一子图不会被两个不同的节点索引,这会导致冲突。这可以通过使用部署规则来实现;如果正在使用数据库分片,部署规则还可以指定子图的数据应该存储在哪个`shard`(分片)中。部署规则可以与子图名称和部署所索引的网络相匹配,以便做出决策。 部署规则配置示例: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -150,7 +150,7 @@ indexers = [ ] ``` -Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). +在[此处](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment)阅读有关部署规则的更多信息。 #### 专用查询节点 @@ -167,7 +167,7 @@ query = "" 对于大多数用例,单个Postgres数据库足以支持graph节点实例。当一个graph节点实例超过一个Postgres数据库时,可以将graph节点的数据存储拆分到多个Postgres数据库中。所有数据库一起构成graph节点实例的存储。每个单独的数据库都称为分片。 -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+分片可用于在多个数据库中拆分子图部署,也可用于使用副本在数据库之间分散查询负载。这包括配置每个`graph-node` 应在其连接池中为每个数据库保留的可用数据库连接数,随着索引的子图越来越多,这一点变得越来越重要。 当您的现有数据库无法跟上Graph节点给它带来的负载时,以及当无法再增加数据库大小时,分片变得非常有用。 @@ -175,11 +175,11 @@ Shards can be used to split subgraph deployments across multiple databases, and 在配置连接方面,首先将 postgresql.conf 中的 max_connections 设置为400(或甚至200),然后查看 store_connection_wait_time_ms 和 store_connecion_checkout_count Prometheus 度量。明显的等待时间(任何超过5ms的时间)表明可用连接太少;高等待时间也将由数据库非常繁忙(如高CPU负载)引起。然而,如果数据库在其他方面看起来很稳定,那么高等待时间表明需要增加连接数量。在配置中,每个graph节点实例可以使用的连接数是一个上限,如果不需要,Graph节点将不会保持连接打开。 -Read more about store configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases). +在[此处](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases)阅读有关存储配置的更多信息。 #### 专用区块摄取 -If there are multiple nodes configured, it will be necessary to specify one node which is responsible for ingestion of new blocks, so that all configured index nodes aren't polling the chain head. This is done as part of the `chains` namespace, specifying the `node_id` to be used for block ingestion: +如果配置了多个节点,则需要指定一个负责接收新区块的节点,这样所有配置的索引节点都不会轮询链头。这是作为`chains`命名空间的一部分完成的,指定用于区块摄取的`node_id`: ```toml [chains] @@ -188,13 +188,13 @@ ingestor = "block_ingestor_node" #### 支持多个网络 -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +Graph协议正在增加支持索引奖励的网络数量,并且存在许多索引不支持的网络的子图,索引人希望处理这些子图。`config.toml`文件允许对以下内容进行富有表达力且灵活的配置: - 多个网络。 - 每个网络有多个提供程序(这可以允许跨提供程序分配负载,也可以允许配置完整节点和归档节点,如果给定的工作负载允许,Graph Node更喜欢便宜些的提供程序)。 - 其他提供商详细信息,如特征、身份验证和提供程序类型(用于实验性Firehose支持)。 -The `[chains]` section controls the ethereum providers that graph-node connects to, and where blocks and other metadata for each chain are stored.
The following example configures two chains, mainnet and kovan, where blocks for mainnet are stored in the vip shard and blocks for kovan are stored in the primary shard. The mainnet chain can use two different providers, whereas kovan only has one provider. +`[chains]`部分控制graph节点连接到的以太坊提供程序,以及每个链的区块和其他元数据的存储位置。以下示例配置了两个链,mainnet和kovan,其中mainnet的区块存储在vip分片中,而kovan的区块则存储在主分片中。主网链可以使用两个不同的提供商,而kovan只有一个提供商。 ```toml [chains] @@ -210,18 +210,18 @@ shard = "primary" provider = [ { label = "kovan", url = "http://..", features = [] } ] ``` -Read more about provider configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers). +在[此处](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers)阅读有关配置提供商的更多信息。 ### 环境变量 -Graph Node supports a range of environment variables which can enable features, or change Graph Node behaviour. These are documented [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md). +Graph节点支持一系列环境变量,这些变量可以启用功能或更改Graph节点行为。这些都记录在[这里](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md)。 ### 持续部署 使用高级配置操作缩放索引设置的用户可以从使用Kubernetes管理Graph节点中受益。 -- The indexer repository has an [example Kubernetes reference](https://github.com/graphprotocol/indexer/tree/main/k8s) -- [Launchpad](https://docs.graphops.xyz/launchpad/intro) is a toolkit for running a Graph Protocol Indexer on Kubernetes maintained by GraphOps. It provides a set of Helm charts and a CLI to manage a Graph Node deployment.
+- 索引人存储库有一个[Kubernetes参考示例](https://github.com/graphprotocol/indexer/tree/main/k8s) +- [Launchpad](https://docs.graphops.xyz/launchpad/intro)是一个由GraphOps维护的工具包,用于在Kubernetes上运行Graph协议索引人。它提供了一组Helm图表和一个CLI来管理Graph节点部署。 ### 管理Graph节点 @@ -229,23 +229,23 @@ Graph Node supports a range of environment variables which can enable features, #### 日志 -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph节点的日志可以为Graph节点和特定子图的调试和优化提供有用的信息。Graph节点通过`GRAPH_LOG`环境变量支持不同的日志级别,具有以下级别:error、warn、info、debug 或 trace。 -In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). +此外,将`GRAPH_LOG_QUERY_TIMING`设置为`gql`提供了有关GraphQL查询如何运行的更多详细信息(尽管这将生成大量日志)。 -#### Monitoring & alerting +#### 监控&警报 默认情况下,Graph Node通过8040端口上的Prometheus端点提供指标。然后可以使用Grafana来可视化这些指标。 -The indexer repository provides an [example Grafana configuration](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml). +索引人存储库提供了一个[Grafana配置示例](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml)。 #### Graphman -`graphman` is a maintenance tool for Graph Node, helping with diagnosis and resolution of different day-to-day and exceptional tasks. +`graphman`是Graph节点的维护工具,帮助诊断和解决不同的日常和异常任务。 -The graphman command is included in the official containers, and you can docker exec into your graph-node container to run it. It requires a `config.toml` file. +graphman 命令包含在官方容器中,您可以通过 docker exec 进入 graph-node 容器来运行它。它需要一个`config.toml`文件。 -Full documentation of `graphman` commands is available in the Graph Node repository.
See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` +Graph节点存储库中提供了`graphman`命令的完整文档。参见 Graph节点 `/docs` 中的[/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md)。 ### 使用子图 @@ -253,7 +253,7 @@ Full documentation of `graphman` commands is available in the Graph Node reposit 默认情况下,在端口8030/graphql上可用,索引状态API公开了一系列方法,用于检查不同子图的索引状态、检查索引证明、检查子图特征等。 -The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). +完整的模式可以在[这里](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql)找到。 #### 索引性能 @@ -267,8 +267,8 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ 索引速度慢的常见原因: -- Time taken to find relevant events from the chain (call handlers in particular can be slow, given the reliance on `trace_filter`) -- Making large numbers of `eth_calls` as part of handlers +- 从链中查找相关事件所需的时间(特别是调用处理程序可能慢,因为依赖 `trace_filter`) +- 作为处理程序的一部分进行大量的`eth_calls` - 执行期间大量的存储交互 - 要保存到存储的大量数据 - 要处理的大量事件 @@ -287,24 +287,24 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ 在某些情况下,索引人可能会解决故障(例如,如果错误是由于没有正确类型的提供程序导致的,则添加所需的提供程序将允许继续索引)。然而,在其他情况下,需要更改子图代码。 -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> 确定性故障被视为“最终”,并为故障区块生成索引证明,而非确定性故障则不是,因为子图可能会“解除失败”并继续索引。在某些情况下,非确定性标签是不正确的,子图永远无法克服错误;此类故障应报告为Graph节点存储库中的问题。 #### 区块和调用缓存 -Graph Node caches certain data in the store in order to save refetching from the provider.
Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph节点在存储中缓存某些数据,以避免从提供程序重新获取。区块会被缓存,`eth_calls`的结果也会被缓存(后者自特定区块起被缓存)。这种缓存可以在稍作修改的子图的“重新同步”期间显著提高索引速度。 -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +然而,在某些情况下,如果以太坊节点在一段时间内提供了错误的数据,这些数据可能会进入缓存,导致错误的数据或子图失败。在这种情况下,索引人可以使用`graphman`清除被污染的缓存,然后倒回受影响的子图,这些子图随后将从(希望)健康的提供程序获取新数据。 如果怀疑区块缓存不一致,例如tx收据丢失事件: -1. `graphman chain list` to find the chain name. -2. `graphman chain check-blocks by-number ` will check if the cached block matches the provider, and deletes the block from the cache if it doesn’t. - 1. If there is a difference, it may be safer to truncate the whole cache with `graphman chain truncate `. +1. `graphman chain list` 以查找链名称。 +2. `graphman chain check-blocks by-number ` 将检查缓存的区块是否与提供程序匹配,如果不匹配,则从缓存中删除该区块。 + 1. 如果存在差异,则使用 `graphman chain truncate ` 截断整个缓存可能更安全。 2. 如果区块与提供程序匹配,则可以直接针对提供程序调试问题。 #### 查询问题和错误 -一旦子图被索引,索引器就可以期望通过子图的专用查询端点来服务查询。如果索引器希望为大量查询量提供服务,建议使用专用查询节点,如果查询量非常大,索引器可能需要配置副本分片,以便查询不会影响索引过程。 +一旦子图被索引,索引人就可以期望通过子图的专用查询端点来服务查询。如果索引人希望为大量查询量提供服务,建议使用专用查询节点;如果查询量非常大,索引人可能需要配置副本分片,以便查询不会影响索引过程。 然而,即使使用专用的查询节点和副本,某些查询也可能需要很长时间才能执行,在某些情况下还会增加内存使用量,并对其他用户的查询时间产生负面影响。 @@ -312,7 +312,7 @@ However, in some instances, if an Ethereum node has provided incorrect data for ##### 查询缓存 -Graph Node caches GraphQL queries by default, which can significantly reduce database load.
This can be further configured with the `GRAPH_QUERY_CACHE_BLOCKS` and `GRAPH_QUERY_CACHE_MAX_MEM` settings - read more [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching). +Graph节点默认缓存GraphQL查询,这可以显著减少数据库负载。这可以通过`GRAPH_QUERY_CACHE_BLOCKS`和`GRAPH_QUERY_CACHE_MAX_MEM`设置进行进一步配置-在[此处](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching)阅读更多信息。 ##### 分析查询 @@ -320,7 +320,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat 在其他情况下,触发因素可能是查询节点上的高内存使用率,在这种情况下,首要挑战是要确定导致问题的查询。 -Indexers can use [qlog](https://github.com/graphprotocol/qlog/) to process and summarize Graph Node's query logs. `GRAPH_LOG_QUERY_TIMING` can also be enabled to help identify and debug slow queries. +索引人可以使用[qlog](https://github.com/graphprotocol/qlog/)来处理和汇总Graph节点的查询日志。还可以启用`GRAPH_LOG_QUERY_TIMING`来帮助识别和调试慢速查询。 针对慢查询,索引人有几个选项。当然,他们可以改变成本模型,显著增加发送有问题查询的成本。这可能导致该查询的频率降低。然而,这通常并不能解决问题的根本原因。 @@ -328,18 +328,18 @@ Indexers can use [qlog](https://github.com/graphprotocol/qlog/) to process and s 存储实体的数据库表通常有两种类型:“类交易”,即实体一旦创建,就永远不会更新,即存储类似于金融交易列表的内容;“类账户”,即经常更新实体,即存储每次记录交易时都会修改的类似金融账户的内容。类账户表的特点是,它们包含大量实体版本,但不同的实体相对较少。通常,在这种表中,不同实体的数量是行总数的1%(实体版本)。 -For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table. +对于类似账户的表,`graph-node` 可以生成查询,利用Postgres在如此高变化率下存储数据的细节,即最近区块的所有版本都位于该表整体存储的一小部分中。 The command `graphman stats show shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude.
A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity. -In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show ` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions. +一般来说,不同实体的数量少于行/实体版本总数1%的表,是类似帐户优化的良好候选者。当`graphman stats show`的输出表明某个表可能会从这种优化中受益时,运行`graphman stats show `将对该表执行完整计数-这可能很慢,但可以精确衡量不同实体与整体实体版本的比率。 -Once a table has been determined to be account-like, running `graphman stats account-like .` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. +一旦确定某个表是类似帐户的,运行`graphman stats account-like .`将为针对该表的查询打开类似帐户的优化。可以使用`graphman stats account-like --clear .`再次关闭该优化。查询节点最多需要5分钟才能注意到优化已打开或关闭。打开优化后,有必要验证更改是否确实不会使该表的查询速度变慢。如果您已将Grafana配置为监视Postgres,那么慢速查询将在`pg_stat_activity`中大量显示,需要几秒钟的时间。在这种情况下,需要再次关闭优化。 -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +对于类似Uniswap的子图,`pair` 和 `token`表是这种优化的主要候选项,并且可以对数据库负载产生显著影响。 #### 删除子图 > 这是一项新功能,将在Graph节点0.29.x中提供。 -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +在某个时刻,索引人可能想要删除给定的子图。这可以通过`graphman drop`轻松完成,该命令会删除一个部署及其所有索引数据。部署可以被指定为子图名称、IPFS哈希`Qm..`,或数据库命名空间`sgdNNN`。[此处](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop)提供了更多文档。 diff --git a/website/src/pages/zh/indexing/tooling/graphcast.mdx b/website/src/pages/zh/indexing/tooling/graphcast.mdx index 6e29da450727..06142f7cbb9c 100644 --- a/website/src/pages/zh/indexing/tooling/graphcast.mdx +++ b/website/src/pages/zh/indexing/tooling/graphcast.mdx @@ -10,11 +10,11 @@ title: Graphcast Graphcast SDK(软件开发工具包)允许开发人员构建Radio,这是一种使用gossip协议的应用程序,索引人可以运行这些应用程序来服务于特定的目的。我们还打算为以下用例创建一些Radio(或为希望构建Radio的其他开发人员/团队提供支持): -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
-- \-对来自其他索引人的warp同步中的子图、子流和Firehose数据进行拍卖和协调。 -- \-主动查询分析的自我报告,包括子图请求量、费用量等。 -- \-索引分析的自我报告,包括子图索引时间、处理程序gas成本、遇到的索引错误等。 -- \-自报栈信息,包括graph节点版本、Postgres版本、以太坊客户端版本等。 +- 子图数据完整性的实时交叉检查([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro))。 +- 对来自其他索引人的warp同步中的子图、子流和Firehose数据进行拍卖和协调。 +- 主动查询分析的自我报告,包括子图请求量、费用量等。 +- 索引分析的自我报告,包括子图索引时间、处理程序燃气成本、遇到的索引错误等。 +- 自报栈信息,包括graph节点版本、Postgres版本、以太坊客户端版本等。 ### 了解更多 diff --git a/website/src/pages/zh/resources/_meta-titles.json b/website/src/pages/zh/resources/_meta-titles.json index f5971e95a8f6..7c071e368667 100644 --- a/website/src/pages/zh/resources/_meta-titles.json +++ b/website/src/pages/zh/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "其他角色", + "migration-guides": "迁移指南" } diff --git a/website/src/pages/zh/resources/benefits.mdx b/website/src/pages/zh/resources/benefits.mdx index dc6336e1893a..24ed916085f2 100644 --- a/website/src/pages/zh/resources/benefits.mdx +++ b/website/src/pages/zh/resources/benefits.mdx @@ -1,70 +1,70 @@ --- -title: The Graph vs. Self Hosting +title: The Graph 与自托管 socialImage: https://thegraph.com/docs/img/seo/benefits.jpg --- Graph的去中心化网络经过精心设计和完善,创造了强大的索引和查询体验,由于世界各地成千上万的贡献者,它每天都在变得更好。 -The benefits of this decentralized protocol cannot be replicated by running a `graph-node` locally. The Graph Network is more reliable, more efficient, and less expensive. +这种去中心化协议的好处无法通过在本地运行`graph-node`来复制。The Graph网络更可靠、更高效、更便宜。 以下是分析: ## 为什么要使用Graph网络 -- Significantly lower monthly costs +- 显著降低每月成本 - 基础设施设置成本为0美元 - 超群的正常运行时间 -- Access to hundreds of independent Indexers around the world +- 访问全球数百个独立索引人 - 全球社区24/7的技术支持 ## 好处解释 -### Lower & more Flexible Cost Structure +### 更低及更灵活的成本结构 -No contracts. No monthly fees. Only pay for the queries you use—with an average cost-per-query of $40 per million queries (~$0.00004 per query). Queries are priced in USD and paid in GRT or credit card. 
+无需签订合同。没有月费。只为您使用的查询付费,平均查询成本为每百万次查询40美元(每次查询约0.00004美元)。查询以美元计价,以GRT或信用卡支付。 -Query costs may vary; the quoted cost is the average at time of publication (March 2024). +查询成本可能有所不同;报价成本为发布时(2024年3月)的平均值。 -## Low Volume User (less than 100,000 queries per month) +## 低用量用户(每月少于100,000次查询) | 成本比较 | 自托管 | Graph网络 | | :------------------: | :-------------------------------------: | :----------------------------------------: | | 每月服务器费用 \* | 每月350美元 | 0美元 | -| 查询成本 | $0+ | $0 per month | +| 查询成本 | $0+ | 每月0美元 | | 工程时间 | 400美元每月 | 没有,内置在具有全球去中心化索引者的网络中 | -| 每月查询 | 受限于基础设施能力 | 100,000 (Free Plan) | +| 每月查询 | 受限于基础设施能力 | 100,000 (免费计划) | | 每个查询的成本 | 0美元 | $0 | -| Infrastructure | 中心化 | 去中心化 | +| 基础设施 | 中心化 | 去中心化 | | 异地备援 | 每个额外节点 $750 + | 包括在内 | | 正常工作时间 | 变量 | 99.9%+ | | 每月总成本 | $750+ | 0美元 | -## Medium Volume User (~3M queries per month) +## 中等用量用户(每月约3M次查询) | 成本比较 | 自托管 | Graph网络 | | :------------------: | :----------------------------------------: | :----------------------------------------: | | 每月服务器费用 \* | 每月350美元 | 0美元 | -| 查询成本 | 每月500美元 | $120 per month | +| 查询成本 | 每月500美元 | 每月120美元 | | 工程时间 | 每月800美元 | 没有,内置在具有全球去中心化索引者的网络中 | | 每月查询 | 受限于基础设施能力 | ~3,000,000 | | 每个查询的成本 | 0美元 | $0.00004 | -| Infrastructure | 中心化 | 去中心化 | +| 基础设施 | 中心化 | 去中心化 | | 工程费用 | 每小时200美元 | 包括在内 | | 异地备援 | 每个额外节点的总成本为1200美元 | 包括在内 | | 正常工作时间 | 变量 | 99.9%+ | | 每月总成本 | $1,650+ | $120 | -## High Volume User (~30M queries per month) +## 高用量用户(每月约30M次查询) | 成本比较 | 自托管 | Graph网络 | | :------------------: | :-----------------------------------------: | :----------------------------------------: | | 每月服务器费用 \* | 1100美元每月每节点 | 0美元 | -| 查询成本 | 4000美元 | $1,200 per month | +| 查询成本 | 4000美元 | 每月1,200美元 | | 需要的节点数量 | 10 | 不适用 | | 工程时间 | 每月6000美元或以上 | 没有,内置在具有全球去中心化索引者的网络中 | | 每月查询 | 受限于基础设施能力 | ~30,000,000 | | 每个查询的成本 | 0美元 | $0.00004 | -| Infrastructure | 中心化 | 去中心化 | +| 基础设施 | 中心化 | 去中心化 | | 异地备援 | 每个额外节点的总成本为1200美元 | 包括在内 | | 正常工作时间 | 变量 | 99.9%+ | | 每月总成本 | $11,000+ | $1,200 | @@ -73,20 +73,20 @@
Query costs may vary; the quoted cost is the average at time of publication (Mar 按每小时200美元的假设计算的工程时间 -Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. +反映数据消费者的成本。对于免费计划查询,仍向索引人支付查询费。 -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +估计成本仅适用于以太坊主网子图——当在其他网络上自托管 `graph-node` 时,成本甚至更高。一些用户可能需要将其子图更新到新版本。由于以太坊的燃气费用,在撰写本文时,更新费用约为50美元。请注意,[Arbitrum](/archived/arbitrum/arbitrum-faq/) 上的燃气费用远低于以太坊主网。 -在一个子图上策划信号是一个可选一次性净零成本(例如,1千美元的信号可以在一个子图上管理,然后撤回ーー在这个过程中有可能获得回报)。 +在一个子图上策划信号是一个可选一次性净零成本(例如,1千美元的信号可以在一个子图上策展,然后撤回ーー在这个过程中有可能获得回报)。 -## No Setup Costs & Greater Operational Efficiency +## 无设置成本和更高的运行效率 零安装费。立即开始,没有设置或间接费用。没有硬件要求。没有由于集中式基础设施而导致的中断,并且有更多的时间专注于您的核心产品。不需要备份服务器、故障排除或昂贵的工程资源。 -## Reliability & Resiliency +## 可靠性和弹性 -The Graph’s decentralized network gives users access to geographic redundancy that does not exist when self-hosting a `graph-node`. Queries are served reliably thanks to 99.9%+ uptime, achieved by hundreds of independent Indexers securing the network globally. +The Graph的去中心化网络为用户提供了地理冗余的访问权限,这在自托管`graph-node`时是不存在的。由于99.9%以上的正常运行时间,查询得到了可靠的服务,这是由数百个独立的索引人在全球范围内保护网络所实现的。 -Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. +一句话: 与在本地运行一个`graph-node`相比,The Graph网络成本更低,更容易使用,并且产生更好的结果。 -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/).
+今天开始使用The Graph网络,学习如何[将您的子图发布到The Graph的去中心化网络](/subgraphs/quick-start/)。 diff --git a/website/src/pages/zh/resources/glossary.mdx b/website/src/pages/zh/resources/glossary.mdx index 98e473e0a8ae..bb993a1dffa7 100644 --- a/website/src/pages/zh/resources/glossary.mdx +++ b/website/src/pages/zh/resources/glossary.mdx @@ -2,82 +2,82 @@ title: 术语汇编 --- -- **The Graph**: A decentralized protocol for indexing and querying data. +- **The Graph**: 用于索引和查询数据的去中心化协议。 -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: 对数据的请求。对于The Graph,查询是从子图中请求数据,并由索引人回答。 -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: 用于 API 的查询语言,以及用现有数据实现这些查询的运行时。The Graph 使用 GraphQL 查询子图。 -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**:可用于查询子图的URL。Subgraph Studio的测试端点为`https://api.studio.thegraph.com/query///`,Graph Explorer端点为`https://gateway.thegraph.com/api//subgraphs/id/`。Graph Explorer端点用于查询The Graph去中心化网络上的子图。 -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**:一个开放的API,它从区块链中提取数据,对其进行处理并存储,以便通过GraphQL轻松查询。开发人员可以构建、部署和发布子图到The Graph网络。一旦它被索引,任何人都可以查询子图。 -- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries.
+- **Indexers**:网络参与者运行索引节点,从区块链索引数据并提供 GraphQL 查询。 -- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. +- **Indexer Revenue Streams**:索引人获得的 GRT 收入包括两个组成部分:查询费用回扣和索引奖励。 - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: 子图消费者为网络上的查询提供服务支付的费用。 - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: 索引人因为索引子图而获得的奖励。索引奖励是通过每年发行3% 的 GRT 来产生的。 -- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. +- **Indexer's Self-Stake**: 索引人为参与去中心化网络而质押的 GRT 金额。最低为100,000 GRT,并且没有上限。 -- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. +- **Delegation Capacity**:索引人可以从委托人那里接受的最大GRT量。索引人最多只能接受 16 倍的索引人自身质押,额外的委托会导致奖励稀释。例如,如果索引人的自身质押为 1M GRT,则其委托容量为 16M。但是,索引人可以通过增加自身质押来提高其委托能力。 -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**:一种后备索引人,用于处理网络上其他索引人未提供服务的子图查询。升级索引人与其他索引人没有竞争关系。 -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs.
+- **Delegators**: 拥有 GRT 并将其 GRT 委托给索引人的网络参与者。这使得索引人可以增加它们在网络子图中的份额。作为回报,委托方将获得索引人为处理子图而获得的索引奖励的一部分。 -- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. +- **Delegation Tax**: 委托人将 GRT 委托给索引人时支付的0.5% 的费用。用于支付费用的 GRT 将被销毁。 -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curators**: 识别高质量子图并在其上发出 GRT 信号以换取策展份额的网络参与者。当索引人索取子图上的查询费用时,10% 将分配给该子图的策展人。发出信号的 GRT 数量与索引该子图的索引人数量之间存在正相关。 -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: 当策展人在子图上发出 GRT 信号时,他们要支付1% 的费用。用于支付费用的 GRT 将被销毁。 -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: 查询子图的任何应用程序或用户。 -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: 构建并部署子图到The Graph 去中心化网络的开发人员。 -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**:一个描述子图的GraphQL模式、数据源和其他元数据的YAML文件。[这里](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) 有一个例子。 -- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. +- **Epoch**: 网络中的时间单位。一个时期目前为6,646个区块或大约1天。 -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network.
Allocations can have different statuses: +- **Allocation**: 一个索引人可以分配他们的总 GRT 份额(包括委托人的股份) 到已经部署在The Graph去中心化网络的子图。分配可以有不同的状态: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: 分配在链上创建时被认为是活跃的。这称为开启一个分配,并向网络表明索引人正在为特定子图建立索引并提供查询服务。活跃分配的索引奖励与子图上的信号以及分配的 GRT 的数量成正比。 - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**:索引人可以通过提交最近有效的索引证明(POI)来获得给定子图的累积索引奖励。这被称为关闭分配。分配必须至少开放一个时期,然后才能关闭。最大分配周期为28个时期。如果索引人在28个时期之后留下一个开放的分配,则称为陈旧分配。当分配处于**Closed**状态时,Fisherman仍然可以发起争议,质疑索引人提供虚假数据。 -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: 用于构建、部署和发布子图的强大 dapp。 -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). 
This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fisherman**:The Graph网络中的一个角色,由监控索引人所提供数据的准确性和完整性的参与者担任。当Fisherman识别出他们认为不正确的查询响应或POI时,他们可以向索引人发起争议。如果争议有利于Fisherman,那么索引人将失去2.5%的自身质押。在这笔金额中,50%作为赏金奖励给Fisherman,以表彰他们的警惕性,其余50%则从流通中移除(销毁)。该机制旨在鼓励Fisherman通过确保索引人对其提供的数据负责来帮助维护网络的可靠性。 -- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. +- **Arbitrators**: 仲裁员是通过治理流程任命的网络参与者。仲裁员的作用是决定索引和查询争议的结果。他们的目标是最大限度地提高The Graph网络的效用和可靠性。 -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. +- **Slashing**: 索引人可能因为提供了不正确的索引证明(POI) 或提供了不准确的数据而削减它们自身质押的 GRT。削减百分比是一个协议参数,目前设置为索引人自身质押的2.5% 。被削减的50% GRT归Fisherman,他们对不准确的数据或不正确的 POI 提出异议。剩下的50% 被销毁。 -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: 索引人因为索引子图而获得的奖励。索引奖励以 GRT 的形式分配。 -- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. +- **Delegation Rewards**: 委托人将 GRT 委托给索引人所获得的奖励。委托奖励以 GRT 的形式分配。 -- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network.
+- **GRT**: The Graph的工作效用代币。 GRT 为网络参与者提供经济激励,鼓励他们为网络做出贡献。 -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: 当一个索引人关闭他们的分配,并希望索取他们对特定子图的累计索引人奖励,他们必须提供一个有效的和最近的索引证明(POI)。Fishermen可以对索引人提供的 POI 提出异议。若Fisherman提出异议成功,将导致索引人被惩罚。 -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph节点是索引子图的组件,并使生成的数据可通过GraphQL API进行查询。因此,它是索引人堆栈的中心,Graph节点的正确操作对于运行成功的索引人至关重要。 -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: 索引人代理是索引人堆栈的一部分。它促进了索引人在链上的交互,包括在网络上注册、管理到其 Graph节点的子图部署以及分配管理。 -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. +- **The Graph Client**: 用于以去中心化方式构建基于 GraphQL 的 dapps 的库。 -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: 为网络参与者探索子图并与协议交互而设计的 dapp。 -- **Graph CLI**: A command line interface tool for building and deploying to The Graph. +- **Graph CLI**: 用于构建和部署到The Graph 的命令行界面工具。 -- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. +- **Cooldown Period**: 直到更改其委托参数的索引人可以再次进行此操作之前的剩余时间。 -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. 
Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: 智能合约和UI,使网络参与者能够将网络相关资产从以太坊主网转移到Arbitrum One。网络参与者可以转移委托的GRT、子图、策展份额和索引人的自身质押。 -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: 发布新子图版本的过程,其中包含对子图的清单、模式或映射的更新。 - **Migrating**: 策展份额从子图的旧版本移动到子图的新版本的过程(例如,从 v0.0.1 更新到 v0.0.2)。 diff --git a/website/src/pages/zh/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/zh/resources/migration-guides/assemblyscript-migration-guide.mdx index 45d64ae2ead8..831fc0625d2a 100644 --- a/website/src/pages/zh/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/zh/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,49 +2,49 @@ title: AssemblyScript 迁移指南 --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +到目前为止,子图一直在使用 [AssemblyScript 的第一个版本](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6) 之一。 最终,我们添加了对[最新版本](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) 的支持! 🎉 +这将使子图开发人员能够使用 AS 语言和标准库的更新特性。 -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`.
If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +本指南适用于使用 `0.22.0` 版本以下的 `graph-cli`/`graph-ts` 的任何人。 如果您已经使用了高于(或等于)该版本号的版本,那么您已经在使用 AssemblyScript 的 `0.19.10` 版本 🙂。 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> 注意:从 `0.24.0` 开始,`graph-node` 可以支持这两个版本,具体取决于子图清单文件中指定的 `apiVersion`。 ## 特征 ### 新功能 -- `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) -- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) -- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) -- Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) -- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) -- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) -- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -- Added support for template literal strings 
([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) -- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) -- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) -- Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) -- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) +- `TypedArray`s 现在可以使用[新的`wrap`静态方法](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1))基于`ArrayBuffer`s 构建 +- 新的标准库函数: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`和`TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- 增加了对 x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2))的支持 +- 添加了 `StaticArray`, 一种更高效的数组变体 ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) +- 增加了 `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- 在`Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1))上实现了`radix` 参数 +- 添加了对浮点文字中的分隔符的支持 ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) +- 添加了对一级函数的支持 ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- 添加内置函数:`i32/i64/f32/f64.add/sub/mul` ([ v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- 实现了`Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- 
添加了对模板文字字符串的支持([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) +- 添加了`encodeURI(Component)` 和 `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- 将 `toString`、`toDateString` 和 `toTimeString` 添加到 `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- 为`Date` 添加了`toUTCString`([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- 添加 `nonnull/NonNullable` 内置类型([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) ### 优化 -- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) -- Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) -- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- `Math` 函数,例如 `exp`、`exp2`、`log`、`log2` 和 `pow` 已替换为更快的变体 ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- 些许优化了`Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- 在 std Map 和 Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) 中缓存更多字段访问 +- 在 `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2))中优化二的幂运算 ### 其他 -- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +-
现在可以从数组内容中推断出数组文字的类型([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- 将 stdlib 更新为 Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) ## 如何升级? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. 将 `subgraph.yaml` 中的映射 `apiVersion` 更改为 `0.0.9`: ```yaml ... @@ -52,11 +52,11 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` -2. Update the `graph-cli` you're using to the `latest` version by running: +2. 通过运行以下命令,将您正在使用的 `graph-cli` 更新为 `latest` 版本: ```bash # if you have it globally installed @@ -66,14 +66,14 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: +3. 对 `graph-ts` 执行相同的操作,但不是全局安装,而是将其保存在您的主要依赖项中: ```bash npm install --save @graphprotocol/graph-ts@latest ``` -4. Follow the rest of the guide to fix the language breaking changes. -5. Run `codegen` and `deploy` again. +4. 参考指南的其余部分修复语言更改带来的问题。 +5. 
再次运行 `codegen` 和 `deploy`。 ## 重大变化 @@ -110,7 +110,7 @@ maybeValue.aMethod() ### 变量遮蔽 -Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: +以前,您可以进行[变量遮蔽](https://en.wikipedia.org/wiki/Variable_shadowing),这样的代码可以正常工作: ```typescript let a = 10 @@ -141,7 +141,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i in src/mappings/file.ts(41,21) ``` -To solve you can simply change the `if` statement to something like this: +要解决此问题,您只需将 `if` 语句更改为如下所示代码: ```typescript if (!decimals) { @@ -155,7 +155,7 @@ To solve you can simply change the `if` statement to something like this: ### 强制转换 -The common way to do casting before was to just use the `as` keyword, like this: +以前,进行强制转换的常用方法是使用 `as`关键字,如下所示: ```typescript let byteArray = new ByteArray(10) @@ -164,7 +164,7 @@ let uint8Array = byteArray as Uint8Array // equivalent to: byteArray 但是,这只适用于两种情况: -- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- 原始类型转换(在`u8`, `i32`, `bool`等类型之间; 例如: `let b: isize = 10; b as usize`); - 在类继承时向上转换(子类 → 超类) 例子: @@ -184,7 +184,7 @@ let bytes = new Bytes(2) // bytes // same as: bytes as Uint8Array ``` -There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: +在两种情况下,您可能希望进行类型转换,但使用 `as`/`var` **并不安全**: - 在类继承时向下转换(超类 → 子类) - 在共享超类的两种类型之间 @@ -206,7 +206,7 @@ let bytes = new Bytes(2) // bytes // breaks in runtime :( ``` -For those cases, you can use the `changetype` function: +对于这些情况,您可以使用 `changetype` 函数: ```typescript // downcasting on class inheritance @@ -238,7 +238,7 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂
+对于可空性情况,我们建议查看[可空性检查功能](https://www.assemblyscript.org/basics.html#nullability-checks),它会让您的代码更简洁 🙂 我们还在某些类型中添加了一些静态方法来简化转换,它们是: @@ -249,7 +249,7 @@ For the nullability case we recommend taking a look at the [nullability check fe ### 使用属性访问进行可空性检查 -To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: +要使用 [可空性检查功能](https://www.assemblyscript.org/basics.html#nullability-checks),您可以使用 `if` 语句或三元运算符(`?` 和 `:`),如下所示: ```typescript let something: string | null = 'data' @@ -267,7 +267,7 @@ if (something) { } ``` -However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: +但是,这仅在您对变量执行 `if` / 三元组而不是属性访问时才有效,如下所示: ```typescript class Container { @@ -381,7 +381,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. 
So you either initialize it first: +您需要确保初始化 `total.amount` 值,因为如果您尝试像最后一行代码一样求和,程序将崩溃。 所以你要么先初始化它: ```typescript let total = Total.load('latest') @@ -394,7 +394,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 +或者您可以更改您的 GraphQL 模式,不给此属性赋予可为空的类型,然后您在 `codegen` 步骤中将其初始化为零 😉 ```graphql type Total @entity { @@ -425,7 +425,7 @@ export class Something { } ``` -The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: +编译器会报错,因为您需要为类属性添加初始化程序,或者添加 `!` 运算符: ```typescript export class Something { @@ -451,7 +451,7 @@ export class Something { ### 数组初始化 -The `Array` class still accepts a number to initialize the length of the list, however you should take care because operations like `.push` will actually increase the size instead of adding to the beginning, for example: +`Array` 类仍然接受一个数字来初始化列表的长度,但是您应该小心,因为像`.push`的操作实际上会增加大小,而不是添加到开头,例如: ```typescript let arr = new Array(5) // ["", "", "", "", ""] @@ -459,13 +459,13 @@ let arr = new Array(5) // ["", "", "", "", ""] arr.push('something') // ["", "", "", "", "", "something"] // size 6 :( ``` -Depending on the types you're using, eg nullable ones, and how you're accessing them, you might encounter a runtime error like this one: +根据您使用的类型,例如可以为空的类型,以及访问它们的方式,您可能会遇到类似下面这样的运行时错误: ``` -ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type +ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type ``` -To actually push at the beginning you should either, initialize the `Array` with size zero, like this: +要想真正在开始的时候推入,你应该将 `Array` 初始化为大小为零,如下所示: ```typescript let arr = new Array(0) // [] @@ -483,7 +483,7 @@ arr[0] = 'something' // ["something", "", "", "", ""] ### GraphQL 模式 -This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. +这不是一个直接的 AssemblyScript 更改,但是您可能需要更新 `schema.graphql` 文件。 现在,您不再能够在类型中定义属于非空列表的字段。如果您有这样的模式: @@ -498,7 +498,7 @@ type MyEntity @entity { } ``` -You'll have to add an `!` to the member of the List type, like this: +您必须向 List 类型的成员添加一个`!` ,如下所示: ```graphql type Something @entity { @@ -511,14 +511,14 @@ type MyEntity @entity { } ``` -This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). +AssemblyScript 版本之间的可空性差异导致了这种改变, 并且这也与 `src/generated/schema.ts`文件(默认路径,您可能已更改)有关。 ### 其他 -- Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers.
Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) -- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- 将 `Map#set` 和 `Set#add` 与规范对齐,返回 `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- 数组不再继承自 ArrayBufferView,并且现在是完全不同的 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- 从对象字面初始化的类不能再定义构造函数([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- 如果两个操作数都是整数,则 `**` 二元运算的结果现在是公分母整数。 以前,结果是一个浮点数,就像调用 `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- 在转换为 `bool` 时强制 `NaN` 为 `false` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- 当对 `i8`/`u8` 或 `i16`/`u16` 类型的小整数值进行移位时,只有 RHS 值的最低 3 位(对 `i8`/`u8`)或 4 位(对 `i16`/`u16`)会影响结果,类似于 `i32.shl` 的结果仅受 RHS 值的 5 个最低有效位影响。 示例:`someI8 << 8` 以前生成值 `0`,但现在由于将 RHS 屏蔽为`8 & 7 = 0` (3 比特), 而生成 `someI8`([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) - 修复了大小不同时关系字符串比较的错误 ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) diff --git a/website/src/pages/zh/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/zh/resources/migration-guides/graphql-validations-migration-guide.mdx index 1493a96c8f55..294bbc74fead 100644 ---
a/website/src/pages/zh/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/zh/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -2,7 +2,7 @@ title: GraphQL验证迁移指南 --- -很快,“graph-节点”将支持[GraphQL验证规范]的100%覆盖率(https://spec.graphql.org/June2018/#sec-验证)。 +很快,“graph-节点”将支持[GraphQL验证规范](https://spec.graphql.org/June2018/#sec-Validation)的100%覆盖率。 “graph-节点”的早期版本不支持所有验证,并提供了更优雅的响应——因此,在出现歧义的情况下,“graph-节点”会忽略无效的GraphQL操作组件。 @@ -20,7 +20,7 @@ GraphQL验证支持是即将推出的新功能和The Graph网络规模性能的 您可以使用CLI迁移工具查找GraphQL操作中的任何问题并进行修复。或者,您可以更新GraphQL客户端的端点,以使用`https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`端点。针对此端点测试查询将帮助您发现查询中的问题。 -> > 如果您使用[GraphQL ESlint],并不是所有的子图都需要迁移(https://the-guild.dev/graphql/eslint/docs)或[GraphQL代码生成器](https://the-guild.dev/graphql/codegen),它们已经确保了您的查询是有效的。 +> > 如果您使用[GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs),并不是所有的子图都需要迁移或[GraphQL代码生成器](https://the-guild.dev/graphql/codegen),它们已经确保了您的查询是有效的。 ## 迁移CLI工具 diff --git a/website/src/pages/zh/resources/roles/curating.mdx b/website/src/pages/zh/resources/roles/curating.mdx index 54f4658473d7..66000aeab711 100644 --- a/website/src/pages/zh/resources/roles/curating.mdx +++ b/website/src/pages/zh/resources/roles/curating.mdx @@ -2,88 +2,88 @@ title: 策展 --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. 
+策展人对The Graph的去中心化经济至关重要。他们利用他们对web3生态系统的了解来评估和标记应该由The Graph网络索引的子图。通过Graph浏览器,策展人查看网络数据以做出标记决策。反过来,The Graph网络会奖励那些在高质量子图上发出信号的策展人,并分享这些子图产生的查询费用。GRT信号量是索引人在确定索引哪些子图时的关键考虑因素之一。 -## What Does Signaling Mean for The Graph Network? +## 信号对The Graph 网络意味着什么? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +在消费者可以查询子图之前,必须对其进行索引。这就是策展发挥作用的地方。为了使索引人在高质量的子图上获得可观的查询费用,他们需要知道要索引哪些子图。当策展人在子图上发出信号时,它会让索引人知道一个子图有需求,并且质量足够高,应该被索引。 -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +策展人使The Graph网络高效,而[信号](#how-to-signal) 是策展人用来让索引人知道子图适合索引的过程。索引人可以信任策展人的信号,因为策展人根据信号为子图创建策展份额,使他们有权获得子图驱动的未来查询费用的一部分。 -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. 
+策展人信号表示为ERC20代币,称为Graph策展份额(GCS)。那些想获得更多查询费的人应该向他们预测将产生大量费用流入网络的子图发出GRT信号。不能因为不良行为而削减策展人,但对策展人征收押金税,以抑制可能损害网络完整性的不良决策。如果策展人在低质量的子图上进行策展,他们也会获得更少的查询费,因为需要处理的查询或需要处理的索引人会更少。 -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +[Sunrise 升级索引人](/archived/sunrise/#what-is-the-upgrade-indexer) 确保对所有子图进行索引,在特定子图上发出GRT信号将吸引更多的索引人。通过策展来激励额外的索引人,旨在通过减少延迟和提高网络可用性来提高查询的服务质量。 -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +在发出信号时,策展人可以决定在子图的特定版本上发出信号,或者使用自动迁移发出信号。如果他们使用自动迁移发出信号,策展人的份额将始终更新到开发人员发布的最新版本。如果他们决定在特定版本上发出信号,则份额将始终保留在该特定版本上。 -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +如果您需要协助策展以提高服务质量,请向Edge & Node团队发送请求,网址为support@thegraph.zendesk.com并指定需要帮助的子图。 -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +索引人可以根据他们在Graph 浏览器中看到的策展信号找到要索引的子图(下面的截图)。 -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## 如何进行信号处理 -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats.
For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +在Graph 浏览器的策展人选项卡中,策展人将能够根据网络统计数据对某些子图发出信号和取消信号。 关于如何在浏览器中做到这一点的一步步概述,请点击[这里](/subgraphs/explorer/)。 策展人可以选择在特定的子图版本上发出信号,或者他们可以选择让他们的策展份额自动迁移到该子图的最新生产版本。 这两种策略都是有效的,都有各自的优点和缺点。 -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +当一个子图被多个 dapp 使用时,在特定版本上发出信号特别有用。 一个 dapp 可能需要定期更新子图的新特性。 另一个 dapp 可能更喜欢使用更旧的、经过良好测试的子图版本。 在初始策展时,会产生 1%的标准税。 让你的策展份额自动迁移到最新的生产构建,对确保你不断累积查询费用是有价值的。 每次你策展时,都会产生 1%的策展税。 每次迁移时,你也将支付 0.5%的策展税。 不鼓励子图开发人员频繁发布新版本--他们必须为所有自动迁移的策展份额支付 0.5%的策展税。 -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **注意**: 第一个给特定子图发出信号的地址被认为是第一个策展人,由于第一个策展人要初始化策展份额代币并将代币转入 The Graph 代理,因此需要执行比后续策展人消耗燃气多得多的操作。 -## Withdrawing your GRT +## 撤回您的GRT -Curators have the option to withdraw their signaled GRT at any time. +策展人有权随时撤回他们发出的GRT信号。 -Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). +与委托过程不同,如果您决定撤回您的信号GRT,您将不必等待冷却期,并将收到全部金额(减去1%的策展税)。 -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +一旦策展人撤回他们的信号,索引人可能会选择继续对子图进行索引,即使目前没有活动的GRT信号。 -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph.
+然而,建议策展人保留他们的信号GRT,不仅可以获得部分查询费用,还可以确保子图的可靠性和正常运行时间。 ## 风险 -1. 在Graph,查询市场本来就很年轻,由于市场动态刚刚开始,你的年收益率可能低于你的预期,这是有风险的。 -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +1. 在The Graph,查询市场本来就很年轻,由于市场动态刚刚开始,你的年收益率可能低于你的预期,这是有风险的。 +2. 策展费 - 当策展人对子图发出 GRT 信号时,他们会产生 1%的策展税。这笔费用会被销毁。 +3. (仅适用于以太坊)当策展人销毁他们的份额以提取 GRT 时,剩余份额的 GRT 估值将被降低。 请注意,在某些情况下,策展人可能决定**一次性**销毁他们的份额。 如果一个 dapp 开发者停止对其子图进行版本更新、改进和查询,或者一个子图失败,这种情况可能很常见。 因此,剩下的策展人可能只能提取他们最初 GRT 的一小部分。 关于风险较低的网络角色,请看 [委托人](/resources/roles/delegating/delegating/)。 4. 一个子图可能由于错误而失败。 一个失败的子图不会累积查询费用。 因此,你必须等待,直到开发人员修复错误并部署一个新的版本。 - 如果你订阅了一个子图的最新版本,你的份额将自动迁移到该新版本。 这将产生 0.5%的策展税。 - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. + - 如果你已经在一个特定的子图版本上发出信号,但它失败了,你将不得不手动销毁你的策展份额。 然后你可以在新的子图版本上发出信号,从而产生 1%的策展税。 ## 策展常见问题 ### 1. 策展人能赚取多少百分比的查询费? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +通过在一个子图上发出信号,你将获得这个子图产生的所有查询费用的份额。 所有查询费用的 10%将按策展人的策展份额比例分配给他们。 这 10% 的比例由治理决定。 ### 2. 如何决定哪些子图是高质量的信号? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways.
As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +寻找高质量的子图是一项复杂的任务,但它可以通过许多不同的方式来实现。 作为策展人,你要寻找那些推动查询量的值得信赖的子图。 这些值得信赖的子图是有价值的,因为它们是完整的,准确的,并支持 dapp 的数据需求。 一个架构不良的子图可能需要修改或重新发布,也可能最终失败。 策展人审查子图的架构或代码,以评估一个子图是否有价值,这是至关重要的。 因此: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- 策展人可以利用网络知识,尝试预测单个子图在未来可能产生更多或更少查询量。 +- 策展人还应该了解通过 Graph 浏览器提供的指标。 像过去的查询量和子图开发者是谁这样的指标可以帮助确定一个子图是否值得发出信号。 ### 3. 升级一个子图的成本是多少? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +将你的策展份额迁移到一个新的子图版本会产生 1%的策展税。 策展人可以选择订阅子图的最新版本。 当策展人质押被自动迁移到一个新的版本时,策展人也将支付一半的策展税,即 0.5%,因为升级子图是一个链上动作,需要花费交易费。 -### 4. 我可以多频繁的升级子图? +### 4. 多长时间可以升级子图? 建议你不要太频繁地升级子图。 更多细节请见上面的问题。 ### 5. 我可以出售我的策展份额吗? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). +策展股份不能像您可能熟悉的其他ERC20代币那样“买入”或“卖出”。它们只能被铸造(创造)或焚烧(销毁)。 -As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). 
+在Arbitrum上作为策展人,您可以保证收回最初存入的GRT(减去税款)。 -### 6. Am I eligible for a curation grant? +### 6. 我有资格获得策展资助吗? -Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. +策展资助根据具体情况单独确定。如果您需要策展方面的帮助,请发送请求至support@thegraph.zendesk.com。 -还有困惑吗? 点击下面查看管理视频指导: +还有困惑吗? 点击下面查看策展视频指导: diff --git a/website/src/pages/zh/resources/roles/delegating/delegating.mdx b/website/src/pages/zh/resources/roles/delegating/delegating.mdx index 20b6eb5a1caa..a37a7a8aa981 100644 --- a/website/src/pages/zh/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/zh/resources/roles/delegating/delegating.mdx @@ -2,142 +2,142 @@ title: 委托 --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +要立即开始委托,请查看[The Graph上的委托](https://thegraph.com/explorer/delegate?chain=arbitrum-one)。 ## 概述 -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +委托人通过将GRT委托给索引人来获得GRT,这有助于网络安全和功能。 -## Benefits of Delegating +## 委托的好处 -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers. +- 通过支持索引人来增强网络的安全性和可扩展性。 +- 赚取索引人生成的奖励的一部分。 -## How Does Delegation Work? +## 委托是如何运作的? -Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. +委托人从他们选择委托GRT的索引人那里获得GRT奖励。 -An Indexer's ability to process queries and earn rewards depends on three key factors: +索引人处理查询和获得奖励的能力取决于三个关键因素: -1. The Indexer's Self-Stake (GRT staked by the Indexer). -2. The total GRT delegated to them by Delegators. -3. The price the Indexer sets for queries. +1. 索引人的自身质押(由索引人质押的GRT)。 +2. 委托人委托给他们的总GRT。 +3. 索引人为查询设置的价格。 -The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer.
+GRT质押和委托给索引人的越多,它们可以服务的查询就越多,从而为委托人和索引人带来更高的潜在回报。 -### What is Delegation Capacity? +### 什么是委托能力? -Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. +委托能力是指基于索引人的自身质押,索引人可以从委托人接受的最大GRT量。 -The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. +Graph网络的委托比率为16,这意味着索引人在委托的GRT中最多可以接受其自身质押的16倍。 -For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +例如,如果索引人的自身质押为1M GRT,则其委托容量为16M。 -### Why Does Delegation Capacity Matter? +### 为什么委托能力很重要? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +如果索引人超过其委托能力,则所有委托人的奖励都会被稀释,因为超出的委托GRT无法在协议中有效使用。 -This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. +这使得委托人在选择索引人之前评估索引人的当前委托能力至关重要。 -Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +索引人可以通过增加自身质押来增加其委托能力,从而提高委托代币的限制。 -## Delegation on The Graph +## The Graph上的委托 -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> 请注意,本指南不包括设置MetaMask等步骤。以太坊社区提供了一个[关于钱包的全面资源](https://ethereum.org/en/wallets/)。 -There are two sections in this guide: +本指南有两个部分: -- 在 Graph 网络中委托代币的风险 +- 在 The Graph 网络中委托代币的风险 - 如何计算作为委托人的预期回报 ## 委托风险 下面列出了作为协议中的委托人的主要风险。 -### The Delegation Tax +### 委托税 委托人不能因为不良行为而被取消,但对委托有税,以抑制可能损害网络完整性的不良决策。 -As a Delegator, it's important to understand the following: +作为委托人,了解以下内容很重要: -- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. 
+每次委托时,您将被收取0.5%的费用。这意味着,如果您委托1,000 GRT,您将自动消耗5 GRT。 -In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. +为了安全起见,您应该在委托给索引人时计算自己的潜在回报。 例如,委托人可能会计算他们需要多少天才能收回其委托的 0.5% 税。 -### The Undelegation Period +### 解除委托期 -When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. +每当委托人想要解除委托时,他们的代币都有 28 天的解除委托期。 -This means they cannot transfer their tokens or earn any rewards for 28 days. +这意味着他们在 28 天内不能转移他们的代币,也不能获得任何奖励。 -After the undelegation period, GRT will return to your crypto wallet. +在解除委托期结束后,GRT将返回您的加密钱包。 -### Why is this important? +### 为什么这很重要? -If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. +如果您选择了一个不值得信赖或者没有做好工作的索引人,您会想要解除委托,这意味着您将失去很多获得奖励的机会。 -As a result, it’s recommended that you choose an Indexer wisely. ![Delegation unbonding. Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) -#### Delegation Parameters +#### 委托参数 -In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. +为了了解如何选择值得信赖的索引人,您需要了解委托参数。 -- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. - - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. - - If it is set to 80%, as a Delegator, you will receive 20%. +- **索引奖励削减**-索引人将为自己保留的奖励部分。 + - 这意味着,如果该项被设置为 100%,作为一个委托人,你将获得 0 个索引奖励。 + - 如果设置为80%,作为委托人,您将获得20%。 ![Indexing Reward Cut. The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%.
The bottom one is giving Delegators ~83%.](/img/Indexing-Reward-Cut.png) -- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. +- **查询费用削减**-这就像索引奖励削减一样,但它适用于索引人收集的查询费用的回报。 -- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. +- 强烈建议您探索[The Graph Discord](https://discord.gg/graphprotocol),以确定哪些索引人具有最佳的社会和技术声誉。 -- Many Indexers are active in Discord and will be happy to answer your questions. +- 许多索引人在 Discord 中非常活跃,他们将很乐意回答您的问题。 -## Calculating Delegators Expected Return +## 计算委托人的预期收益 -> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +> 在[此处](https://thegraph.com/explorer/delegate?chain=arbitrum-one),计算您的委托的投资回报率。 -A Delegator must consider a variety of factors to determine a return: +委托人在确定收益时必须考虑很多因素: -An Indexer's ability to use the delegated GRT available to them impacts their rewards. +索引人使用委托的GRT的能力会影响他们的奖励。 -If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. +如果索引人没有分配所有可支配的GRT,他们可能会错过为自己和委托人实现潜在收益最大化的机会。 -Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window. However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. +索引人可以在1到28天的窗口内随时关闭分配并收集奖励。然而,如果没有及时领取奖励,即使有一部分奖励无人领取,总奖励也可能会显得较低。 ### 考虑到查询费用的分成和索引费用的分成 -You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. +您应该选择一个在设置查询费和索引费减免方面透明的索引人。 -The formula is: +计算公式是: -![Delegation Image 3](/img/Delegation-Reward-Formula.png) +![委托图片 3](/img/Delegation-Reward-Formula.png) ### 考虑索引人委托池 -Delegators should consider the proportion of the Delegation Pool they own.
+委托人应考虑自己在委托池中所占的比例。 -All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. +所有的委托奖励都是平均分配的,根据委托人存入池子的数额来决定池子的再平衡。 -This gives the Delegator a share of the pool: +这使委托人拥有了委托池的份额: -![Share formula](/img/Share-Forumla.png) +![份额公式](/img/Share-Forumla.png) -> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. +> 上面的公式表明,一个只向委托人提供20%的索引人可能比一个提供90%的索引人提供更好的奖励。只需进行数学运算,即可确定最佳奖励。 ## 委托人常见问题和错误 ### MetaMask“待定交易”错误 -At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +有时,通过MetaMask委托给索引人的尝试可能会失败,并导致长时间的“未决”或“排队”交易尝试。 -A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +这个漏洞的一个简单解决方案是重新启动浏览器(例如,在地址栏中使用“abort:restart”),这将取消所有之前的尝试,而不会从钱包中扣除燃气费。一些遇到此问题的用户报告了在重新启动浏览器并尝试委托后交易成功。 diff --git a/website/src/pages/zh/resources/roles/delegating/undelegating.mdx b/website/src/pages/zh/resources/roles/delegating/undelegating.mdx index 89c2b7cf5bb7..c7bcbb60817e 100644 --- a/website/src/pages/zh/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/zh/resources/roles/delegating/undelegating.mdx @@ -1,73 +1,73 @@ --- -title: Undelegating +title: 取消委托 --- -Learn how to withdraw your delegated tokens through [Graph Explorer](https://thegraph.com/explorer) or [Arbiscan](https://arbiscan.io/). +了解如何通过[Graph Explorer](https://thegraph.com/explorer) 或[Arbiscan](https://arbiscan.io/)提取委托代币。 -> To avoid this in the future, it's recommended that you select an Indexer wisely.
To learn how to select and Indexer, check out the Delegate section in Graph Explorer. +为了避免将来出现这种情况,建议您明智地选择索引人。要了解如何选择索引人,请查看Graph Explorer中的Delegate部分。 -## How to Withdraw Using Graph Explorer +## 如何使用Graph Explorer提款 -### Step-by-Step +### 步骤 -1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. +1. 访问[Graph Explorer](https://thegraph.com/explorer)。请确保您使用的是Explorer而**不是** Subgraph Studio。 -2. Click on your profile. You can find it on the top right corner of the page. +2. 点击您的个人资料。你可以在页面的右上角找到它。 - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. + - 确保您的钱包已连接。如果它没有连接,您将看到“连接”按钮。 -3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. +3. 进入个人资料后,单击委托选项卡。在委托选项卡中,您可以查看已委托的索引人列表。 -4. Click on the Indexer from which you wish to withdraw your tokens. +4. 单击要从中提取代币的索引人。 - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. + - 请务必注意特定的索引人,因为您需要再次找到它们才能提取。 -5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: +5. 通过单击右侧索引人旁边的三个点来选择“Undelegate”选项,请参阅下图: ![Undelegate button](/img/undelegate-button.png) -6. After approximately [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 days), return to the Delegate section and locate the specific Indexer you undelegated from. +6. 大约[28个时期](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one)(28天)后,返回到委托部分并找到您从中取消委托的特定索引人。 -7. Once you find the Indexer, click on the three dots next to them and proceed to withdraw all your tokens. +7. 找到索引人后,单击它们旁边的三个点,然后继续提取所有代币。 -## How to Withdraw Using Arbiscan +## 如何使用Arbiscan提款 -> This process is primarily useful if the UI in Graph Explorer experiences issues. +> 此过程主要在Graph Explorer中的UI遇到问题时有用。 -### Step-by-Step +### 步骤 -1. 
Find your delegation transaction on Arbiscan. +1. 在Arbiscan上查找您的委托交易。 - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) + - 这是[Arbiscan上的一个交易示例](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a)。 -2. Navigate to "Transaction Action" where you can find the staking extension contract: +2. 导航到“交易操作”,您可以在其中找到质押扩展合约: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) + - [这是上述示例的质押扩展合约](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03)。 -3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) +3. 然后点击“合约”。![Arbiscan上NFT传输和事件之间的合约选项卡](/img/arbiscan-contract.png)。 -4. Scroll to the bottom and copy the Contract ABI. There should be a small button next to it that allows you to copy everything. +4. 滚动到底部并复制合约ABI。它旁边应该有一个小按钮,可以复制所有内容。 -5. Click on your profile button in the top right corner of the page. If you haven't created an account yet, please do so. +5. 点击页面右上角的个人资料按钮。如果您还没有创建帐户,请这样做。 -6. Once you're in your profile, click on "Custom ABI”. +6. 进入个人资料后,点击“自定义ABI”。 -7. Paste the custom ABI you copied from the staking extension contract, and add the custom ABI for the address: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**sample address**) +7. 粘贴您从质押扩展合约中复制的自定义ABI,并为地址0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03添加自定义ABI(**示例地址**)。 -8. Go back to the [staking extension contract](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Now, call the `unstake` function in the [Write as Proxy tab](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), which has been added thanks to the custom ABI, with the number of tokens that you delegated. +8. 
回到[质押扩展合约](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract)。现在,使用您委托的代币数量调用 [Write as Proxy tab](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract)选项卡中的`unstake`函数,该函数是由于自定义ABI而添加的。 -9. If you don't know how many tokens you delegated, you can call `getDelegation` on the Read Custom tab. You will need to paste your address (delegator address) and the address of the Indexer that you delegated to, as shown in the following screenshot: +9. 如果你不知道你委托了多少代币,你可以在读取自定义选项卡上调用`getDelegation`。你需要粘贴你的地址(委托人地址)和你委托给的索引人的地址,如下图所示: - ![Both of the addresses needed](/img/get-delegate.png) + ![这两个地址都需要](/img/get-delegate.png) - - This will return three numbers. The first number is the amount you can unstake. + - 这将返回三个数字。第一个数字是您可以解除质押的数量。 -10. After you have called `unstake`, you can withdraw after approximately 28 epochs (28 days) by calling the `withdraw` function. +10. 调用`unstake`后,您可以在大约28个时期(28天)后通过调用 `withdraw`函数进行撤回。 -11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: +11. 您可以通过在读取自定义选项卡上调用`getWithdrawableDelegatedTokens`并传入您的委托元组来查看您有多少可提取的金额。请看下面的屏幕截图: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![调用`getWithdrawableDelegatedTokens`查看可以提取的代币数量](/img/withdraw-available.png)。 ## 其他资源 -To delegate successfully, review the [delegating documentation](/resources/roles/delegating/delegating/) and check out the delegate section in Graph Explorer.
+要成功委托,请查看[委托文档](/resources/roles/delegating/delegating/)并查看Graph Explorer中的委托部分。 diff --git a/website/src/pages/zh/resources/subgraph-studio-faq.mdx b/website/src/pages/zh/resources/subgraph-studio-faq.mdx index b8d40e2f2dc3..8294b94cb6f9 100644 --- a/website/src/pages/zh/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/zh/resources/subgraph-studio-faq.mdx @@ -4,15 +4,15 @@ title: 子图工作室常见问题 ## 1. 什么是 Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/)是一个用于创建、管理和发布子图和 API 密钥的 dapp。 ## 2. 如何创建 API 密钥? -To create an API, navigate to Subgraph Studio and connect your wallet. You will be able to click the API keys tab at the top. There, you will be able to create an API key. +要创建API,请导航到Subgraph Studio并连接您的钱包。您将能够单击顶部的API密钥选项卡。在那里,您将能够创建API密钥。 ## 3. 我可以创建多个 API 密钥吗? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +对!您可以创建多个API密钥,以便在不同的项目中使用。请查看[此处](https://thegraph.com/studio/apikeys/)的链接。 ## 4. 如何为 API 密钥限制域? @@ -20,12 +20,12 @@ Yes! You can create multiple API Keys to use in different projects. Check out th ## 5. 我可以把我的子图转给其他所有者吗? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +是的,发表到Arbitrum One的子图可以转移到一个新的钱包或 Multisig。您可以通过点击子图详细信息页面上“发布”按钮旁边的三个点并选择“传输所有权”来实现。 请注意,一旦传输了子图,您将无法在工作室中查看或编辑该子图。 ## 6. 如果我不是要使用的子图的开发人员,如何查找子图的查询URL? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. 
You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +你可以在Graph Explorer的Subgraph Details部分找到每个子图的查询 URL。 当你点击 "查询 "按钮时,你将被引导到一个窗格,在这里你可以查看你感兴趣的子图的查询 URL。 然后你可以把 `` 占位符替换成你想在Subgraph Studio中利用的 API 密钥。 请记住,你可以创建一个 API 密钥并查询发布到网络上的任何子图,即使你自己建立了一个子图。 这些通过新的 API 密钥进行的查询,与网络上的任何其他查询一样,都是付费查询。 diff --git a/website/src/pages/zh/resources/tokenomics.mdx b/website/src/pages/zh/resources/tokenomics.mdx index c9062327aa5d..2b151d1b37d4 100644 --- a/website/src/pages/zh/resources/tokenomics.mdx +++ b/website/src/pages/zh/resources/tokenomics.mdx @@ -1,103 +1,103 @@ --- -title: Graph网络的代币经济学 -sidebarTitle: Tokenomics -description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. +title: The Graph网络的代币经济学 +sidebarTitle: 代币经济学 +description: The Graph网络受到强大的代币经济学的激励。以下是GRT——The Graph的原生工作效用代币的工作原理。 --- ## 概述 -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph是一种去中心化的协议,可以轻松访问区块链数据。它对区块链数据进行索引,就像谷歌对网络进行索引一样。如果您使用了从子图检索数据的dapp,那么您可能已经与The Graph进行了交互。如今,web3生态系统中数千个[流行的dapp](https://thegraph.com/explorer) 都使用the Graph。 -## Specifics +## 详情 -The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. 
+The Graph的模型类似于B2B2C模型,但它是由一个去中心化的网络驱动的,在这个网络中,参与者合作向最终用户提供数据,以换取GRT奖励。GRT是The Graph的效用代币。它协调和激励网络中数据提供者和消费者之间的互动。 -The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/subgraphs/billing/). +The Graph在使区块链数据更易于访问方面发挥着至关重要的作用,并支持其交换的市场。要了解更多关于The Graph按需付费模式的信息,请查看其[免费和增长计划](/subgraphs/billing/)。 -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +- 主网上的GRT代币地址:[0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +- Arbitrum One的GRT代币地址:: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## The Roles of Network Participants +## 网络参与者的角色 -There are four primary network participants: +主要有四个网络参与者: -1. Delegators - Delegate GRT to Indexers & secure the network +1. 委托人-将GRT委托给索引人并确保网络安全 2. 策展人-为索引人找到最佳子图 -3. Developers - Build & query subgraphs +3. 开发人员-构建和查询子图 4. 索引人-区块链数据的主干 -Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). +Fishermen和Arbitrators也通过其他贡献,支持其他主要参与者角色的工作,对网络的成功不可或缺。有关网络角色的更多信息,请[阅读本文](https://thegraph.com/blog/the-graph-grt-token-economics/)。 ![Tokenomics diagram](/img/updated-tokenomics-image.png) -## Delegators (Passively earn GRT) +## 委托人(被动赚取GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. 
In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +索引人由委托人委托GRT,以增加索引人在网络子图中的份额。作为回报,委托人从索引人获得所有查询费用和索引奖励的一部分。每一个索引人都会单独设置奖励给委托人的回扣,从而在索引人之间产生竞争以吸引委托人。大多数索引人提供的年化收益率在9-12%之间。 -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. +例如,如果委托人将1.5万GRT委托给提供10%的索引人,则委托人每年将获得约1,500 GRT的奖励。 -There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. +每当委托人在网络上委托GRT时,将缴纳0.5%的委托税。如果委托人选择撤回其委托GRT,则委托人必须等待28个时期的解除委托期。每个时期为6,646个区块,这意味着28个时期结束时约为26天。 -If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. +如果你正在阅读这篇文章,你现在就可以成为一名委托人,方法是转到[网络参与者页面](https://thegraph.com/explorer/participants/indexers),并将GRT委托给你选择的索引人。 -## Curators (Earn GRT) +## 策展人(赚取GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +策展人识别高质量的子图并对其进行“策展”(即向其发出GRT信号)以获得策展份额,这保证了子图产生的所有未来查询费用的一定比例。虽然任何独立的网络参与者都可以是策展人,但通常子图开发人员是他们自己的子图的首批策展人之一,因为他们想确保他们的子图被索引。 -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
+鼓励子图开发人员以至少3000 GRT来策展他们的子图。然而,这个数字可能会受到网络活动和社区参与的影响。 -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +策展人在策划新的子图时要缴纳1%的策展税。这种策展税被消耗了,减少了GRT的供应。 ## 开发人员 -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +开发人员构建和查询子图以检索区块链数据。由于子图是开源的,开发人员可以查询现有子图以将区块链数据加载到其dapp中。开发人员为他们在GRT中进行的查询付费,GRT分配给网络参与者。 ### 创建子图 -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +开发人员可以[创建子图](/developing/creating-a-subgraph/)对区块链上的数据进行索引。子图是索引人关于应向消费者提供哪些数据的说明。 -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +一旦开发人员构建并测试了他们的子图,他们就可以在The Graph的去中心化网络上[发布他们的子图](/subgraphs/developing/publishing/publishing-a-subgraph/)。 -### 查询现存子图 +### 查询现有子图 -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +一旦一个子图被[发布](/subgraphs/developing/publishing/publishing-a-subgraph/)到The Graph的去中心化网络,任何人都可以创建一个API密钥,将GRT添加到他们的账单余额中,并查询子图。 -Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. +[使用GraphQL查询](/subgraphs/querying/introduction/)子图,查询费用在[Subgraph Studio](https://thegraph.com/studio/)中用GRT支付。查询费用根据网络参与者对协议的贡献分配给他们。 -1% of the query fees paid to the network are burned.
+支付给网络的查询费用的1%被销毁。 -## Indexers (Earn GRT) +## 索引人(赚取GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +索引人是The Graph的支柱。他们运营独立的硬件和软件,为The Graph的去中心化网络提供动力。索引人根据子图的指令向消费者提供数据。 -Indexers can earn GRT rewards in two ways: +索引人可以通过两种方式获得GRT奖励: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **查询费用**:开发人员或用户为子图数据查询支付的GRT。查询费用根据指数回扣函数直接分配给索引人(请参阅[此处](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)的GIP)。 -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **索引奖励**:每年3%的发行量根据索引子图的数量分配给索引人。这些奖励激励索引人对子图进行索引,有时在查询费用开始之前,累积并提交索引证明(POI),以验证他们是否已准确索引数据。 -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +基于子图的策展信号的量,向每个子图分配总网络代币发行的一部分。然后,根据其在子图上分配的份额,将该金额奖励给索引人。 -In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. +为了运行索引节点,索引人必须在网络中自我质押100,000 GRT或更多。索引人被激励根据其服务的查询量按比例自我质押GRT。 -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake.
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +索引人可以通过接受委托人的GRT委托来增加其在子图上的GRT分配,并且他们最多可以接受其初始自我质押16倍的委托。如果索引人变得“过度委托”(即超过其初始自我质押的16倍),他们将无法使用委托人的额外GRT,直到他们增加其在网络中的自我质押。 -The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. +索引人收到的奖励金额可能会因索引人的自我质押、接受的委托、服务质量以及其他许多因素而异。 -## Token Supply: Burning & Issuance +## 代币供应:消耗和发行 -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +初始代币供应量为100亿GRT,目标是每年发行3%的新代币,以奖励在子图上分配份额的索引人。这意味着,GRT代币的总供应量将每年增加3%,因为新的代币将发行给索引人,以奖励其对网络的贡献。 -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph设计有多种消耗机制来抵消新代币的发行。每年约有1%的GRT供应通过网络上的各种活动消耗,随着网络活动的持续增长,这一数字一直在增加。这些消耗活动包括:当委托人将GRT委托给索引人时,收取0.5%的委托税;当策展人在子图上发出信号时,收取1%的策展税;以及区块链数据查询费的1%。 -![Total burned GRT](/img/total-burned-grt.jpeg) +![消耗GRT总额](/img/total-burned-grt.jpeg) -In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers.
If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. +除了这些经常发生的消耗活动外,GRT代币还具有一个罚没(slashing)机制,以惩罚索引人的恶意或不负责任的行为。如果索引人被罚没,他们在该时期的索引奖励的50%将被消耗(而另一半归fisherman所有),他们的自我质押将被削减2.5%,其中一半将被消耗。这有助于确保索引人有强烈的动机以网络的最大利益为出发点,并为网络的安全和稳定做出贡献。 -## Improving the Protocol +## 改进协议 -The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). +The Graph网络不断发展,协议的经济设计不断改进,为所有网络参与者提供最佳体验。The Graph委员会负责监督协议变更,并鼓励社区成员参与。参与[The Graph论坛](https://forum.thegraph.com/)中的协议改进。 diff --git a/website/src/pages/zh/sps/introduction.mdx b/website/src/pages/zh/sps/introduction.mdx index 4fb8f675c2b5..bb5579cb7b65 100644 --- a/website/src/pages/zh/sps/introduction.mdx +++ b/website/src/pages/zh/sps/introduction.mdx @@ -1,30 +1,31 @@ --- -title: Introduction to Substreams-Powered Subgraphs +title: Substreams驱动子图介绍 sidebarTitle: 介绍 --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +使用 [Substreams](/substreams/introduction/) 流式传输预索引的区块链数据,提高子图的效率和伸缩能力。 ## 概述 -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+通过使用Substreams包(`.spkg`)作为数据源,您的子图可以访问预先索引的区块链数据流。这使得数据处理更加高效和可扩展,特别是在大型或复杂的区块链网络中。 -### Specifics +### 详情 -There are two methods of enabling this technology: +启用此技术有两种方法: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **使用Substreams[触发器](/sps/triggers/)**:通过子图处理程序导入Protobuf模型,从任何Substreams模块中消费,并将所有逻辑移动到子图中。此方法直接在子图中创建子图实体。 -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **使用[实体更改](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**:通过将更多的逻辑写入Substreams,您可以将模块的输出直接消耗到[graph节点](/indexing/tooling/graph-node/)中。在graph节点中,可以使用Substreams数据创建子图实体。 -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
+您可以选择在子图或Substreams中放置逻辑的位置。但是,考虑一下什么符合您的数据需求,因为Substreams有一个并行化的模型,触发器在graph节点中是线性消耗的。 ### 其他资源 -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +请访问以下教程链接,了解如何使用代码生成工具快速构建您的第一个端到端Substreams项目: - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/zh/sps/sps-faq.mdx b/website/src/pages/zh/sps/sps-faq.mdx index aa4938a88621..6fe73c72b8da 100644 --- a/website/src/pages/zh/sps/sps-faq.mdx +++ b/website/src/pages/zh/sps/sps-faq.mdx @@ -1,31 +1,31 @@ --- -title: Substreams-Powered Subgraphs FAQ -sidebarTitle: FAQ +title: Substreams驱动的子图的常见问题 +sidebarTitle: 常见问题 --- ## 什么是Substreams? -Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications. +Substreams是一个非常强大的处理引擎,能够消耗丰富的区块链数据流。它允许您优化和塑造区块链数据,以便最终用户应用程序快速无缝地消化。 -Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere.
+具体来说,它是一个与区块链无关的、并行化的、流媒体优先的引擎,充当区块链数据转换层。它由[Firehose](https://firehose.streamingfast.io/)提供支持,使开发人员能够编写Rust模块,构建社区模块,提供极高性能的索引,并将数据[存储](/substreams/developing/sinks/) 在任何地方。 -Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. +Substreams由[StreamingFast](https://www.streamingfast.io/)开发。访问[Substreams文档](/substreams/introduction/)以了解有关Substreams的更多信息。 -## 什么是基于Substreams的子图? +## 什么是Substreams驱动的子图? [Substreams驱动的子图](/sps/introduction/)结合了Substreams的强大功能和子图的可查询性。发布基于Substreams的子图时,Substreams转换生成的数据可以输出与子图实体兼容的[实体更改](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs)。 -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +如果您已经熟悉子图Subgraph开发,那么请注意,Substreams驱动的子图可以被查询,就像它是由AssemblyScript转换层生成的一样。它具有所有子图的优势,比如提供动态和灵活的GraphQL API。 -## 基于Substreams的子图和普通子图有什么区别? +## Substreams驱动的子图和普通子图有什么区别? 子图由数据源组成,这些数据源指定了在链上发生的事件以及通过用Assemblyscript编写的处理程序应如何转换这些事件。这些事件按照链上发生事件的顺序依次进行处理。 -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +相比之下,由Substreams驱动的子图具有单一数据源,该数据源引用一个由Graph节点进行处理的Substreams包。与传统的子图相比,Substreams可以访问更多精细的链上数据,并且还可以从大规模并行处理中获益,这可能意味着处理时间更快。 -## 使用基于Substeams的子图的优势是什么? +## 使用Substeams驱动的子图的优势是什么? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. 
They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams驱动的子图结合了Substreams的所有优点和子图的可查询性。它们为The Graph带来了更大的可组合性和高性能索引。它们还支持新的数据用例:例如,一旦构建了基于Substreams的子图,就可以重用[Substreams模块](https://docs.substreams.dev/reference-material/substreams-components/modules#modules)输出到不同的[sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink),例如PostgreSQL、MongoDB和Kafka。 ## Substreams的优势是什么? @@ -63,34 +63,34 @@ Firehose是由[StreamingFast](https://www.streamingfast.io/)开发的区块链 - 利用平面文件:将区块链数据提取到平面文件中,这是目前最便宜、最优化的计算资源。 -## 开发人员在哪里可以获得关于Substreams-powered子图和Substreams的更多信息? +## 开发人员在哪里可以获得关于Substreams驱动的子图和Substreams的更多信息? [Substreams文档](/substreams/introduction/)将教您如何构建Substreams模块。 [Substreams驱动的子图文档](/sps/introduction/)将向您展示如何将它们打包部署在The Graph上。 -The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. +[最新的Substreams Codegen工具](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6)将允许您无需任何代码即可引导Substreams项目。 ## Rust模块在Substreams中扮演什么角色? -Rust模块相当于子图中的AssemblyScript mappers。它们以类似的方式编译为WASM,但编程模型允许并行执行。它们定义了您想要对原始区块链数据应用的转换和聚合类型。 +Rust模块相当于子图中的AssemblyScript映射。它们以类似的方式编译为WASM,但编程模型允许并行执行。它们定义了您想要对原始区块链数据应用的转换和聚合类型。 -See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. +请参阅[模块文档](https://docs.substreams.dev/reference-material/substreams-components/modules#modules)了解详情。 ## 什么使Substreams具有组合性?
在使用Substreams时,组合发生在转换层,从而使得缓存模块可以被重复使用。 -举例来说,Alice可以构建一个DEX价格模块,Bob可以使用它来构建一种感兴趣的代币的交易量聚合器,Lisa可以将四个单独的DEX价格模块组合起来创建一个价格预言机。一个单独的Substreams请求将打包所有这些个人模块,并将它们链接在一起,提供一个更加精细的数据流。然后可以使用该数据流填充子图,并由消费者查询。 +举例来说,Alice可以构建一个DEX价格模块,Bob可以使用它来构建一种感兴趣的代币的交易量聚合器,Lisa可以将四个单独的DEX价格模块组合起来创建一个价格预言机。一个单独的Substreams请求将打包所有这些个人模块,并将它们链接在一起,提供一个更加精细的数据流。然后可以使用该数据流填充子图,并由消费者查询。 -## 如何构建和部署Substreams-powered子图? +## 如何构建和部署Substreams驱动的子图? -After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +在[定义](/sps/introduction/)一个Substreams驱动的子图后,您可以使用Graph CLI在[Subgraph Studio](https://thegraph.com/studio/)中部署它。 -## 在哪里可以找到Substreams和Substreams-powered子图的示例? +## 在哪里可以找到Substreams和Substreams驱动的子图的示例? -您可以访问此 [this Github repo](https://github.com/pinax-network/awesome-substreams) 以找到Substreams和Substreams-powered子图的示例。 +您可以访问[此Github repo](https://github.com/pinax-network/awesome-substreams)以找到Substreams和Substreams驱动的子图的示例。 -## Substreams和Substreams-powered子图对于The Graph Network意味着什么? +## Substreams和Substreams驱动的子图对于The Graph网络意味着什么? -The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. +这种集成带来许多好处,包括通过利用社区模块并在其上构建,实现极高性能的索引和更大的可组合性。 diff --git a/website/src/pages/zh/sps/triggers.mdx b/website/src/pages/zh/sps/triggers.mdx index fcdf887058fa..a92760b3a388 100644 --- a/website/src/pages/zh/sps/triggers.mdx +++ b/website/src/pages/zh/sps/triggers.mdx @@ -1,18 +1,18 @@ --- -title: Substreams Triggers +title: Substreams触发器 --- -Use Custom Triggers and enable the full use GraphQL. +使用自定义触发器,充分利用GraphQL层的全部功能。 ## 概述 -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+自定义触发器允许您将数据直接发送到子图映射文件和实体中,这些文件和实体类似于表和字段。这使您能够充分使用GraphQL层。 -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +通过导入Substreams模块发出的Protobuf定义,您可以在子图的处理程序中接收和处理这些数据。这确保了子图框架内高效和简化的数据管理。 -### Defining `handleTransactions` +### 定义`handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +以下代码演示了如何在子图处理程序中定义`handleTransactions`函数。此函数接收原始Substreams字节作为参数,并将其解码为`Transactions`对象。对于每个交易,都会创建一个新的子图实体。 ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -34,14 +34,14 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +以下是您在`mappings.ts`文件中看到的内容: -1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object -2. Looping over the transactions -3. Create a new subgraph entity for every transaction +1. 包含Substreams数据的字节被解码为生成的`Transactions`对象,该对象与任何其他AssemblyScript对象一样使用 +2. 循环遍历所有交易 +3. 为每笔交易创建一个新的子图实体 -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +要查看基于触发器的子图的详细示例,[请查看教程](/sps/tutorial/)。 -### 其他资源 +### 其他资源 -To scaffold your first project in the Development Container, check out one of the [How-To Guide](/substreams/developing/dev-container/).
+要在开发容器中构建你的第一个项目,请查看[操作指南](/substreams/developing/dev-container/)。 diff --git a/website/src/pages/zh/sps/tutorial.mdx b/website/src/pages/zh/sps/tutorial.mdx index de3381f5eed8..c8ca5c967a22 100644 --- a/website/src/pages/zh/sps/tutorial.mdx +++ b/website/src/pages/zh/sps/tutorial.mdx @@ -1,32 +1,32 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' -sidebarTitle: Tutorial +title: 教程:在Solana上设置基于Substreams的子图 +sidebarTitle: 教程 --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +已成功为Solana SPL代币设置基于触发器的Substreams驱动子图。 ## 开始 -For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) +有关视频教程,请查看[如何使用Substreams驱动的子图对Solana进行索引](/sps/tutorial/#video-tutorial) -### Prerequisites +### 先决条件 -Before starting, make sure to: +开始之前,请确保: -- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. -- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. +- 完成[入门指南](https://github.com/streamingfast/substreams-starter)使用Dev容器设置开发环境。 +- 熟悉The Graph和基本的区块链概念,如交易和Protobuf。 -### Step 1: Initialize Your Project +### 步骤1:初始化您的项目 -1. Open your Dev Container and run the following command to initialize your project: +1. 打开Dev容器并运行以下命令以初始化项目: ```bash substreams init ``` -2. Select the "minimal" project option. +2. 选择“最小”项目选项。 -3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: +3. 
将生成的`substreams.yaml`文件的内容替换为以下配置,该配置过滤SPL代币程序ID上Orca帐户的交易: ```yaml specVersion: v0.1.0 @@ -52,15 +52,15 @@ params: # Modify the param fields to meet your needs map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE ``` -### Step 2: Generate the Subgraph Manifest +### 步骤2:生成子图清单 -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +项目初始化后,通过在Dev容器中运行以下命令生成子图清单: ```bash substreams codegen subgraph ``` -You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: +您将生成`subgraph.yaml`清单,该清单将Substreams包作为数据源导入: ```yaml --- @@ -73,17 +73,17 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers ``` -### Step 3: Define Entities in `schema.graphql` +### 步骤3:在`schema.graphql`中定义实体 -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +通过更新`schema.graphql`文件来定义要保存在子图实体中的字段。 -Here is an example: +以下是一个示例: ```graphql type MyTransfer @entity { @@ -95,13 +95,13 @@ type MyTransfer @entity { } ``` -This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. +此模式定义了一个名为`MyTransfer`的实体,其字段包括`id`、`amount`、`source`、`designation`和`signers`。 -### Step 4: Handle Substreams Data in `mappings.ts` +### 步骤4:在`mappings.ts`中处理Substreams数据 -With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
+生成Protobuf对象后,您现在可以在`./src`目录里找到的`mappings.ts`文件中处理解码的Substreams数据。 -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +下面的示例演示了如何将与Orca帐户id关联的非派生传输提取到子图实体中: ```ts import { Protobuf } from 'as-proto/assembly' @@ -132,24 +132,24 @@ export function handleTriggers(bytes: Uint8Array): void { } ``` -### Step 5: Generate Protobuf Files +### 步骤5:生成Protobuf文件 -To generate Protobuf objects in AssemblyScript, run the following command: +要在AssemblyScript中生成Protobuf对象,请运行以下命令: ```bash npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +此命令将Protobuf定义转换为AssemblyScript,允许您在子图的处理程序中使用它们。 -### Conclusion +### 结论 -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +恭喜!您已成功为Solana SPL代币设置了基于触发器的Substreams驱动子图。现在,您可以进一步定制您的模式、映射和模块,以适应您的特定用例。 -### Video Tutorial +### 视频教程 ### 其他资源 -For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). 
+如需更高级的定制和优化,请查看官方[Substreams文档](https://substreams.streamingfast.io/tutorials/solana)。 diff --git a/website/src/pages/zh/subgraphs/_meta-titles.json b/website/src/pages/zh/subgraphs/_meta-titles.json index 3fd405eed29a..b1655fee1ce1 100644 --- a/website/src/pages/zh/subgraphs/_meta-titles.json +++ b/website/src/pages/zh/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", - "guides": "How-to Guides", - "best-practices": "Best Practices" + "querying": "查询", + "developing": "开发", + "guides": "操作指南", + "best-practices": "最佳实践" } diff --git a/website/src/pages/zh/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/zh/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..4c72bd0657ca 100644 --- a/website/src/pages/zh/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/zh/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,25 +1,25 @@ --- -title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +title: 子图最佳实践4-通过避免eth_calls提高索引速度 +sidebarTitle: 避免eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls`是可以从子图到以太坊节点进行的调用。这些调用需要大量时间来返回数据,从而减慢了索引速度。如果可能的话,设计智能合约来发出你需要的所有数据,这样你就不需要使用`eth_calls`。 -## Why Avoiding `eth_calls` Is a Best Practice +## 为什么避免`eth_calls`是一种最佳实践 -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried.
By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +子图经过优化,可以对智能合约发出的事件数据进行索引。子图也可以对来自`eth_calls`的数据进行索引,但是,这会大大减慢子图索引的速度,因为`eth_call`需要对智能合约进行外部调用。这些调用的响应性不依赖于子图,而是依赖于被查询的以太坊节点的连接性和响应性。通过最小化或消除子图中的eth_calls,我们可以显著提高索引速度。 -### What Does an eth_call Look Like? +### Eth_call是什么样子的? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +当子图所需的数据无法通过发出的事件获得时,通常需要`eth_calls`。例如,考虑一个场景,其中子图需要确定ERC20代币是否是特定池的一部分,但合约只发出一个基本的`转移`事件,而不发出包含我们所需数据的事件: ```yaml -event Transfer(address indexed from, address indexed to, uint256 value); +event Transfer(address indexed from, address indexed to, uint256 value); ``` -Suppose the tokens' pool membership is determined by a state variable named `getPoolInfo`. In this case, we would need to use an `eth_call` to query this data: +假设代币的池成员资格由名为`getPoolInfo`的状态变量决定。在这种情况下,我们需要使用`eth_call`来查询这些数据: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -44,17 +44,17 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +这是功能性的,但并不理想,因为它减缓了子图的索引速度。 -## How to Eliminate `eth_calls` +## 如何消除`eth_calls` -Ideally, the smart contract should be updated to emit all necessary data within events. 
For instance, modifying the smart contract to include pool information in the event could eliminate the need for `eth_calls`: +理想情况下,智能合约应该更新已在事件中发出所有必要的数据。例如,修改智能合约以在事件中包含池信息可以消除对`eth_calls`的需求: ``` event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +通过此更新,子图可以直接索引所需的数据,而无需外部调用: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -73,17 +73,17 @@ export function handleTransferWithPool(event: TransferWithPool): void { } ``` -This is much more performant as it has eliminated the need for `eth_calls`. +这更具性能,因为它消除了对`eth_calls`的需求。 -## How to Optimize `eth_calls` +## 如何优化`eth_calls` -If modifying the smart contract is not possible and `eth_calls` are required, read “[Improve Subgraph Indexing Performance Easily: Reduce eth_calls](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)” by Simon Emanuel Schmid to learn various strategies on how to optimize `eth_calls`. +如果无法修改智能合约并且需要`eth_calls`,请阅读“[轻松提高子图索引性能:减少eth_calls](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)”由Simon Emanuel Schmid教授所作,学习如何优化`eth_calls`的各种策略。 -## Reducing the Runtime Overhead of `eth_calls` +## 降低`eth_calls`运行时的开销 -For the `eth_calls` that can not be eliminated, the runtime overhead they introduce can be minimized by declaring them in the manifest. When `graph-node` processes a block it performs all declared `eth_calls` in parallel before handlers are run. Calls that are not declared are executed sequentially when handlers run. The runtime improvement comes from performing calls in parallel rather than sequentially - that helps reduce the total time spent in calls but does not eliminate it completely. 
+对于无法消除的`eth_calls`,可以通过在清单中声明来最小化引入的运行开销。当`graph-node`处理一个块时,它会在处理程序运行之前并行执行所有声明的`eth_calls`。未声明的调用在处理程序运行时按顺序执行。运行时的改进来自并行而不是顺序执行调用,这有助于减少调用所花费的总时间,但并不能完全消除它。 -Currently, `eth_calls` can only be declared for event handlers. In the manifest, write +目前,`eth_calls`只能为事件处理程序声明。在清单中编写: ```yaml event: TransferWithPool(address indexed, address indexed, uint256, bytes32 indexed) @@ -92,26 +92,26 @@ calls: ERC20.poolInfo: ERC20[event.address].getPoolInfo(event.params.to) ``` -The portion highlighted in yellow is the call declaration. The part before the colon is simply a text label that is only used for error messages. The part after the colon has the form `Contract[address].function(params)`. Permissible values for address and params are `event.address` and `event.params.`. +黄色突出显示的部分是调用声明。冒号之前的部分只是一个仅用于错误消息的文本标签。冒号后的部分的格式为`Contract[address].function(params)`。地址和参数的允许值是`event.address`和`event.params`。 -The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. +处理程序本身通过绑定到合约并进行调用来访问这个`eth_call`的结果,与上一节完全相同。`graph-node`将声明的`eth_calls`结果缓存在内存中,处理程序的调用将从内存缓存中检索结果,而不是进行实际的RPC调用。 -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +注意:只有specVersion >= 1.2.0的子图才能声明eth_calls。 -## Conclusion +## 结论 -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +通过最小化或消除子图中的`eth_calls`,我们可以显著提高索引性能。 -## Subgraph Best Practices 1-6 +## 子图最佳实践1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [通过子图修剪提高查询速度](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. 
[使用@derivedFrom提高索引和查询响应能力](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [通过使用不可变实体和字节作为ID来提高索引和查询性能](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [通过避免`eth_calls`提高索引速度](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [通过时间序列和聚合进行简化和优化](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [使用嫁接快速部署修补程序](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/zh/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/zh/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..9f5770026ce0 100644 --- a/website/src/pages/zh/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/zh/subgraphs/best-practices/derivedfrom.mdx @@ -1,29 +1,29 @@ --- -title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +title: 子图最佳实践 2 - 通过使用 @derivedFrom 提高索引和查询响应性 +sidebarTitle: 带有@derivedFrom的数组 --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. 
+在你的模式中,数组可能会在条目数量超过数千时显著降低子图的性能。如果可能的话,使用数组时应使用`@derivedFrom`指令,这样可以防止大型数组的形成,简化处理程序,减少单个实体的大小,显著提高索引速度和查询性能。 -## How to Use the `@derivedFrom` Directive +## 如何使用`@derivedFrom`指令 -You just need to add a `@derivedFrom` directive after your array in your schema. Like this: +你只需要在模式中的数组后面添加一个`@derivedFrom`指令。就像这样: ```graphql comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom`创建了高效的一对多关系,使一个实体能够根据相关实体中的字段动态地与多个相关实体关联。这种方法消除了关系双方存储重复数据的需要,使子图更加高效。 -### Example Use Case for `@derivedFrom` +### `@derivedFrom`的示例用例 -An example of a dynamically growing array is a blogging platform where a “Post” can have many “Comments”. +一个动态增长数组的例子是博客平台,其中的“Post”(帖子)可以拥有许多“Comments”(评论)。 -Let’s start with our two entities, `Post` and `Comment` +让我们从两个实体开始,`Post`(帖子)和 `Comment`(评论)。 -Without optimization, you could implement it like this with an array: +未经优化,你可以使用数组这样实现: ```graphql type Post @entity { @@ -39,9 +39,9 @@ type Comment @entity { } ``` -Arrays like these will effectively store extra Comments data on the Post side of the relationship. +像这样的数组将在关系的“Post”(帖子)一侧有效存储额外的“Comments”(评论)数据。 -Here’s what an optimized version looks like using `@derivedFrom`: +以下是使用 `@derivedFrom` 优化版本的样子: ```graphql type Post @entity { @@ -58,32 +58,32 @@ type Comment @entity { } ``` -Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded.
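基于上面的`Post`/`Comment`模式,下面是一个假设性的查询示意,展示正向查找与反向查找(`title`、`content`等字段名为演示用的假设,并非上文模式中已定义的字段):

```graphql
{
  # 正向查找:查询某个 Post 及其全部评论(comments 是 @derivedFrom 生成的虚拟字段)
  post(id: "1") {
    title
    comments {
      content
    }
  }
  # 反向查找:从任意 Comment 找到它所属的 Post
  comment(id: "42") {
    content
    post {
      title
    }
  }
}
```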
+只需添加 `@derivedFrom` 指令,这个模式将仅在关系的“Comments”(评论)一侧存储“Comments”,而不是在“Post”(帖子)一侧。数组分散存储在各个单独的行中,因此可以大幅扩展。如果增长没有界限,数组可能会变得特别庞大。 -This will not only make our subgraph more efficient, but it will also unlock three features: +这不仅能使我们的子图更加高效,而且还将解锁三个功能: -1. We can query the `Post` and see all of its comments. +1. 我们可以查询`Post`并查看其所有评论。 -2. We can do a reverse lookup and query any `Comment` and see which post it comes from. +2. 我们可以进行反向查找,查询任何`Comment`并查看它来自哪个帖子。 -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. 我们可以使用[Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities)来解锁直接访问和操纵子图映射中虚拟关系数据的能力。 -## Conclusion +## 结论 -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +在子图中采用`@derivedFrom`指令可以有效地管理动态增长的数组,提高索引效率和数据检索性能。 -For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +要了解避免大数组的更详细策略,请阅读Kevin Jones的博客:[子图开发的最佳实践:避免大数组](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/)。 -## Subgraph Best Practices 1-6 +## 子图最佳实践1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [通过子图修剪提高查询速度](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [使用@derivedFrom提高索引和查询响应能力](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. 
[通过使用不可变实体和字节作为ID来提高索引和查询性能](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [通过避免`eth_calls`提高索引速度](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [通过时间序列和聚合进行简化和优化](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [使用嫁接快速部署修补程序](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/zh/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/zh/subgraphs/best-practices/grafting-hotfix.mdx index 06f1b9fef399..2fd2d120ee1f 100644 --- a/website/src/pages/zh/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/zh/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,68 +1,68 @@ --- title: 子图最佳实践6-使用嫁接快速部署修补程序 -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: 嫁接和修补 --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +嫁接是子图开发中的一个强大功能,它允许您构建和部署新的子图,同时重用现有子图中的索引数据。 ### 概述 -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +此功能可快速部署关键问题的修补程序,无需从头开始重新索引整个子图。通过保留历史数据,移植可以最大限度地减少停机时间,并确保数据服务的连续性。 -## Benefits of Grafting for Hotfixes +## 嫁接修复补丁的好处 -1. **Rapid Deployment** +1. **快速部署** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 
+ - **最小化停机时间**:当子图遇到严重错误并停止索引时,嫁接使您能够立即部署修复程序,而无需等待重新索引。 + - **立即恢复**:新的子图从最后一个索引块继续,确保数据服务保持不间断。 -2. **Data Preservation** +2. **数据保存** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. - - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + - **重用历史数据**:嫁接会从基础子图复制现有数据,这样就不会丢失有价值的历史记录。 + - **一致性**:保持数据连续性,这对于依赖一致历史数据的应用程序至关重要。 -3. **Efficiency** - - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. - - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. +3. **效率** + - **节省时间和资源**:避免重新索引大型数据集的计算开销。 + - **专注于修复**:允许开发人员专注于解决问题,而不是管理数据恢复。 -## Best Practices When Using Grafting for Hotfixes +## 使用嫁接修复补丁的最佳实践 -1. **Initial Deployment Without Grafting** +1. **无嫁接的初始部署** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **开始清理**:始终部署初始子图而不进行嫁接,以确保其稳定并按预期运行。 + - **彻底测试**:验证子图的性能,以尽量减少对未来补丁的需求。 -2. **Implementing the Hotfix with Grafting** +2. **用嫁接实现补丁** - - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **识别问题**:当发生严重错误时,确定最后一个成功索引事件的块号。 + - **创建新子图**:开发一个包含修补程序的新子图。 + - **配置嫁接**:使用嫁接将数据从失败的子图复制到识别的块号。 + - **快速部署**:发布嫁接的子图,尽快恢复服务。 -3. **Post-Hotfix Actions** +3. 
**发布修补程序操作** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. - > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **监控性能**:确保嫁接的子图正确索引,并且修补程序解决了该问题。 + - **无嫁接的重新发布**:一旦稳定,部署新版本的子图,无需嫁接以进行长期维护。 + > 注意:不建议无限期地依赖嫁接,因为这会使未来的更新和维护复杂化。 + - **更新引用**:重定向任何服务或应用程序以使用新的、未嫁接的子图。 -4. **Important Considerations** - - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. +4. **重要注意事项** + - **谨慎块选择**:仔细选择嫁接块编号,以防止数据丢失。 + - **提示**:使用最后一个正确处理的事件的块号。 + - **使用部署ID**:确保引用基子图的部署ID,而不是子图ID。 + - **注意**:部署ID是特定子图部署的唯一标识符。 + - **特征声明**:记住在特征下的子图清单中声明嫁接。 -## Example: Deploying a Hotfix with Grafting +## 示例:使用嫁接部署修补程序 -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +假设你有一个子图跟踪一个因严重错误而停止索引的智能合约。以下是如何使用嫁接来部署修补程序。 -1. **Failed Subgraph Manifest (subgraph.yaml)** +1. 
失败的子图清单(Subgraph.yaml) ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -88,9 +88,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing file: ./src/old-lock.ts ``` -2. **New Grafted Subgraph Manifest (subgraph.yaml)** +2. **新的嫁接子图清单(Subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,71 +117,71 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` -**Explanation:** +**说明:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. -- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. -- **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. - - **block**: Block number where grafting should begin. +- **数据源更新**:新的子图指向0xNewContractAddress,这可能是智能合约的固定版本。 +- **开始块**:设置为最后一个成功索引块后的一个块,以避免重新处理错误。 +- **嫁接配置:**: + - **base**:失败子图的部署ID。 + - **block**:应该开始嫁接的块号。 -3. **Deployment Steps** +3. **部署步骤** - - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). 
- - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - - **Deploy the Subgraph**: - - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - **更新代码**:在映射脚本中实现修补程序(例如handleWithdrawal)。 + - **调整Manifest**:如上所示,使用嫁接配置更新`subgraph.yaml`。 + - **部署子图**: + - 使用Graph CLI进行身份验证。 + - 使用`graph deploy`部署新的子图。 -4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. - - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. +4. **部署后** + - **验证索引**:检查子图是否从嫁接点正确索引。 + - **监控数据**:确保正在捕获新数据,并且修补程序有效。 + - **重新发布计划**:安排部署非嫁接版本以实现长期稳定性。 -## Warnings and Cautions +## 警告和注意事项 -While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. +虽然嫁接是快速部署修补程序的强大工具,但在某些特定情况下,为了保持数据完整性并确保最佳性能,应该避免嫁接。 -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. -- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). 
It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **不兼容的模式更改**:如果您的修补程序需要更改现有字段的类型或从模式中删除字段,则嫁接是不合适的。Grafting期望新子图的模式与基础子图的模式兼容。不兼容的更改可能会导致数据不一致和错误,因为现有数据与新模式不一致。 +- **重要的映射逻辑大修**:当修补程序涉及对映射逻辑的重大修改时,例如更改事件的处理方式或更改处理程序函数,嫁接可能无法正常工作。新逻辑可能与在旧逻辑下处理的数据不兼容,导致数据不正确或索引失败。 +- **部署到The Graph网络**:不建议对用于The Graph去中心化网络(主网)的子图进行嫁接。它可能会使索引复杂化,并且可能不会得到所有索引人的完全支持,从而可能导致意外行为或成本增加。对于主网部署,从头开始重新索引子图以确保完全兼容性和可靠性更安全。 -### Risk Management +### 风险管理 -- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. -- **Testing**: Always test grafting in a development environment before deploying to production. +- **数据完整性**:不正确的块号可能会导致数据丢失或重复。 +- **测试**:在部署到生产环境之前,始终在开发环境中测试嫁接。 -## Conclusion +## 结论 -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +嫁接是在子图开发中部署修补程序的有效策略,使您能够: -- **Quickly Recover** from critical errors without re-indexing. -- **Preserve Historical Data**, maintaining continuity for applications and users. -- **Ensure Service Availability** by minimizing downtime during critical fixes. +- 无需重新索引即可**快速**从关键错误中**恢复**。 +- **保留历史数据**,保持应用程序和用户的连续性。 +- 通过最大限度地减少关键修复期间的停机时间来**确保服务可用性**。 -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. +然而,明智地使用嫁接并遵循最佳实践来降低风险非常重要。使用修补程序稳定子图后,计划部署非嫁接版本以确保长期可维护性。 ## 其他资源 -- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting -- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. 
+- **[嫁接文档](/subgraphs/cookbook/grafting/)**:用嫁接取代合约,保留其历史。 +- **[了解部署ID](/subgraphs/querying/subgraph-id-vs-deployment-id/)**:了解部署ID和子图ID之间的区别。 -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +通过将嫁接纳入子图开发工作流程,您可以提高快速响应问题的能力,确保您的数据服务保持健壮和可靠。 -## Subgraph Best Practices 1-6 +## 子图最佳实践1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [通过子图修剪提高查询速度](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [使用@derivedFrom提高索引和查询响应能力](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [通过使用不可变实体和字节作为ID来提高索引和查询性能](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [通过避免`eth_calls`提高索引速度](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [通过时间序列和聚合进行简化和优化](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6.
[使用嫁接快速部署修补程序](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/zh/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/zh/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..b6dc194e32e2 100644 --- a/website/src/pages/zh/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/zh/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,15 +1,15 @@ --- -title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +title: 子图最佳实践3-通过使用Immutable Entities和Bytes作为ID来提高索引和查询性能 +sidebarTitle: Immutable Entities和Bytes作为ID --- ## TLDR -Using Immutable Entities and Bytes for IDs in our `schema.graphql` file [significantly improves ](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/) indexing speed and query performance. +在`schema.graphql`文件中使用Immutable Entities和Bytes作为ID可以[显著提高](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/)索引速度和查询性能。 -## Immutable Entities +## 不可变实体 -To make an entity immutable, we simply add `(immutable: true)` to an entity. +为了使一个实体不可变,我们只需在实体中添加`(immutable: true)`。 ```graphql type Transfer @entity(immutable: true) { @@ -20,21 +20,21 @@ type Transfer @entity(immutable: true) { } ``` -By making the `Transfer` entity immutable, graph-node is able to process the entity more efficiently, improving indexing speeds and query responsiveness. +通过使`Transfer`实体不可变,graph-node能够更有效地处理实体,提高索引速度和查询响应能力。 -Immutable Entities structures will not change in the future. An ideal entity to become an Immutable Entity would be an entity that is directly logging onchain event data, such as a `Transfer` event being logged as a `Transfer` entity. +Immutable Entities structures will not change in the future.
An ideal entity to become an Immutable Entity would be an entity that is directly logging on-chain event data, such as a `Transfer` event being logged as a `Transfer` entity. -### Under the hood +### 在后台 -Mutable entities have a 'block range' indicating their validity. Updating these entities requires the graph node to adjust the block range of previous versions, increasing database workload. Queries also need filtering to find only live entities. Immutable entities are faster because they are all live and since they won't change, no checks or updates are required while writing, and no filtering is required during queries. +可变实体有一个`块范围`表示其有效性。更新这些实体需要graph节点调整以前版本的块范围,从而增加数据库工作负载。查询还需要筛选,以便只找到活动实体。不可变实体更快,因为它们都是活动的,而且不会改变,在写入时不需要检查或更新,在查询时也不需要筛选。 -### When not to use Immutable Entities +### 何时不使用不可变实体 -If you have a field like `status` that needs to be modified over time, then you should not make the entity immutable. Otherwise, you should use immutable entities whenever possible. +如果你有一个类似`status`的字段需要随着时间的推移而修改,那么你不应该让实体不可变。否则,您应该尽可能使用不可变实体。 -## Bytes as IDs +## 字节作为ID -Every entity requires an ID. In the previous example, we can see that the ID is already of the Bytes type. +每个实体都需要一个ID。在前面的示例中,我们可以看到ID已经是Bytes类型。 ```graphql type Transfer @entity(immutable: true) { @@ -45,19 +45,19 @@ type Transfer @entity(immutable: true) { } ``` -While other types for IDs are possible, such as String and Int8, it is recommended to use the Bytes type for all IDs due to character strings taking twice as much space as Byte strings to store binary data, and comparisons of UTF-8 character strings must take the locale into account which is much more expensive than the bytewise comparison used to compare Byte strings. +虽然其他类型的ID也是可能的,如String和Int8,但建议对所有ID使用Bytes类型,因为字符串存储二进制数据所需的空间是Byte字符串的两倍,并且UTF-8字符串的比较必须考虑到区域设置,这比用于比较Byte串的逐字节比较要昂贵得多。 -### Reasons to Not Use Bytes as IDs -1.
If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. -3. Indexing and querying performance improvements are not desired. +1. 如果实体ID必须是人类可读的,例如自动递增的数字ID或可读字符串,则不应使用Bytes作为ID。 +2. 如果将子图的数据与不使用Bytes作为ID的另一个数据模型集成,则不应使用Bytes作为ID。 +3. 如果不需要索引和查询性能方面的改进,则无需使用Bytes作为ID。 -### Concatenating With Bytes as IDs +### 以Bytes作为ID连接 -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +在许多子图中,使用字符串连接将事件的两个属性组合成一个ID是一种常见的做法,例如使用`event.transaction.hash.toHex() + "-" + event.logIndex.toString()`。然而,由于这会返回一个字符串,将严重阻碍子图索引和查询性能。 -Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. +相反,我们应该使用`concatI32()`方法来连接事件属性。此策略产生了性能更高的`Bytes`ID。 ```typescript export function handleTransfer(event: TransferEvent): void { @@ -74,11 +74,11 @@ export function handleTransfer(event: TransferEvent): void { } ``` -### Sorting With Bytes as IDs +### 以Bytes作为ID排序 -Sorting using Bytes as IDs is not optimal as seen in this example query and response. +如本示例查询和响应所示,使用字节作为ID进行排序不是最佳选择。 -Query: +查询: ```graphql { @@ -91,7 +91,7 @@ Query: } ``` -Query response: +查询响应: ```json { @@ -120,9 +120,9 @@ Query response: } ``` -The IDs are returned as hex. +ID以十六进制返回。 -To improve sorting, we should create another field on the entity that is a BigInt. +为了改进排序,我们应该在实体上创建另一个BigInt类型的字段。 ```graphql type Transfer @entity { @@ -134,9 +134,9 @@ type Transfer @entity { } ``` -This will allow for sorting to be optimized sequentially.
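The ordering problem described above is easy to demonstrate outside of graph-node. The sketch below is a hypothetical plain-TypeScript illustration (not subgraph code): hex-string IDs of differing lengths sort lexicographically rather than numerically, while a companion `bigint` field restores sequential order.

```typescript
// Hex-string IDs sort lexicographically, which only matches numeric
// order when every ID has the same length.
const hexIds = ["0x2", "0x0a", "0x10"]; // numerically: 2, 10, 16

// Plain string sort puts "0x0a" and "0x10" before "0x2" — not numeric order.
const lexicographic = [...hexIds].sort();

// Carrying the value as a bigint alongside the hex ID restores numeric order,
// mirroring the extra BigInt field suggested on the entity.
const rows = hexIds.map((id) => ({ id, blockNumber: BigInt(id) }));
rows.sort((a, b) =>
  a.blockNumber < b.blockNumber ? -1 : a.blockNumber > b.blockNumber ? 1 : 0,
);
const numeric = rows.map((r) => r.id);
```

Sorting on the numeric field yields `0x2, 0x0a, 0x10`, whereas the plain string sort yields `0x0a, 0x10, 0x2`.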
+这将允许按顺序优化排序。 -Query: +查询: ```graphql { @@ -147,7 +147,7 @@ Query: } ``` -Query Response: +查询响应: ```json { @@ -170,22 +170,22 @@ Query Response: } ``` -## Conclusion +## 结论 -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +使用Immutable Entities和Bytes作为ID已被证明可以显著提高子图效率。具体来说,测试表明查询性能最多可提高28%,索引速度最多可提高48%。 -Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). +在Edge&Node的软件工程师David Lutterkort的这篇博客文章中,可以阅读到更多关于使用Immutable Entities和Bytes作为ID的信息:[两个简单的子图性能改进](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/)。 -## Subgraph Best Practices 1-6 +## 子图最佳实践1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [通过子图修剪提高查询速度](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [使用@derivedFrom提高索引和查询响应能力](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [通过使用不可变实体和字节作为ID来提高索引和查询性能](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [通过避免`eth_calls`提高索引速度](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [通过时间序列和聚合进行简化和优化](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6.
[使用嫁接快速部署修补程序](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/zh/subgraphs/best-practices/pruning.mdx b/website/src/pages/zh/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..5b40343f9273 100644 --- a/website/src/pages/zh/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/zh/subgraphs/best-practices/pruning.mdx @@ -1,26 +1,26 @@ --- -title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +title: 子图最佳实践1-通过子图修剪提高查询速度 +sidebarTitle: 用indexerHints修剪 --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[修剪](/developing/creating-a-subgraph/#prune) 会从子图的数据库中删除截至给定区块的存档实体,从子图数据库中删除未使用的实体将提高子图的查询性能,通常会显著提高。使用`indexerHints`是修剪子图的一种简单方法。 -## How to Prune a Subgraph With `indexerHints` +## 如何用`indexerHints`修剪子图 -Add a section called `indexerHints` in the manifest. +在清单中添加一个名为`indexerHints`的部分。 -`indexerHints` has three `prune` options: +`indexerHints`有三个`prune`选项: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. -- `prune: `: Sets a custom limit on the number of historical blocks to retain. -- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired.
+- `prune: auto`:保留Indexer设置的最小必要历史记录,优化查询性能。这是通常推荐的设置,也是`graph-cli`>=0.66.0创建的所有子图的默认设置。 +- `prune: `: 对要保留的历史块数量设置自定义限制。 +- `prune: never`:不修剪历史数据;保留整个历史记录,如果没有`indexerHints`部分,则为默认设置。如果需要[Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries),则应选择`prune: never`。 -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +我们可以通过更新我们的`subgraph.yaml`将`indexerHints`添加到我们的子图中: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -31,26 +31,26 @@ dataSources: network: mainnet ``` -## Important Considerations +## 重要注意事项 -- If [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries. Instead, prune using `indexerHints: prune: ` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data. +- 如果同样需要[Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) 和修剪,则必须准确执行修剪以保留Time Travel Query功能。因此,通常不建议将`indexerHints: prune: auto`与Time Travel Queries一起使用。相反,应使用`indexerHints: prune: `精确修剪到保留Time Travel Queries所需历史数据的块高度,或使用`prune: never`保留所有数据。 -- It is not possible to [graft](/subgraphs/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: ` that will accurately retain a set number of blocks (e.g., enough for six months). +- 在修剪过的块高度进行[嫁接](/subgraphs/cookbook/grafting/)是不可能的。如果经常进行嫁接并且需要修剪,建议使用`indexerHints: prune: `,这将准确地保留一定数量的块(例如,足够六个月的数量)。 -## Conclusion +## 结论 -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements.
+使用`indexerHints`进行修剪是子图开发的最佳实践,可以显著提高查询性能。 -## Subgraph Best Practices 1-6 +## 子图最佳实践1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [通过子图修剪提高查询速度](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [使用@derivedFrom提高索引和查询响应能力](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [通过使用不可变实体和字节作为ID来提高索引和查询性能](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [通过避免`eth_calls`提高索引速度](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [通过时间序列和聚合进行简化和优化](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [使用嫁接快速部署修补程序](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/zh/subgraphs/best-practices/timeseries.mdx b/website/src/pages/zh/subgraphs/best-practices/timeseries.mdx index 2197763ae9f0..c18b61169631 100644 --- a/website/src/pages/zh/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/zh/subgraphs/best-practices/timeseries.mdx @@ -1,49 +1,53 @@ --- -title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +title: 子图最佳实践5-使用时间序列和聚合进行简化和优化 +sidebarTitle: 时间序列和聚合 --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. 
+利用子图中新的时间序列和聚合功能可以显著提高索引速度和查询性能。 ## 概述 -Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. +时间序列和聚合通过将聚合计算卸载到数据库和简化映射代码来减少数据处理开销并加速查询。这种方法在处理大量基于时间的数据时特别有效。 -## Benefits of Timeseries and Aggregations +## 时间序列和聚合的好处 -1. Improved Indexing Time +1. 缩短了索引时间 -- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. -- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. +- 要加载的数据更少:映射处理的数据更少,因为原始数据点存储为不可变的时间序列实体。 +- 数据库管理聚合:聚合由数据库自动计算,减少了映射的工作量。 -2. Simplified Mapping Code +2. 简化映射代码 -- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. -- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. +- 无需手动计算:开发人员不再需要在映射中编写复杂的聚合逻辑。 +- 降低复杂性:简化代码维护,最大限度地减少错误的可能性。 -3. Dramatically Faster Queries +3. 查询速度快得多 -- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. -- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. +- 不可变数据:所有时间序列数据都是不可变的,可以实现高效的存储和检索。 +- 高效的数据分离:聚合与原始时间序列数据分开存储,使查询处理的数据大大减少,通常减少几个数量级。 -### Important Considerations +### 重要注意事项 -- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. -- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. -- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. 
+- 不可变数据:时间序列数据一旦写入就不能更改,从而确保数据完整性并简化索引。 +- 自动ID和时间戳管理:id和timestamp字段由graph-node自动管理,减少了潜在的错误。 +- 高效的数据存储:通过将原始数据与聚合分离,存储得到优化,查询运行速度更快。 -## How to Implement Timeseries and Aggregations +## 如何实现时间序列和聚合 -### Defining Timeseries Entities +### 先决条件 -A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: +此功能需要`specVersion: 1.1.0`。 -- Immutable: Timeseries entities are always immutable. -- Mandatory Fields: - - `id`: Must be of type `Int8!` and is auto-incremented. - - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. +### 定义时间序列实体 + +时间序列实体表示随时间收集的原始数据点。它是用`@entity(timeseries: true)`注释定义的。关键要求: + +- 不可变:时间序列实体始终是不可变的。 +- 必填字段: + - `id`:必须是`Int8!`类型并且自动递增。 + - `timestamp`:必须是`Timestamp!`类型并自动设置为块时间戳。 例子: @@ -51,16 +55,16 @@ A timeseries entity represents raw data points collected over time. It is define type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` -### Defining Aggregation Entities +### 定义聚合实体 -An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: +聚合实体从时间序列源计算聚合值。它是用`@aggregation`注释定义的。关键组成部分: -- Annotation Arguments: - - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). +- 注释参数: + - `intervals`:指定时间间隔(例如,`["hour", "day"]`)。 例子: @@ -68,15 +72,15 @@ An aggregation entity computes aggregated values from a timeseries source. It is type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum.
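To make the `Stats` aggregation above concrete, here is a hypothetical plain-TypeScript sketch of what an hourly `sum` interval computes. In a real subgraph the database performs this work; the bucketing logic, `DataPoint` shape, and sample values below are illustrative assumptions only.

```typescript
interface DataPoint {
  timestamp: number; // seconds since epoch, like a block timestamp
  amount: number;
}

// Group raw timeseries points into hourly buckets and sum `amount`,
// mimicking what an `@aggregation(intervals: ["hour"], ...)` entity stores.
function hourlySums(points: DataPoint[]): Map<number, number> {
  const buckets = new Map<number, number>();
  for (const p of points) {
    const hourStart = Math.floor(p.timestamp / 3600) * 3600;
    buckets.set(hourStart, (buckets.get(hourStart) ?? 0) + p.amount);
  }
  return buckets;
}

const points: DataPoint[] = [
  { timestamp: 7200, amount: 1.5 },
  { timestamp: 7260, amount: 2.5 }, // same hour as the point above
  { timestamp: 10800, amount: 4.0 }, // next hour
];
const sums = hourlySums(points); // two buckets: hour 7200 and hour 10800
```

Queries against the aggregation then read these precomputed per-interval rows instead of scanning every raw `Data` point.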
+在这个例子中,Stats在每小时和每天的时间间隔内聚合Data中的amount字段,计算总和。 -### Querying Aggregated Data +### 查询聚合数据 -Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. +聚合通过查询字段公开,这些字段允许基于维度和时间间隔进行过滤和检索。 例子: @@ -98,13 +102,13 @@ Aggregations are exposed via query fields that allow filtering and retrieval bas } ``` -### Using Dimensions in Aggregations +### 在聚合中使用维度 -Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. +维度是用于对数据点进行分组的非聚合字段。它们支持基于特定标准的聚合,例如金融应用程序中的代币。 例子: -### Timeseries Entity +### 时间序列实体 ```graphql type TokenData @entity(timeseries: true) { @@ -116,7 +120,7 @@ type TokenData @entity(timeseries: true) { } ``` -### Aggregation Entity with Dimension +### 带维度的聚合实体 ```graphql type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { @@ -129,67 +133,67 @@ type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { } ``` -- Dimension Field: token groups the data, so aggregates are computed per token. -- Aggregates: - - totalVolume: Sum of amount. - - priceUSD: Last recorded priceUSD. - - count: Cumulative count of records. +- 维度字段:token对数据进行分组,因此按每个代币计算聚合。 +- 聚合: + - totalVolume:amount的总和。 + - priceUSD:最后记录的priceUSD。 + - count:记录的累计计数。 -### Aggregation Functions and Expressions +### 聚合函数和表达式 -Supported aggregation functions: +支持的聚合函数: -- sum -- count -- min -- max -- first -- last +- sum(求和) +- count(计数) +- min(最小值) +- max(最大值) +- first(第一个值) +- last(最后一个值) -### The arg in @aggregate can be +### `@aggregate`中的arg可以是 -- A field name from the timeseries entity. -- An expression using fields and constants.
+- 时间序列实体中的字段名。 +- 使用字段和常量的表达式。 -### Examples of Aggregation Expressions +### 聚合表达式示例 -- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \_ amount") -- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") -- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") +- 代币总值: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- 最大正金额: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- 条件总和: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") -Supported operators and functions include basic arithmetic (+, -, \_, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. +支持的运算符和函数包括基本算术(+、-、\*、/)、比较运算符、逻辑运算符(and、or、not)以及greatest、least、coalesce等SQL函数。 -### Query Parameters +### 查询参数 -- interval: Specifies the time interval (e.g., "hour"). -- where: Filters based on dimensions and timestamp ranges. -- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). +- interval:指定时间间隔(例如,"hour")。 +- where:基于维度和时间戳范围的筛选器。 +- timestamp_gte/timestamp_lt:开始和结束时间的筛选器(自Unix纪元以来的微秒数)。 -### Notes +### 注意 -- Sorting: Results are automatically sorted by timestamp and id in descending order. -- Current Data: An optional current argument can include the current, partially filled interval. +- 排序:结果自动按timestamp和id降序排序。 +- 当前数据:可选的current参数可以包括当前尚未结束(部分填充)的间隔。 -### Conclusion +### 结论 -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +在子图中实现时间序列和聚合是处理基于时间的数据的项目的最佳实践。这种方法: -- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. -- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. -- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
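The aggregation expressions listed above map directly onto ordinary reductions. The hypothetical plain-TypeScript sketch below (illustration only; in a subgraph the database evaluates these as SQL) mirrors two of them: `max(greatest(amount0, amount1, 0))` and the conditional `sum(case when amount0 > amount1 then amount0 else 0 end)`.

```typescript
interface Row {
  amount0: number;
  amount1: number;
}

// Sample rows standing in for timeseries points within one interval.
const rows: Row[] = [
  { amount0: 5, amount1: -2 },
  { amount0: -3, amount1: 4 },
  { amount0: 2, amount1: 7 },
];

// max(greatest(amount0, amount1, 0)): the largest non-negative value seen.
const maxPositive = rows.reduce(
  (acc, r) => Math.max(acc, r.amount0, r.amount1, 0),
  0,
);

// sum(case when amount0 > amount1 then amount0 else 0 end):
// only rows where amount0 exceeds amount1 contribute amount0.
const conditionalSum = rows.reduce(
  (acc, r) => acc + (r.amount0 > r.amount1 ? r.amount0 : 0),
  0,
);
```

For the sample rows, `maxPositive` is 7 and `conditionalSum` is 5 (only the first row satisfies the condition).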
+- 提高性能:通过减少数据处理开销来加快索引和查询。 +- 简化开发:消除了在映射中手动聚合逻辑的需要。 +- 高效扩展:在不影响速度或响应性的情况下处理大量数据。 -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +通过采用这种模式,开发人员可以构建更高效、更可扩展的子图,为最终用户提供更快、更可靠的数据访问。要了解有关实现时间序列和聚合的更多信息,请参阅[时间序列和聚合自述文件](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md),并考虑在子图中尝试此功能。 -## Subgraph Best Practices 1-6 +## 子图最佳实践1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [通过子图修剪提高查询速度](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [使用@derivedFrom提高索引和查询响应能力](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [通过使用不可变实体和字节作为ID来提高索引和查询性能](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [通过避免`eth_calls`提高索引速度](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [通过时间序列和聚合进行简化和优化](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6.
[使用嫁接快速部署修补程序](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/zh/subgraphs/billing.mdx b/website/src/pages/zh/subgraphs/billing.mdx index 985cc1679f23..d352f51c8b09 100644 --- a/website/src/pages/zh/subgraphs/billing.mdx +++ b/website/src/pages/zh/subgraphs/billing.mdx @@ -2,213 +2,215 @@ title: 计费 --- -## Querying Plans +## 查询计划 -There are two plans to use when querying subgraphs on The Graph Network. +在The Graph网络上查询子图时,有两种计划可以使用。 -- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. +- **免费计划**:免费计划包括每月100000次免费查询,可完全访问Subgraph Studio测试环境。这个计划是为业余爱好者、黑客马拉松者和那些有副项目的人设计的,他们可以在扩展他们的dapp之前尝试The Graph。 -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **增长计划**:增长计划包括免费计划中的所有内容,以及每月100000次查询后需要使用GRT或信用卡付款的所有查询。增长计划足够灵活,可以覆盖在各种用例中建立dapp的团队。 + +您可以在[这里](https://thegraph.com/studio-pricing/)了解有关定价的更多信息。 -## Query Payments with credit card +## 使用信用卡付款 -- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) - 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +- 要使用信用卡/借记卡设置账单,用户应访问Subgraph Studio(https://thegraph.com/studio/)。 + 1. 转到 [Subgraph Studio计费页面](https://thegraph.com/studio/subgraphs/billing/)。 2. 单击页面右上角的“Connect Wallet”(连接钱包)按钮。您将被重定向到钱包选择页面。选择您的钱包,然后单击“Connect”(连接)。 - 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past.
Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. - 4. To choose a credit card payment, choose “Credit card” as the payment method and fill out your credit card information. Those who have used Stripe before can use the Link feature to autofill their details. -- Invoices will be processed at the end of each month and require an active credit card on file for all queries beyond the free plan quota. + 3. 如果您从免费计划升级,请选择“升级计划”,如果您过去已经将GRT添加到您的计费余额中,则选择“管理计划”。接下来,您可以估计查询次数以获得定价估计,但这不是必需的步骤。 + 4. 要选择信用卡付款,请选择“信用卡”作为付款方式,并填写您的信用卡信息。以前使用过Stripe的人可以使用链接功能自动填充他们的详细信息。 +- 发票将在每月月底处理,对于超出免费计划配额的所有查询,都需要一张有效的信用卡存档。 -## Query Payments with GRT +## 使用GRT查询付款 -Subgraph users can use The Graph Token (or GRT) to pay for queries on The Graph Network. With GRT, invoices will be processed at the end of each month and require a sufficient balance of GRT to make queries beyond the Free Plan quota of 100,000 monthly queries. You'll be required to pay fees generated from your API keys. Using the billing contract, you'll be able to: +Subgraph用户可以使用The Graph代币(或GRT)在The Graph网络上支付查询费用。使用GRT,发票将在每月末处理,并需要足够的GRT余额来进行超过每月100000次查询的免费计划配额的查询。您将被要求支付API密钥产生的费用。使用计费合约,您将能够: - 从您的账户余额中添加和提取GRT。 - 根据您的账户添加的GRT数量、移除的数量和发票,跟踪您的余额。 - 只要您的账单余额中有足够的GRT,就可以根据生成的查询费用自动支付发票。 -### GRT on Arbitrum or Ethereum +### Arbitrum或以太坊上的GRT -The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. +The Graph的计费系统在Arbitrum上接受GRT,用户将需要Arbitrum上的ETH来支付燃气费。虽然The Graph协议始于以太坊主网,但所有活动,包括计费合约,现在都在Arbitrum One上。 -To pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this: +因此,要为查询付费,您需要Arbitrum上的GRT。以下是实现这一目标的几种不同方法: -- If you already have GRT on Ethereum, you can bridge it to Arbitrum.
You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges: +- 如果你已经在以太坊上有了GRT,你可以将其桥接到Arbitrum。您可以通过Subgraph Studio中提供的GRT桥接选项或使用以下桥接器之一来完成此操作: -- [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) +- [Arbitrum桥](https://bridge.arbitrum.io/?l2ChainId=42161) - [TransferTo](https://transferto.xyz/swap) -- If you already have assets on Arbitrum, you can swap them for GRT via a swapping protocol like Uniswap. +- 如果你在Arbitrum上已有资产,你可以通过Uniswap等交换协议将它们交换为GRT。 -- Alternatively, you acquire GRT directly on Arbitrum through a decentralized exchange. +- 或者,您可以通过去中心化交易所直接在Arbitrum上获得GRT。 -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt). +> 本节假设您的加密钱包中已经有GRT,并且您在Arbitrum上。如果你没有GRT,你可以在[这里](#getting-grt)学习如何获得GRT。 -Once you bridge GRT, you can add it to your billing balance. +一旦您桥接了GRT,您就可以将其添加到您的计费余额中。 -### Adding GRT using a wallet +### 使用加密钱包添加GRT -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. 转到 [Subgraph Studio计费页面](https://thegraph.com/studio/subgraphs/billing/)。 2. 单击页面右上角的“Connect Wallet”(连接钱包)按钮。您将被重定向到钱包选择页面。选择您的钱包,然后单击“Connect”(连接)。 -3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet". -4. Use the slider to estimate the number of queries you expect to make on a monthly basis. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. -5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. -6. Select the number of months you would like to prepay. - - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. -7.
Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. -8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. -9. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. - -- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance. - -### Withdrawing GRT using a wallet - -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. -4. Enter the amount of GRT you would like to withdraw. -5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. -6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. +3. 选择右上角附近的“管理”按钮。首次用户将看到“升级到增长计划”的选项,而回头客将点击“从钱包存款”。 +4. 使用滑块估算您每月预计进行的查询数量。 + - 有关您可以使用的查询数量的建议,请参阅我们的**常见问题**页面。 +5. 选择“加密货币”。GRT目前是The Graph网络上唯一接受的加密货币。 +6. 选择您要预付的月数。 + - 提前付款并不会约束您未来的使用量。您只需支付您使用的费用,您可以随时提取余额。 +7. 选择您存入GRT的网络。Arbitrum或以太坊上的GRT都是可以接受的。 +8. 点击“允许GRT访问”,然后指定可以从钱包中提取的GRT金额。 + - 如果您预付了几个月,您必须允许访问与该金额相对应的金额。此操作不会消耗任何燃气费。 +9. 最后,点击“将GRT添加到计费余额”。这笔交易将需要Arbitrum上的ETH来支付燃气费。 +
+- 请注意,从Arbitrum存入的GRT将在几分钟内处理,而从以太坊存入的GRT大约需要15到20分钟才能处理。交易确认后,您将看到GRT添加到您的计费余额中。 +
+### 使用加密钱包提取GRT
+
+1.
转到 [Subgraph Studio计费页面](https://thegraph.com/studio/subgraphs/billing/)。
+2. 单击页面右上角的“Connect Wallet”(连接钱包)按钮。选择您的钱包,然后单击“Connect”(连接)。
+3. 单击页面右上角的“管理”按钮。选择“提取GRT”。将出现一个侧面板。
+4. 输入您要提取的GRT金额。
+5. 单击“提取GRT”,从您的账单余额中提取GRT。在您的钱包中签署相关交易。这将耗费燃气费。GRT将发送到您的Arbitrum钱包。
+6. 一旦交易被确认,您将看到GRT从您的账单余额中提取,进入您的Arbitrum钱包。

### 使用多签钱包添加GRT

-1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/).
-2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas.
-3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet".
-4. Use the slider to estimate the number of queries you expect to make on a monthly basis.
- - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page.
-5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network.
-6. Select the number of months you would like to prepay.
- - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time.
-7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet.
- - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas.
-8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs.
+1. 转到 [Subgraph Studio计费页面](https://thegraph.com/studio/subgraphs/billing/)。
+2.
点击页面右上角的“连接钱包”按钮。选择您的钱包,然后单击“连接”。如果您使用的是[Gnosis-Safe](https://gnosis-safe.io/),您将能够连接您的多重签名和签名钱包。然后,在相关消息上签名。这不会花费任何燃气费。
+3. 选择右上角附近的“管理”按钮。首次使用的用户将看到“升级到增长计划”的选项,回访用户则点击“从钱包存款”。
+4. 使用滑块估算您每月预计进行的查询数量。
+ - 有关您可以使用的查询数量的建议,请参阅我们的**常见问题**页面。
+5. 选择“加密货币”。GRT目前是The Graph网络上唯一接受的加密货币。
+6. 选择您要预付的月数。
+ - 提前付款并不代表您对未来使用量做出承诺。您只需支付您使用的费用,并且可以随时提取余额。
+7. 选择您存入GRT的网络。Arbitrum或以太坊上的GRT都是可以接受的。
+8. 点击“允许GRT访问”,然后指定可以从您的钱包中提取的GRT金额。
+ - 如果您预付了几个月,您必须允许访问与之对应的金额。此交互不会消耗任何燃气费。
+9. 最后,点击“将GRT添加到计费余额”。这笔交易将需要Arbitrum上的ETH来支付燃气费。

-- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance.
+- 请注意,从Arbitrum存入的GRT将在几分钟内处理,而从以太坊存入的GRT大约需要15到20分钟才能处理。交易确认后,您将看到GRT添加到您的计费余额中。

-## Getting GRT
+## 获取GRT

-This section will show you how to get GRT to pay for query fees.
+本节将向您展示如何获取GRT以支付查询费用。

### Coinbase

-This will be a step by step guide for purchasing GRT on Coinbase.
-
-1. Go to [Coinbase](https://www.coinbase.com/) and create an account.
-2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
-3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy/Sell" button on the top right of the page.
-4. Select the currency you want to purchase. Select GRT.
-5. Select the payment method. Select your preferred payment method.
-6. Select the amount of GRT you want to purchase.
-7. Review your purchase. Review your purchase and click "Buy GRT".
-8. Confirm your purchase. Confirm your purchase and you will have successfully purchased GRT.
-9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/).
- - To transfer the GRT to your wallet, click on the "Accounts" button on the top right of the page.
- - Click on the "Send" button next to the GRT account.
- - Enter the amount of GRT you want to send and the wallet address you want to send it to.
- - Click "Continue" and confirm your transaction.
-Please note that for larger purchase amounts, Coinbase may require you to wait 7-10 days before transferring the full amount to a wallet.
+这将是在Coinbase上购买GRT的分步指南。

-You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency).
+1. 转到 [Coinbase](https://www.coinbase.com/) 并创建一个账户。
+2. 创建账户后,您需要通过KYC(或了解您的客户)流程验证您的身份。这是所有中心化或托管加密交易所的标准程序。
+3. 一旦您验证了自己的身份,就可以购买GRT。您可以通过单击页面右上方的“买入/卖出”按钮来完成此操作。
+4. 选择要购买的货币。选择GRT。
+5. 选择付款方式。选择您的首选付款方式。
+6. 选择您要购买的GRT数量。
+7. 查看您的购买。查看您的购买并单击“购买GRT”。
+8. 确认您的购买。确认您的购买,您将成功购买GRT。
+9. 您可以将GRT从您的帐户转移到您的钱包,如 [MetaMask](https://metamask.io/)。
+ - 要将GRT转移到您的加密钱包,请单击页面右上方的“账户”按钮。
+ - 单击GRT账户旁边的“发送”按钮。
+ - 输入您要发送的GRT金额和您要发送到的钱包地址。
+ - 单击“继续”并确认您的交易。
+
+请注意,对于较大的购买金额,Coinbase可能需要您等待7到10天,然后才能将全部金额转移到加密钱包。
+
+您可以在[此处](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency)了解有关在Coinbase上获取GRT的更多信息。

### Binance

-This will be a step by step guide for purchasing GRT on Binance.
-
-1. Go to [Binance](https://www.binance.com/en) and create an account.
-2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
-3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy Now" button on the homepage banner.
-4. You will be taken to a page where you can select the currency you want to purchase. Select GRT.
-5. Select your preferred payment method.
You'll be able to pay with different fiat currencies such as Euros, US Dollars, and more.
-6. Select the amount of GRT you want to purchase.
-7. Review your purchase and click "Buy GRT".
-8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet.
-9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/).
- - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist.
- - Click on the "wallet" button, click withdraw, and select GRT.
- - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to.
+这将是在Binance上购买GRT的分步指南。
+
+1. 转到[Binance](https://www.binance.com/en) 并创建一个账户。
+2. 创建账户后,您需要通过KYC(或了解您的客户)流程验证您的身份。这是所有中心化或托管加密交易所的标准程序。
+3. 一旦您验证了自己的身份,就可以购买GRT。您可以通过单击主页横幅上的“立即购买”按钮来完成此操作。
+4. 您将转到一个可以选择要购买的货币的页面,选择GRT。
+5. 选择您的首选付款方式。您可以使用不同的法定货币支付,如欧元、美元等。
+6. 选择您要购买的GRT数量。
+7. 查看您的购买并单击“购买GRT”。
+8. 确认您的购买,您将能够在Binance现货钱包中看到您的GRT。
+9. 您可以将GRT从您的帐户转移到您的钱包,如 [MetaMask](https://metamask.io/)。
+ - [要提取](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) GRT到您的钱包, 需将您的钱包地址添加到提款白名单中。
+ - 单击“钱包”按钮,单击提取,然后选择GRT。
+ - 输入您要发送的GRT金额和您要发送到的白名单钱包地址。
- 单击“继续”并确认您的交易。

-You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582).
+您可以在[此处](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582)了解有关在Binance上获取GRT的更多信息。

### Uniswap

-This is how you can purchase GRT on Uniswap.
+这是您在Uniswap上购买GRT的方式。

-1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet.
-2. Select the token you want to swap from. Select ETH.
-3. Select the token you want to swap to.
Select GRT.
- - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7)
-4. Enter the amount of ETH you want to swap.
-5. Click "Swap".
-6. Confirm the transaction in your wallet and you wait for the transaction to process.
+1. 转到[Uniswap](https://app.uniswap.org/swap?chain=arbitrum) 并连接您的钱包。
+2. 选择要换出的代币。选择ETH。
+3. 选择要换入的代币。选择GRT。
+ - 确保您正在交换正确的代币。Arbitrum One上的GRT智能合约地址为:[0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7)
+4. 输入要交换的ETH数量。
+5. 单击“交换”。
+6. 确认钱包中的交易,然后等待交易处理。

-You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-).
+您可以在[此处](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-)了解有关在Uniswap上获取GRT的更多信息。

-## Getting Ether
+## 获取以太币

-This section will show you how to get Ether (ETH) to pay for transaction fees or gas costs. ETH is necessary to execute operations on the Ethereum network such as transferring tokens or interacting with contracts.
+这一部分将向您展示如何获取以太币(ETH)以支付交易费用或燃气费用。在以太坊网络上执行操作,例如转移代币或与智能合约交互,都需要以太币(ETH)。

### Coinbase

-This will be a step by step guide for purchasing ETH on Coinbase.
+这将是在Coinbase上购买ETH的分步指南。

-1. Go to [Coinbase](https://www.coinbase.com/) and create an account.
+1. 转到[Coinbase](https://www.coinbase.com/) 并创建一个账户。
2. 创建账户后,您需要通过KYC(或了解您的客户)流程验证您的身份。这是所有中心化或托管加密交易所的标准程序。
-3. Once you have verified your identity, purchase ETH by clicking on the "Buy/Sell" button on the top right of the page.
+3. 一旦您验证了自己的身份,就可以购买ETH。您可以通过单击页面右上方的“买入/卖出”按钮来完成此操作。
4. 选择要购买的货币。选择ETH。
-5. Select your preferred payment method.
-6. Enter the amount of ETH you want to purchase.
-7. Review your purchase and click "Buy ETH".
-8. Confirm your purchase and you will have successfully purchased ETH.
-9.
You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/).
- - To transfer the ETH to your wallet, click on the "Accounts" button on the top right of the page.
- - Click on the "Send" button next to the ETH account.
- - Enter the amount of ETH you want to send and the wallet address you want to send it to.
- - Ensure that you are sending to your Ethereum wallet address on Arbitrum One.
+5. 选择你偏好的支付方式。
+6. 输入要购买的ETH金额。
+7. 查看您的购买并单击“购买ETH”。
+8. 确认您的购买,您将成功购买ETH。
+9. 您可以将ETH从您的帐户转移到您的钱包,如 [MetaMask](https://metamask.io/)。
+ - 要将ETH转移到您的加密钱包,请单击页面右上方的“账户”按钮。
+ - 单击ETH账户旁边的“发送”按钮。
+ - 输入您要发送的ETH金额和您要发送到的钱包地址。
+ - 确保您正在发送到Arbitrum One上的以太坊钱包地址。
- 单击“继续”并确认您的交易。

-You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency).
+您可以在[此处](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency)了解有关在Coinbase上获取ETH的更多信息。

### Binance

-This will be a step by step guide for purchasing ETH on Binance.
+这将是在Binance上购买ETH的分步指南。

-1. Go to [Binance](https://www.binance.com/en) and create an account.
+1. 转到[Binance](https://www.binance.com/en) 并创建一个账户。
2. 创建账户后,您需要通过KYC(或了解您的客户)流程验证您的身份。这是所有中心化或托管加密交易所的标准程序。
3. 一旦您完成了身份验证,您可以通过在首页横幅上点击“立即购买”按钮来购买ETH。
4. 选择要购买的货币。选择ETH。
-5. Select your preferred payment method.
-6. Enter the amount of ETH you want to purchase.
-7. Review your purchase and click "Buy ETH".
+5. 选择你偏好的支付方式。
+6. 输入要购买的ETH金额。
+7. 查看您的购买并单击“购买ETH”。
8. 确认您的购买,您将能够在Binance现货钱包中看到您的ETH。
-9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/).
- - To withdraw the ETH to your wallet, add your wallet's address to the withdrawal whitelist.
+9.
您可以将ETH从您的帐户转移到您的钱包,如 [MetaMask](https://metamask.io/)。
+ - 想要将ETH提现到你的加密钱包,先将你的加密钱包地址添加到提现白名单。
- 单击“钱包”按钮,单击提取,然后选择ETH。
- 输入您要发送的ETH金额和您要发送到的白名单钱包地址。
- - Ensure that you are sending to your Ethereum wallet address on Arbitrum One.
+ - 确保您正在发送到Arbitrum One上的以太坊钱包地址。
- 单击“继续”并确认您的交易。

-You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582).
+您可以在[此处](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582)了解有关在Binance上获取ETH的更多信息。

-## Billing FAQs
+## 计费常见问题

-### How many queries will I need?
+### 我需要多少个查询?

-You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time.
+您不需要提前知道需要多少次查询。您只需支付您使用的费用,并且可以随时从您的账户中提取GRT。

-We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening.
+我们建议您高估所需的查询次数,这样您就不必经常充值余额。对于中小型应用程序,一个合理的估计是从每月100万至200万次查询开始,并在最初几周密切监控使用情况。对于较大的应用程序,一个合理的估计是用您的网站每天的访问次数乘以您最活跃的页面在打开时发出的查询次数。

-Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage.
+当然,新用户和现有用户都可以联系Edge & Node的BD团队进行咨询,以了解更多关于预期使用情况的信息。

-### Can I withdraw GRT from my billing balance?
+### 我能从我的计费余额中提取GRT吗?

-Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network.
If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161).
+是的,您始终可以从计费余额中提取尚未用于查询的GRT。计费合约仅用于将GRT从以太坊主网桥接到Arbitrum网络。如果您想将GRT从Arbitrum转移回以太坊主网,您需要使用[Arbitrum桥](https://bridge.arbitrum.io/?l2ChainId=42161)。

-### What happens when my billing balance runs out? Will I get a warning?
+### 当我的计费余额用完时会发生什么?我会得到警告吗?

-You will receive several email notifications before your billing balance runs out.
+在您的计费余额用完之前,您将收到几封电子邮件通知。
diff --git a/website/src/pages/zh/subgraphs/developing/_meta-titles.json b/website/src/pages/zh/subgraphs/developing/_meta-titles.json
index 01a91b09ed77..eb59cb18f79a 100644
--- a/website/src/pages/zh/subgraphs/developing/_meta-titles.json
+++ b/website/src/pages/zh/subgraphs/developing/_meta-titles.json
@@ -1,6 +1,6 @@
{
-  "creating": "Creating",
-  "deploying": "Deploying",
-  "publishing": "Publishing",
-  "managing": "Managing"
+  "creating": "创建",
+  "deploying": "部署",
+  "publishing": "发布",
+  "managing": "管理"
}
diff --git a/website/src/pages/zh/subgraphs/developing/creating/_meta-titles.json b/website/src/pages/zh/subgraphs/developing/creating/_meta-titles.json
index 6106ac328dc1..dbfff81eca64 100644
--- a/website/src/pages/zh/subgraphs/developing/creating/_meta-titles.json
+++ b/website/src/pages/zh/subgraphs/developing/creating/_meta-titles.json
@@ -1,3 +1,3 @@
{
-  "graph-ts": "AssemblyScript API"
+  "graph-ts": "AssemblyScript API"
}
diff --git a/website/src/pages/zh/subgraphs/developing/creating/advanced.mdx b/website/src/pages/zh/subgraphs/developing/creating/advanced.mdx
index 5f86707e106c..e279b181ffb2 100644
--- a/website/src/pages/zh/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/zh/subgraphs/developing/creating/advanced.mdx
@@ -1,43 +1,43 @@
---
-title: Advanced Subgraph Features
+title: 高级子图功能
---

## 概述

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+添加并实现高级子图功能,以增强子图的构建。

-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+从 `specVersion` `0.0.4` 开始,子图功能必须使用它们的 `camelCase` 名称,在清单文件顶层的 `features` 部分中显式声明,如下表所列:

-| Feature | Name |
-| ---------------------------------------------------- | ---------------- |
-| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` |
-| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
-| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
+| 功能 | 名称 |
+| -------------------------------------------- | ---------------- |
+| [非致命错误](#non-fatal-errors) | `nonFatalErrors` |
+| [全文搜索](#defining-fulltext-search-fields) | `fullTextSearch` |
+| [嫁接](#grafting-onto-existing-subgraphs) | `grafting` |

-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+例如,如果子图使用 **Full-Text Search** 和 **Non-fatal Errors** 功能,则清单中的 `features` 字段应为:

```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
  - fullTextSearch
  - nonFatalErrors
-dataSources: ...
+dataSources: ...
```

-> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+> 请注意,在子图部署期间使用未声明的功能会导致**验证错误**,但如果声明了功能但未使用,则不会出现错误。

-## Timeseries and Aggregations
+## 时间序列和聚合

-Prerequisites:
+先决条件:

-- Subgraph specVersion must be ≥1.1.0.
+- 子图规范版本必须≥1.1.0。

-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more.
+时间序列和聚合使您的子图能够跟踪每日平均价格、每小时总转账等统计数据。

-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps.
Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
+此功能引入了两种新型子图实体。时间序列实体用时间戳记录数据点。聚合实体每小时或每天对时间序列数据点执行预先声明的计算,然后存储结果,以便通过GraphQL轻松访问。

-### Example Schema
+### 示例模式

```graphql
type Data @entity(timeseries: true) {
@@ -53,35 +53,35 @@ type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
}
```

-### How to Define Timeseries and Aggregations
+### 如何定义时间序列和聚合

-Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must:
+时间序列实体在GraphQL模式中用`@entity(timeseries: true)`定义。每个时间序列实体必须:

-- have a unique ID of the int8 type
-- have a timestamp of the Timestamp type
-- include data that will be used for calculation by aggregation entities.
+- 具有int8类型的唯一ID
+- 具有Timestamp类型的时间戳
+- 包括将由聚合实体用于计算的数据

-These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities.
+这些时间序列实体可以保存在常规触发器处理程序中,并作为聚合实体的“原始数据”。

-Aggregation entities are defined with `@aggregation` in the GraphQL schema. Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last).
+聚合实体在GraphQL模式中用`@aggregation`定义。每个聚合实体都定义了它将从中收集数据的源(必须是时间序列实体),设置间隔(例如小时、天),并指定它将使用的聚合函数(例如总和、计数、最小值、最大值、第一个、最后一个)。

-Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval.
+聚合实体在所需间隔结束时根据指定的源自动计算。

-#### Available Aggregation Intervals
+#### 可用聚合间隔

-- `hour`: sets the timeseries period every hour, on the hour.
-- `day`: sets the timeseries period every day, starting and ending at 00:00.
+- `hour`:按小时设置时间序列周期,整点对齐。
+- `day`:按天设置时间序列周期,从00:00开始到00:00结束。

-#### Available Aggregation Functions
+#### 可用聚合功能

-- `sum`: Total of all values.
-- `count`: Number of values.
-- `min`: Minimum value.
-- `max`: Maximum value.
-- `first`: First value in the period.
-- `last`: Last value in the period.
+- `min`:最小值。
+- `max`:最大值。
+- `first`:周期中的第一个值。
+- `last`:周期中的最后一个值。

-#### Example Aggregations Query
+#### 示例聚合查询

```graphql
{
@@ -93,25 +93,25 @@ Aggregation entities are automatically calculated on the basis of the specified
}
```

-[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations.
+[阅读更多](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md)关于时间序列和聚合的信息。

## 非致命错误

在默认情况下,已同步子图上的索引错误会导致子图失败并停止同步。 子图也可以配置为忽略引发错误的处理程序所做的更改, 在出现错误时继续同步。 这使子图作者有时间更正他们的子图,同时继续针对最新区块提供查询,尽管由于导致错误的代码问题,结果可能会不一致。 请注意,某些错误仍然总是致命的,要成为非致命错误,首先需要确定相应的错误是确定性的错误。

-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **注意:**The Graph网络尚不支持非致命错误,开发人员不应通过Studio将使用该功能的子图部署到网络。

启用非致命错误需要在子图清单上设置以下功能标志:

```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
  - nonFatalErrors
-  ...
+  ...
```

-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument.
It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+查询还必须通过 `subgraphError` 参数选择查询可能存在不一致的数据。还建议查询 `_meta` 以检查子图是否跳过错误,如示例:

```graphql
foos(first: 100, subgraphError: allow) {
@@ -123,7 +123,7 @@ _meta {
}
```

-If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:
+如果子图遇到错误,则查询将返回数据和带有消息 `"indexing_error"` 的 graphql 错误,如以下示例响应所示:

```graphql
"data": {
@@ -143,7 +143,7 @@ If the subgraph encounters an error, that query will return both the data and a
]
```

-## IPFS/Arweave File Data Sources
+## IPFS/Arweave文件数据源

文件数据源是一种新的子图功能,用于以稳健、可扩展的方式在索引期间访问链下数据。文件数据源支持从IPFS和Arweave获取文件。

@@ -151,17 +151,17 @@ If the subgraph encounters an error, that query will return both the data and a

### 概述

-Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found.
+这种方式不是在处理程序执行期间“内联”获取文件,而是引入了可以针对给定文件标识符作为新数据源生成的模板。这些新数据源获取文件,如果不成功则重试,找到文件后运行专用处理程序。

-This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources.
+这类似于[现有数据源模板](/developing/creating-a-subgraph/#data-source-templates),用于动态创建新的基于链的数据源。

-> This replaces the existing `ipfs.cat` API
+> 这将替换现有的`ipfs.cat` API

### 升级指南

-#### Update `graph-ts` and `graph-cli`
+#### 更新`graph-ts`和`graph-cli`

-File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1
+文件数据源需要graph-ts>=0.29.0和graph-cli>=0.33.1

#### 添加新的实体类型,当找到文件时将更新该类型

@@ -210,9 +210,9 @@ type TokenMetadata @entity {

如果母实体与生成的文件数据源实体之间的关系为1:1,则最简单的模式是通过使用IPFS CID作为查找将母实体链接到生成的文件实体。如果您在建模新的基于文件的实体时遇到困难,请联系Discord!
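作为该1:1链接模式的示意,下面是一个假设性的模式草图(`Token` 与 `TokenMetadata` 的字段仅为示例假设,并非完整模式),展示如何以IPFS CID作为文件实体的ID,并从母实体引用它:

```graphql
# 示意模式:字段仅为示例假设
type Token @entity {
  id: ID!
  tokenURI: String!
  # 以IPFS CID作为查找键,指向由文件数据源创建的实体
  ipfsURI: TokenMetadata
}

type TokenMetadata @entity {
  id: ID! # 即文件的IPFS CID
  name: String!
  description: String!
}
```

这样,查询母实体时即可通过该字段嵌套取回对应的文件元数据。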
-> You can use [nested filters](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities.
+> 您可以使用[嵌套筛选器](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering)根据这些嵌套实体过滤父实体。

-#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave`
+#### 添加一个新的模板数据源,使用`kind: file/ipfs`或`kind: file/arweave`

这是在识别出感兴趣的文件时生成的数据源。

```yaml
templates:
  - name: TokenMetadata
    kind: file/ipfs
    mapping:
-      apiVersion: 0.0.7
+      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      handler: handleMetadata
@@ -232,15 +232,15 @@ templates:
        file: ./abis/Token.json
```

-> Currently `abis` are required, though it is not possible to call contracts from within file data sources
+> 目前需要`abis`,但无法从文件数据源中调用合约。

-The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details.
+文件数据源必须在`entities`下明确列出它将与之交互的所有实体类型。有关详细信息,请参阅[限制](#limitations)。

#### 创建新处理程序以处理文件

-This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/subgraphs/developing/creating/graph-ts/api/#json-api)).
+此处理程序应接受一个`Bytes`参数,当找到文件时,该参数将是文件的内容,然后可以对其进行处理。这通常是一个JSON文件,可以用`graph-ts`助手([文档](/subgraphs/developing/creating/graph-ts/api/#json-api))处理。

-The CID of the file as a readable string can be accessed via the `dataSource` as follows:
+文件的CID作为可读字符串可通过`dataSource`访问,如下所示:

```typescript
const cid = dataSource.stringParam()
@@ -277,12 +277,12 @@ export function handleMetadata(content: Bytes): void {

现在,您可以在执行基于链的处理程序期间创建文件数据源:

-- Import the template from the auto-generated `templates`
-- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave
+- 从自动生成的`templates`中导入模板
+- 在映射中调用`TemplateName.create(cid: string)`,其中cid是IPFS或Arweave的有效内容标识符

-For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifiers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`).
+对于IPFS,Graph Node支持[v0 和 v1内容标识符](https://docs.ipfs.tech/concepts/content-addressing/),以及带有目录的内容标识符(例如`bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`)。

-For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing).
+对于Arweave,从0.33.0版本开始,Graph Node可以根据其[交易ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions)从Arweave网关获取存储在Arweave上的文件([示例文件](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA))。
Arweave支持通过Irys(以前是Bundlr)上传的交易,Graph Node还可以根据[Irys清单](https://docs.irys.xyz/overview/gateways#indexing)获取文件。

例子:

@@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
-//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
+//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.

export function handleTransfer(event: TransferEvent): void {
  let token = Token.load(event.params.tokenId.toString())
@@ -315,19 +315,19 @@ export function handleTransfer(event: TransferEvent): void {

这将创建一个新的文件数据源,该数据源将轮询Graph Node配置的IPFS或Arweave端点,如果未找到文件,则进行重试。当找到文件时,文件数据源处理程序将被执行。

-This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity.
+此示例使用 CID 作为母 `Token` 实体和生成的 `TokenMetadata` 实体之间的查找。

-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file
+> 以前,子图开发人员会在此时调用`ipfs.cat(CID)`来获取文件。

祝贺您,您正在使用文件数据源!

-#### 将你的子图部署
+#### 部署子图

-You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.
+现在,您可以将子图`构建`并`部署`到任何Graph Node>=v0.30.0-rc.0。

#### 限制

-文件数据源处理程序和实体与其他子图实体隔离,确保它们在执行时是确定的,并确保基于链的数据源不受污染。具体来说:
+文件数据源处理程序和实体与其他子图实体隔离,确保它们在执行时是确定的,并确保基于链的数据源不受污染。具体来说:

- 文件数据源创建的实体是不可变的,不能更新
- 文件数据源处理程序无法访问其他文件数据源中的实体
@@ -341,41 +341,41 @@ You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.

如果要将 NFT 元数据链接到相应的代币,请使用元数据的 IPFS hash从代币实体引用元数据实体。使用 IPFS hash作为 ID 保存元数据实体。

-You can use [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler.
+在创建文件数据源时,您可以使用[DataSource上下文](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext)传递额外的信息,这些信息将可供文件数据源处理程序使用。

-If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity.
+如果您有多次刷新的实体,请使用IPFS哈希和实体ID创建唯一的基于文件的实体,并在基于链的实体中使用派生字段引用它们。

> 我们正在努力改进上述建议,以便查询只返回“最新”版本。

#### 已知问题

-File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI.
+文件数据源目前需要ABI,即使不使用ABI([问题](https://github.com/graphprotocol/graph-cli/issues/961))。解决方法是添加任意ABI。

-Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file.
+文件数据源的处理程序不能位于导入`eth_call`合约绑定的文件中,否则会因“unknown import: `ethereum::ethereum.call` has not been defined”而失败([问题](https://github.com/graphprotocol/graph-node/issues/4309))。解决方法是在专用文件中创建文件数据源处理程序。

#### 例子

-[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor)
+[Crypto Coven子图迁移](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor)

-#### References
+#### 参考

-[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721)
+[GIP文件数据源](https://forum.thegraph.com/t/gip-file-data-sources/2721)

-## Indexed Argument Filters / Topic Filters
+## 索引参数过滤器 / 主题过滤器

-> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`
+> **要求**: [规范版本](#specversion-releases) >= `1.2.0`

-Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
主题过滤器,也称为索引参数过滤器,是子图中的一个强大功能,允许用户根据其索引参数的值精确过滤区块链事件。

-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.
+- 这些过滤器有助于将感兴趣的特定事件与区块链上的大量事件流隔离开来,通过只关注相关数据,使子图能够更有效地运行。

-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+- 这对于创建跟踪特定地址及其与区块链上各种智能合约交互的个人子图非常有用。

-### How Topic Filters Work
+### 主题过滤器的工作原理

-When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.
+当智能合约发出事件时,任何标记为索引的参数都可以用作子图清单中的过滤器。这允许子图有选择地监听与这些索引参数匹配的事件。

-- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.
+- 事件的第一个索引参数对应`topic1`,第二个对应`topic2`,以此类推,直到`topic3`,因为以太坊虚拟机(EVM)允许每个事件最多三个索引参数。

```solidity
// SPDX-License-Identifier: MIT
@@ -393,15 +393,15 @@ contract Token {
}
```

-In this example:
+在这个例子中:

-- The `Transfer` event is used to log transactions of tokens between addresses.
-- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses.
-- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called.
+- `Transfer`事件用于记录地址之间的代币转移。
+- `from`和`to`参数被索引,允许事件侦听器过滤和监视涉及特定地址的转移。
+- `transfer`函数是代币转移操作的简单表示,每当被调用时都会发出Transfer事件。

-#### Configuration in Subgraphs
+#### 子图中的配置

-Topic filters are defined directly within the event handler configuration in the subgraph manifest.
Here is how they are configured:
+主题过滤器直接在子图清单的事件处理程序配置中定义。以下是它们的配置方式:

```yaml
eventHandlers:
@@ -412,17 +412,17 @@ eventHandlers:
      topic3: ['0xValue3']
```

-In this setup:
+在此设置中:

-- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third.
-- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic.
+- `topic1`对应于事件的第一个索引参数,`topic2`对应于第二个,`topic3`对应于第三个。
+- 每个主题可以有一个或多个值,只有当事件与每个指定主题中的一个值匹配时,才会处理该事件。

-#### Filter Logic
+#### 过滤器逻辑

-- Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic.
-- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler.
+- 在单个主题内:逻辑作为OR条件发挥作用。如果事件与给定主题中列出的任何一个值匹配,则将对其进行处理。
+- 在不同主题之间:逻辑作为AND条件发挥作用。事件必须满足不同主题的所有指定条件,才能触发关联的处理程序。

-#### Example 1: Tracking Direct Transfers from Address A to Address B
+#### 示例1:跟踪从地址A到地址B的直接转移

```yaml
eventHandlers:
@@ -432,13 +432,13 @@ eventHandlers:
      topic2: ['0xAddressB'] # Receiver Address
```

-In this configuration:
+在此配置中:

-- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
-- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+- `topic1`被配置为过滤`0xAddressA`为发送方的`Transfer`事件。 +- `topic2`被配置为过滤`0xAddressB`为接收方的`Transfer`事件。 +- 子图将仅索引直接从`0xAddressA` 到 `0xAddressB`发生的交易。 -#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses +#### 示例2:跟踪两个或多个地址之间的双向交易 ```yaml eventHandlers: @@ -448,65 +448,65 @@ eventHandlers: topic2: ['0xAddressB', '0xAddressC'] # Receiver Address ``` -In this configuration: +在此配置中: -- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. -- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- `topic1`被配置为过滤`0xAddressA`、`0xAddressB`、`0xAddressC`为发送方的`Transfer`事件。 +- `topic2`被配置为过滤`0xAddressB`和`0xAddressC`为接收方的`Transfer`事件。 +- 子图将对多个地址之间双向发生的交易进行索引,从而全面监控涉及所有地址的交互。 -## Declared eth_call +## 已声明eth_call -> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. +> 注意:这是一个实验性功能,目前尚未在稳定的Graph Node版本中提供。您只能在Subgraph Studio或自托管节点中使用它。 -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +声明性`eth_calls`是一个有价值的子图特性,它允许`eth_calls`提前执行,使`graph-node`能够并行执行它们。 -This feature does the following: +此功能执行以下操作: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. -- Allows faster data fetching, resulting in quicker query responses and a better user experience. -- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
+- 通过减少多次调用的总时间和优化子图的整体效率,显著提高了从以太坊区块链获取数据的性能。 +- 允许更快的数据获取,从而获得更快的查询响应和更好的用户体验。 +- 减少需要聚合来自多个以太坊调用的数据的应用程序的等待时间,使数据检索过程更加高效。 -### Key Concepts +### 关键概念 -- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. -- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously. -- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel). +- 声明性`eth_calls`:定义为并行执行而非顺序执行的以太坊调用。 +- 并行执行:可以同时发起多个调用,而不是在开始下一个调用之前等待一个调用完成。 +- 时间效率:所有调用所花费的总时间从单个调用时间的总和(顺序)变为最长调用所花费的时间(并行)。 -#### Scenario without Declarative `eth_calls` +#### 没有声明性`eth_calls`的场景 -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +想象一下,你有一个子图,需要进行三次以太坊调用来获取有关用户交易、余额和代币持有量的数据。 -Traditionally, these calls might be made sequentially: +传统上,这些调用可能会按顺序进行: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. 调用 1 (交易): 需要3 秒 +2. 调用 2 (余额): 需要2 秒 +3. 调用 3 (代币持有): 需要4 秒 -Total time taken = 3 + 2 + 4 = 9 seconds +总耗时 = 3 + 2 + 4 = 9 秒 -#### Scenario with Declarative `eth_calls` +#### 带有声明性`eth_calls`的场景 -With this feature, you can declare these calls to be executed in parallel: +使用此功能,您可以声明这些调用并行执行: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. 调用 1 (交易): 需要3 秒 +2. 调用 2 (余额): 需要2 秒 +3. 调用 3 (代币持有): 需要4 秒 -Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. +由于这些调用是并行执行的,因此所花费的总时间等于最长调用所花费的时间。 -Total time taken = max (3, 2, 4) = 4 seconds +总耗时 = max (3, 2, 4) = 4 秒 -#### How it Works +#### 它是如何工作的 -1.
Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. 声明性定义:在子图清单中,你以一种表明以太坊调用可以并行执行的方式声明它们。 +2. 并行执行引擎:The Graph节点的执行引擎识别这些声明并同时运行调用。 +3. 结果聚合:一旦所有调用完成,结果将被聚合并由子图用于进一步处理。 -#### Example Configuration in Subgraph Manifest +#### 子图清单中的示例配置 -Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. +声明的`eth_calls`可以访问底层事件的`event.address`以及所有`event.params`。 -`Subgraph.yaml` using `event.address`: +使用`event.address`的`Subgraph.yaml`: ```yaml eventHandlers: @@ -517,14 +517,14 @@ calls: global1X128: Pool[event.address].feeGrowthGlobal1X128() ``` -Details for the example above: +上述示例的详细信息: -- `global0X128` is the declared `eth_call`. -- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. -- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` -- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. +- `global0X128` 是声明的 `eth_call`。 +- 文本(`global0X128`)是此`eth_call`的标签,用于记录错误。 +- 文本(`Pool[event.address].feeGrowthGlobal0X128()`)是将要执行的实际`eth_call`,其形式为`Contract[address].function(arguments)`。 +- `address`和`arguments`可以用执行处理程序时可用的变量替换。 -`Subgraph.yaml` using `event.params` +使用 `event.params`的`Subgraph.yaml`: ```yaml calls: @@ -533,31 +533,31 @@ calls: ### 嫁接到现有子图 -> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).
+> **注意:**不建议在最初升级到The Graph网络时使用嫁接。了解更多信息[此处](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network)。 -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +首次部署子图时,它会在相应链的启动区块(或每个数据源定义的 `startBlock` 处)开始索引事件。在某些情况下,可以使用现有子图已经索引的数据并在更晚的区块上开始索引。 这种索引模式称为*Grafting*。 例如,嫁接在开发过程中非常有用,可以快速克服映射中的简单错误,或者在现有子图失败后暂时恢复工作。 -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +当 `subgraph.yaml` 中的子图清单在顶层包含 `graft` 区块时,子图被嫁接到基础子图: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. 
+当部署其清单包含 `graft` 区块的子图时,Graph 节点将复制 `base` 子图的数据,直到并包括给定的 `block`,然后继续从该区块开始索引新子图。 基础子图必须存在于目标Graph Node实例上,并且必须至少索引到给定区块。 由于这个限制,嫁接只能在开发期间或紧急情况下使用,以加快生成等效的非嫁接子图。 因为嫁接是拷贝而不是索引基础数据,所以子图同步到所需区块比从头开始索引要快得多,尽管对于非常大的子图,初始数据拷贝仍可能需要几个小时。 在初始化嫁接子图时,Graph 节点将记录有关已复制的实体类型的信息。 -嫁接子图可以使用一个 GraphQL 模式 schema,该模式与基子图之一不同,但仅与基子图兼容。它本身必须是一个有效的子图模式,但是可以通过以下方式偏离基子图的模式: +嫁接子图可以使用一个与基础子图不同的GraphQL 模式,但仅与之兼容。它本身必须是一个有效的子图模式,但是可以通过以下方式偏离基础子图的模式: -- 它添加或删除实体类型 -- 它从实体类型中删除属性 -- 它将可为空的属性添加到实体类型 -- 它将不可为空的属性转换为可空的属性 -- 它将值添加到枚举类型中 -- 它添加或删除接口 -- 它改变了实现接口的实体类型 +- 添加或删除实体类型 +- 从实体类型中删除属性 +- 将可为空的属性添加到实体类型 +- 将不可为空的属性转换为可空的属性 +- 将值添加到枚举类型中 +- 添加或删除接口 +- 改变实现接口的实体类型 -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[特征管理](#experimental-features):** `grafting`必须在子图清单中的`features`下声明。 diff --git a/website/src/pages/zh/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/zh/subgraphs/developing/creating/assemblyscript-mappings.mdx index 88028e162e55..7f1803a058db 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/zh/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -4,13 +4,13 @@ title: Writing AssemblyScript Mappings ## 概述 -The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax.
+映射将从特定来源获取的数据转换为您的模式文件中定义的实体。 映射是用 [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) 的子集编写的,称为 [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki)。 AssemblyScript 可以编译成 WASM ([WebAssembly](https://webassembly.org/))。 AssemblyScript 比普通的 TypeScript 更严格,但提供了开发者熟悉的语法。 ## 编写映射 -For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. +对于 `subgraph.yaml` 中 `mapping.eventHandlers` 下定义的每个事件处理程序,需要创建一个同名的导出函数。 每个处理程序必须接受一个名为 `event` 的参数,其类型对应于正在处理的事件的名称。 -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +在示例子图中,`src/mapping.ts` 包含 `NewGravatar` 和 `UpdatedGravatar` 事件的处理程序: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -37,50 +37,50 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. +第一个处理程序接受 `NewGravatar` 事件,并使用 `new Gravatar(event.params.id.toHex())` 创建一个新的 `Gravatar` 实体,使用相应的事件参数填充实体字段。 该实体实例由变量 `gravatar` 表示,id 值为 `event.params.id.toHex()`。 -The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`.
+第二个处理程序尝试从 Graph 节点存储加载现有的 Gravatar。 如果尚不存在,则会按需创建。 然后更新实体以匹配新的事件参数,并使用 gravatar.save() 将其保存。 ### 用于创建新实体的推荐 ID -It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities. +强烈建议使用`Bytes`作为`id`字段的类型,并且只在确实包含人类可读文本的属性上使用`String`,例如代币的名称。以下是在创建新实体时需要考虑的一些推荐的`id`值。 - `transfer.id = event.transaction.hash` - `let id = event.transaction.hash.concatI32(event.logIndex.toI32())` -- For entities that store aggregated data, for e.g, daily trade volumes, the `id` usually contains the day number. Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like +- 对于存储聚合数据的实体,例如每日交易量,`id`通常包含天数。在这里,使用`Bytes`作为`id`是有益的。确定`id`的方式可能看起来像这样: ```typescript let dayID = event.block.timestamp.toI32() / 86400 let id = Bytes.fromI32(dayID) ``` -- Convert constant addresses to `Bytes`. +- 将固定地址转换为`Bytes`。 `const id = Bytes.fromHexString('0xdead...beef')` -There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`. +存在一个名为[Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts)的库,其中包含用于与Graph Node存储交互的工具以及处理智能合约数据和实体的便利功能。它可以从`@graphprotocol/graph-ts`导入到`mapping.ts`中。 -### Handling of entities with identical IDs +### 处理具有相同ID的entity的策略 -When creating and saving a new entity, if an entity with the same ID already exists, the properties of the new entity are always preferred during the merge process. This means that the existing entity will be updated with the values from the new entity. 
+在创建并保存新实体时,如果已存在具有相同ID的实体,则在合并过程中始终优先考虑新实体的属性。这意味着现有的实体将会用新实体的值进行更新。这种方式确保了数据的最新性和一致性,使得每次操作都反映最近的变更和信息。 -If a null value is intentionally set for a field in the new entity with the same ID, the existing entity will be updated with the null value. +如果在具有相同ID的新实体中某个字段故意设置为null值,现有的实体将会被更新为该null值。这种做法允许数据显式地表示无值或已删除的状态,从而在数据处理和存储中提供更大的灵活性和控制。 -If no value is set for a field in the new entity with the same ID, the field will result in null as well. +如果在具有相同ID的新实体中某个字段没有设置任何值,该字段也将被设为null。这样的处理确保了数据的完整性,防止了意外的旧数据保留,确保了在数据更新时的透明性和一致性。这种做法适用于那些需要明确字段值缺失或未定义情况的应用场景。 ## 代码生成 为了使与智能合约、事件和实体的代码编写工作变得简单且类型安全,Graph CLI 可以从子图的 GraphQL 模式和数据源中包含的合约 ABI 生成 AssemblyScript 类型。 -这可以通过以下命令实现 +这可以通过以下命令实现: ```sh graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +但在大多数情况下,子图已经通过 `package.json` 进行了预配置,以允许您简单地运行以下命令之一来实现相同的目的: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+这将为 `subgraph.yaml` 中提到的 ABI 文件中的每个智能合约生成一个 AssemblyScript 类,允许您将这些合约绑定到映射中的特定地址,并针对正在处理的区块调用只读合约方法。它还将为每个合约事件生成一个类,以便于访问事件参数以及事件源自的区块和交易。所有这些类型都写入到`//.ts`。在示例子图中,这将成为`generated/Gravity/Gravity.ts`,允许映射导入这些类型。 ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +除此之外,还会为子图的 GraphQL 模式中的每个实体类型生成一个类。 这些类提供类型安全的实体加载、对实体字段的读写访问以及一个 `save()` 方法来写入要存储的实体。 所有实体类都写入 `/schema.ts`,允许映射导入它们: ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **注意:** 每次更改 GraphQL 模式文件或清单中包含的 ABI 后,都必须再次执行代码生成。 在构建或部署子图之前,它还必须至少执行一次。 -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
+代码生成不会检查 `src/mapping.ts` 中的映射代码。 如果您想在尝试将子图部署到 Graph Explorer 之前进行检查,您可以运行 `yarn build`,并修复 TypeScript 编译器可能发现的任何语法错误。 diff --git a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..20c96b935045 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,101 +1,107 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### 微小变化 + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) 感谢[@isum](https://github.com/isum)! - 功能:为映射添加 yaml 解析支持 + ## 0.37.0 -### Minor Changes +### 微小变化 - [#1843](https://github.com/graphprotocol/graph-tooling/pull/1843) [`c09b56b`](https://github.com/graphprotocol/graph-tooling/commit/c09b56b093f23c80aa5d217b2fd56fccac061145) - Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - Update all dependencies + Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - 更新所有依赖项 ## 0.36.0 -### Minor Changes +### 微小变化 - [#1754](https://github.com/graphprotocol/graph-tooling/pull/1754) [`2050bf6`](https://github.com/graphprotocol/graph-tooling/commit/2050bf6259c19bd86a7446410c7e124dfaddf4cd) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for subgraph datasource and - associated types. + Thanks [@incrypto32](https://github.com/incrypto32)! - 添加对子图数据源和 + 相关类型的支持。 ## 0.35.1 -### Patch Changes +### 补丁更改 - [#1637](https://github.com/graphprotocol/graph-tooling/pull/1637) [`f0c583f`](https://github.com/graphprotocol/graph-tooling/commit/f0c583f00c90e917d87b707b5b7a892ad0da916f) - Thanks [@incrypto32](https://github.com/incrypto32)! - Update return type for ethereum.hasCode + Thanks [@incrypto32](https://github.com/incrypto32)!
- 更新ethereum.hasCode的返回类型 ## 0.35.0 -### Minor Changes +### 微小变化 - [#1609](https://github.com/graphprotocol/graph-tooling/pull/1609) [`e299f6c`](https://github.com/graphprotocol/graph-tooling/commit/e299f6ce5cf1ad74cab993f6df3feb7ca9993254) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for eth.hasCode method + Thanks [@incrypto32](https://github.com/incrypto32)! - 添加对eth.hasCode方法的支持 ## 0.34.0 -### Minor Changes +### 微小变化 - [#1522](https://github.com/graphprotocol/graph-tooling/pull/1522) - [`d132f9c`](https://github.com/graphprotocol/graph-tooling/commit/d132f9c9f6ea5283e40a8d913f3abefe5a8ad5f8) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL - `Timestamp` scalar as `i64` (AssemblyScript) + [`d132f9c`](https://github.com/graphprotocol/graph-tooling/commit/d132f9c9f6ea5283e40a8d913f3abefe5a8ad5f8) + 感谢[@dotansimha](https://github.com/dotansimha)! - 添加了将 GraphQL `Timestamp` + 标量处理为 `i64`(AssemblyScript)的支持 ## 0.33.0 -### Minor Changes +### 微小变化 - [#1584](https://github.com/graphprotocol/graph-tooling/pull/1584) [`0075f06`](https://github.com/graphprotocol/graph-tooling/commit/0075f06ddaa6d37606e42e1c12d11d19674d00ad) - Thanks [@incrypto32](https://github.com/incrypto32)! - Added getBalance call to ethereum API + Thanks [@incrypto32](https://github.com/incrypto32)! - 添加了对ethereum API的getBalance调用 ## 0.32.0 -### Minor Changes +### 微小变化 - [#1523](https://github.com/graphprotocol/graph-tooling/pull/1523) [`167696e`](https://github.com/graphprotocol/graph-tooling/commit/167696eb611db0da27a6cf92a7390e72c74672ca) - Thanks [@xJonathanLEI](https://github.com/xJonathanLEI)! - add starknet data types + Thanks [@xJonathanLEI](https://github.com/xJonathanLEI)!
- 添加starknet数据类型 ## 0.31.0 -### Minor Changes +### 微小变化 - [#1340](https://github.com/graphprotocol/graph-tooling/pull/1340) [`2375877`](https://github.com/graphprotocol/graph-tooling/commit/23758774b33b5b7c6934f57a3e137870205ca6f0) - Thanks [@incrypto32](https://github.com/incrypto32)! - export `loadRelated` host function + Thanks [@incrypto32](https://github.com/incrypto32)! - 导出 `loadRelated` 主机函数 - [#1296](https://github.com/graphprotocol/graph-tooling/pull/1296) [`dab4ca1`](https://github.com/graphprotocol/graph-tooling/commit/dab4ca1f5df7dcd0928bbaa20304f41d23b20ced) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL `Int8` - scalar as `i64` (AssemblyScript) + Thanks [@dotansimha](https://github.com/dotansimha)! - 添加了将 GraphQL `Int8` + 标量处理为 `i64`(AssemblyScript)的支持 ## 0.30.0 -### Minor Changes +### 微小变化 - [#1299](https://github.com/graphprotocol/graph-tooling/pull/1299) [`3f8b514`](https://github.com/graphprotocol/graph-tooling/commit/3f8b51440db281e69879be7d91d79cd43e45fe86) - Thanks [@saihaj](https://github.com/saihaj)! - introduce new Etherum utility to get a CREATE2 - Address + Thanks [@saihaj](https://github.com/saihaj)! - 引入新的 Ethereum 实用程序以获取 CREATE2 + 地址 - [#1306](https://github.com/graphprotocol/graph-tooling/pull/1306) [`f5e4b58`](https://github.com/graphprotocol/graph-tooling/commit/f5e4b58989edc5f3bb8211f1b912449e77832de8) - Thanks [@saihaj](https://github.com/saihaj)! - expose Host's `get_in_block` function + Thanks [@saihaj](https://github.com/saihaj)! - 暴露Host的`get_in_block`函数 ## 0.29.3 -### Patch Changes +### 补丁更改 - [#1057](https://github.com/graphprotocol/graph-tooling/pull/1057) [`b7a2ec3`](https://github.com/graphprotocol/graph-tooling/commit/b7a2ec3e9e2206142236f892e2314118d410ac93) - Thanks [@saihaj](https://github.com/saihaj)! - fix publihsed contents + Thanks [@saihaj](https://github.com/saihaj)!
- 修复已发布的内容 ## 0.29.2 -### Patch Changes +### 补丁更改 - [#1044](https://github.com/graphprotocol/graph-tooling/pull/1044) [`8367f90`](https://github.com/graphprotocol/graph-tooling/commit/8367f90167172181870c1a7fe5b3e84d2c5aeb2c) - Thanks [@saihaj](https://github.com/saihaj)! - publish readme with packages + Thanks [@saihaj](https://github.com/saihaj)! - 使用软件包发布自述文件 diff --git a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..ff1015faff29 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/README.md @@ -1,30 +1,30 @@ -# The Graph TypeScript Library (graph-ts) +# The Graph TypeScript 库 (graph-ts) [![npm (scoped)](https://img.shields.io/npm/v/@graphprotocol/graph-ts.svg)](https://www.npmjs.com/package/@graphprotocol/graph-ts) [![Build Status](https://travis-ci.org/graphprotocol/graph-ts.svg?branch=master)](https://travis-ci.org/graphprotocol/graph-ts) -TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to -[The Graph](https://github.com/graphprotocol/graph-node). +用于编写部署到 +[The Graph](https://github.com/graphprotocol/graph-node) 的子图映射的 TypeScript/AssemblyScript 库。 -## Usage +## 使用方法 -For a detailed guide on how to create a subgraph, please see the -[Graph CLI docs](https://github.com/graphprotocol/graph-cli). +关于如何创建子图的详细指南,请参阅 +[Graph CLI 文档](https://github.com/graphprotocol/graph-cli)。 -One step of creating the subgraph is writing mappings that will process blockchain events and will -write entities into the store. These mappings are written in TypeScript/AssemblyScript. +创建子图的其中一步是编写处理区块链事件并将实体 +写入存储的映射。 这些映射都以 TypeScript/AssemblyScript 编写。 -The `graph-ts` library provides APIs to access the Graph Node store, blockchain data, smart -contracts, data on IPFS, cryptographic functions and more.
To use it, all you have to do is add a -dependency on it: +`graph-ts` 库提供 API,用于访问 Graph Node 存储、区块链数据、智能 +合约、IPFS 上的数据、加密功能等。 若要使用它,您只需添加一个 +对它的依赖: ```sh npm install --dev @graphprotocol/graph-ts # NPM -yarn add --dev @graphprotocol/graph-ts # Yarn +yarn add --dev @graphprotocol/graph-ts # Yarn ``` -After that, you can import the `store` API and other features from this library in your mappings. A -few examples: +然后,您可以在您的映射中导入这个库的 `store` API 和其他功能。 +几个例子: ```typescript import { crypto, store } from '@graphprotocol/graph-ts' @@ -50,19 +50,19 @@ function handleNameRegistered(event: NameRegistered) { } ``` -## Helper Functions for AssemblyScript +## AssemblyScript 的辅助函数 -Refer to the `helper-functions.ts` file in -[this](https://github.com/graphprotocol/graph-tooling/blob/main/packages/ts/helper-functions.ts) -repository for a few common functions that help build on top of the AssemblyScript library, such as -byte array concatenation, among others. +请参考 +[这个](https://github.com/graphprotocol/graph-tooling/blob/main/packages/ts/helper-functions.ts) +版本库中的 `helper-functions.ts` 文件,其中包含一些有助于在 AssemblyScript 库之上构建的常用函数,例如 +字节数组拼接等。 ## API -Documentation on the API can be found -[here](https://thegraph.com/docs/en/developer/assemblyscript-api/). +API 文档可以在[这里](https://thegraph.com/docs/en/developer/assemblyscript-api/) +找到。 -For examples of `graph-ts` in use take a look at one of the following subgraphs: +有关 `graph-ts` 的使用示例,请参阅以下子图之一: - https://github.com/graphprotocol/ens-subgraph - https://github.com/graphprotocol/decentraland-subgraph @@ -71,15 +71,15 @@ For examples of `graph-ts` in use take a look at one of the following subgraphs: - https://github.com/graphprotocol/aragon-subgraph - https://github.com/graphprotocol/dharma-subgraph -## License +## 许可协议 -Copyright © 2018 Graph Protocol, Inc. and contributors.
+版权所有 © 2018 Graph Protocol, Inc. 及贡献者。 -The Graph TypeScript library is dual-licensed under the -[MIT license](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) and the -[Apache License, Version 2.0](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-APACHE). +The Graph TypeScript 库采用 +[MIT license](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) 和 +[Apache License, 版本 2.0](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-APACHE) 双重许可。 -Unless required by applicable law or agreed to in writing, software distributed under the License is -distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -implied. See the License for the specific language governing permissions and limitations under the -License. +除非适用法律要求或书面同意,否则根据许可证分发的软件 +按“原样”分发,不附带任何明示或暗示的 +保证或条件。有关许可证下权限和限制的具体规定,请参阅 +许可证。 diff --git a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/_meta-titles.json index f60245847922..4d8577f06725 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/_meta-titles.json +++ b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -1,5 +1,5 @@ { "README": "介绍", "api": "API 参考", - "common-issues": "Common Issues" + "common-issues": "常见问题" } diff --git a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/api.mdx index 2a35d4ba56d4..6efd4699dd7b 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/api.mdx @@ -1,17 +1,17 @@ --- -title: AssemblyScript API +title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript.
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> 注意:如果您在 `graph-cli`/`graph-ts` 版本 `0.22.0` 之前创建了子图,那么您正在使用较旧版本的 AssemblyScript,我们建议查看[`迁移指南`](/resources/migration-guides/assemblyscript-migration-guide/)。 -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +此页面记录了编写子图映射时可以使用的内置 API。有两种开箱即用的 API: -- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- [Graph TypeScript库](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- 通过 `graph codegen` 生成的子图文件中的代码 -You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). +您还可以添加其他库作为依赖项,只要它们与[AssemblyScript](https://github.com/AssemblyScript/assemblyscript)兼容。 -Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). +由于语言映射是用AssemblyScript编写的,因此从[AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki)查看语言和标准库功能非常有用。 ## API 参考 @@ -19,7 +19,7 @@ Since language mappings are written in AssemblyScript, it is useful to review th - 用于处理以太坊智能合约、事件、区块、交易和以太坊价值的以太坊 API。 - 用于与图形节点交互,存储和加载实体的 存储 API。 -- A `log` API to log messages to the Graph Node output and Graph Explorer. 
+- 用于将消息记录Graph Node输出和Graph Explorer的`log` API。 - 用于从 IPFS 加载文件的ipfs API。 - 用于解析 JSON 数据的json API。 - 使用加密功能的crypto API。 @@ -31,18 +31,18 @@ Since language mappings are written in AssemblyScript, it is useful to review th | 版本 | Release 说明 | | :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.9 | 添加新的主机函数[`eth_get_balance`](#balance-of-an-address) 和 [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | 在保存实体时添加对模式中是否存在字段的验证。 | | 0.0.7 | 添加了 `TransactionReceipt` 和 `Log` 类到以太坊类型。
已将 `receipt` 字段添加到Ethereum Event对象。 | | 0.0.6 | 向Ethereum Transaction对象添加了 nonce 字段 向 Etherum Block对象添加
baseFeePerGas字段 | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.5 | AssemblyScript 升级到版本 0.19.10(这包括重大更改,参阅[`迁移指南`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` 重命名为 `ethereum.transaction.gasLimit` |
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.3 | 已向Ethereum Call 对象添加了 `from` 字段。
`ethereum.call.address` 被重命名为 `ethereum.call.to`。 | | 0.0.2 | 已向Ethereum Transaction对象添加了 `input` 字段。 | ### 内置类型 -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://www.assemblyscript.org/types.html). +关于内置于 AssemblyScript 中的基本类型的文档可以在[AssemblyScript wiki](https://www.assemblyscript.org/types.html)中找到。 以下额外的类型由 `@graphprotocol/graph-ts` 提供。 @@ -81,7 +81,7 @@ Documentation on the base types built into AssemblyScript can be found in the [A BigDecimal 用于表示任意精度的小数。 -> Note: [Internally](https://github.com/graphprotocol/graph-node/blob/master/graph/src/data/store/scalar/bigdecimal.rs) `BigDecimal` is stored in [IEEE-754 decimal128 floating-point format](https://en.wikipedia.org/wiki/Decimal128_floating-point_format), which supports 34 decimal digits of significand. This makes `BigDecimal` unsuitable for representing fixed-point types that can span wider than 34 digits, such as a Solidity [`ufixed256x18`](https://docs.soliditylang.org/en/latest/types.html#fixed-point-numbers) or equivalent. +> 注意:[内部](https://github.com/graphprotocol/graph-node/blob/master/graph/src/data/store/scalar/bigdecimal.rs)`BigDecimal`以[IEEE-754 decimal128浮点格式](https://en.wikipedia.org/wiki/Decimal128_floating-point_format)存储,支持34位十进制有效位。这使得`BigDecimal`不适合表示跨度超过34位的定点类型,例如Solidity [`ufixed256x18`](https://docs.soliditylang.org/en/latest/types.html#fixed-point-numbers)或等效类型。 结构 @@ -223,7 +223,7 @@ TypedMap 类具有以下 API: `store` API 允许从和到 Graph Node 存储加载、保存和删除实体。 -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. 
+写入存储的实体与子图的 GraphQL 模式中定义的 `@entity` 类型一一对应。为了方便处理这些实体,由 [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) 提供的 `graph codegen` 命令会生成实体类,这些类是内置 `Entity` 类的子类,具有模式中字段的属性 getter 和 setter,以及加载和保存这些实体的方法。 #### 创建实体 @@ -254,9 +254,9 @@ export function handleTransfer(event: TransferEvent): void { 如果在处理链时遇到 Transfer 事件,它会使用生成的 Transfer 类型(别名为 TransferEvent 以避免与实体类型的命名冲突) 传递给 handleTransfer 事件处理器。 此类型允许访问事件的父交易及其参数等数据。 -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. +每个实体都必须有一个唯一的ID,以避免与其他实体发生冲突。事件参数中包含可使用的唯一标识符是相当常见的。 -> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +> 注意:使用交易hash作为ID的前提是,同一交易中没有其他事件会创建以此hash为ID的实体。 #### 从存储中加载实体 @@ -272,18 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. +由于实体可能尚未存在于存储中,因此 `load` 方法返回一个类型为 `Transfer | null` 的值。因此,在使用该值之前可能需要检查 `null` 情况。 -> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> 注意: 仅当映射中所做的更改依赖于实体的先前数据时,才需要加载实体。 有关更新现有实体的两种方法,请参阅下一节。 #### 查找在区块中创建的实体 截至 `graph-node` v0.31.0、`@graphprotocol/graph-ts` v0.30.0 和 `@graphprotocol/graph-cli` v0.49.0,所有实体类型上都提供了 `loadInBlock` 方法。 -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists.
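为避免上文提到的“同一交易内多个事件共用交易hash作ID”的冲突,常见做法是把交易哈希与日志序号(logIndex)拼接起来。下面用普通 TypeScript 对这种拼接逻辑做一个示意(`deriveId` 是演示用的假设函数;实际映射中通常使用 graph-ts 的 `event.transaction.hash.concatI32(event.logIndex.toI32())` 这类方法):

```typescript
// 示意:用交易哈希加日志序号派生唯一 ID(普通 TypeScript 模拟,
// 并非 graph-ts 的实际 API)
function deriveId(txHash: string, logIndex: number): string {
  // 把 logIndex 编码为固定 8 位十六进制后缀再拼接,保证长度一致、结果唯一
  const suffix = logIndex.toString(16).padStart(8, "0");
  return txHash + suffix;
}

const id1 = deriveId("0xabc123", 0);
const id2 = deriveId("0xabc123", 1);
console.log(id1); // 0xabc12300000000
console.log(id2); // 0xabc12300000001
console.log(id1 !== id2); // true:同一交易中的两个事件得到不同的 ID
```

这样即使一笔交易触发多个同类事件,每个事件仍会落到各自的实体上。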
+存储API便于检索在当前区块中创建或更新的实体。一个典型的情况是,一个处理程序从某个链上事件创建一个交易,稍后的处理程序希望访问这个交易(如果它存在)。 -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- 在交易不存在的情况下,子图将不得不访问数据库,才能发现实体不存在。如果子图作者已经知道实体必定是在同一区块中创建的,那么使用`loadInBlock`可以避免这种数据库往返。 +- 对于某些子图,这些落空的查找可能会大大增加索引时间。 ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -382,7 +382,7 @@ store.remove('Transfer', id) 与实体一样,`graph codegen` 为子图中使用的所有智能合约和事件生成类。为此,合约 ABI 需要作为子图清单中数据源的一部分。通常,ABI 文件存储在一个名为 `abis/` 的文件夹中。 -通过生成的类,以太坊类型和内置类型之间的转换在幕后进行,这样子图作者就不必担心它们。 +通过生成的类,以太坊类型和[内置类型](#built-in-types) 之间的转换在幕后进行,这样子图作者就不必担心它们。 以下示例说明了这一点。 给定一个子图模式,如 @@ -509,9 +509,9 @@ export function handleTransfer(event: TransferEvent) { #### 处理已回退的调用 -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. +如果你的合约的只读方法可能会回退(revert),那么你应该调用以`try_`为前缀的生成合约方法来处理这种情况。 -- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +- 例如,Gravity合约公开了`gravatarToOwner`方法。以下代码能够处理该方法中的回退: ```typescript let gravity = Gravity.bind(event.address) @@ -523,7 +523,7 @@ if (callResult.reverted) { } ``` -> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client.
+> 注意:连接到 Geth 或 Infura 客户端的 Graph 节点可能无法检测到所有回退(revert)。如果您依赖于此,我们建议使用连接到 Parity 客户端的 Graph 节点。 #### 编码/解码 ABI @@ -548,11 +548,11 @@ let decoded = ethereum.decode('(address,uint256)', encoded) - [ABI 规范](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) - 您可以使用[Rust Ethereum ABI 库/CLI](https://github.com/rust-ethereum/ethabi)来进行编码和解码。 -- More [complex example](https://github.com/graphprotocol/graph-node/blob/08da7cb46ddc8c09f448c5ea4b210c9021ea05ad/tests/integration-tests/host-exports/src/mapping.ts#L86). +- 更复杂的示例可以在[这里](https://github.com/graphprotocol/graph-node/blob/08da7cb46ddc8c09f448c5ea4b210c9021ea05ad/tests/integration-tests/host-exports/src/mapping.ts#L86)找到。 -#### Balance of an Address +#### 地址余额 -The native token balance of an address can be retrieved using the `ethereum` module. This feature is available from `apiVersion: 0.0.9` which is defined `subgraph.yaml`. The `getBalance()` retrieves the balance of the specified address as of the end of the block in which the event is triggered. +可以使用`ethereum`模块检索地址的原生代币余额。此功能从 `subgraph.yaml` 中定义的 `apiVersion: 0.0.9` 起可用。`getBalance()`检索指定地址在触发事件的区块结束时的余额。 ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -561,9 +561,9 @@ let address = Address.fromString('0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045') let balance = ethereum.getBalance(address) // returns balance in BigInt ``` -#### Check if an Address is a Contract or EOA +#### 检查地址是合约还是EOA -To check whether an address is a smart contract address or an externally owned address (EOA), use the `hasCode()` function from the `ethereum` module which will return `boolean`. This feature is available from `apiVersion: 0.0.9` which is defined `subgraph.yaml`.
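上文的 `getBalance()` 返回的是以 wei 为单位的 `BigInt`。下面用原生 `bigint` 演示把这样的值换算成可读小数字符串的思路(仅作示意:`weiToEth` 是演示用的假设函数,映射代码中应使用 graph-ts 的 `BigInt`/`BigDecimal` 类型):

```typescript
// 示意:把 wei 值换算为带 18 位小数的 ETH 字符串(普通 TypeScript 模拟)
function weiToEth(wei: bigint, decimals: number = 18): string {
  const base = 10n ** BigInt(decimals);
  const whole = wei / base; // 整数部分
  const frac = (wei % base).toString().padStart(decimals, "0"); // 小数部分,补齐前导零
  return `${whole}.${frac}`;
}

console.log(weiToEth(1500000000000000000n)); // "1.500000000000000000"
```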
+要检查地址是智能合约地址还是外部拥有账户(EOA),请使用`ethereum`模块中的`hasCode()`函数,该函数将返回`boolean`。此功能从 `subgraph.yaml` 中定义的 `apiVersion: 0.0.9` 起可用。 ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -581,7 +581,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false ``` import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +`log` API 允许子图将信息记录到 Graph Node 的标准输出以及 Graph Explorer。可以使用不同的日志级别记录消息。它还提供了一种基本的格式字符串语法,用于根据参数组合日志消息。 `log` API 包括以下函数: @@ -671,7 +671,7 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files onchain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page.
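上文提到 `log` API 提供了基本的格式字符串语法:消息模板中的每个 `{}` 占位符依次被参数数组中的值替换。以下是对这一替换规则的普通 TypeScript 示意(`formatLog` 为演示用的假设函数,并非 graph-ts 的实际实现):

```typescript
// 示意:log API 风格的占位符替换——每个 "{}" 依次取参数数组中的下一个值
function formatLog(template: string, args: string[]): string {
  let i = 0;
  // 多余的占位符保持原样,不抛出错误
  return template.replace(/\{\}/g, () => (i < args.length ? args[i++] : "{}"));
}

console.log(formatLog("Transfer from {} to {}", ["0xabc", "0xdef"]));
// Transfer from 0xabc to 0xdef
```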
+智能合约偶尔会在链上锚定 IPFS 文件。这允许映射从合约获取 IPFS 哈希并从 IPFS 读取相应的文件。文件数据将作为 `Bytes` 返回,通常需要进一步处理,例如使用稍后在本页面中记录的 `json` API。 给定一个 IPFS hash或路径,从 IPFS 读取文件的过程如下: @@ -722,7 +722,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) 成功时,`ipfs.map` 返回 `void`。如果回调的任何调用导致错误,则调用 `ipfs.map` 的处理程序将被中止,并且子图被标记为失败。 -### Crypto API +### 加密API ```typescript import { crypto } from '@graphprotocol/graph-ts' diff --git a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/common-issues.mdx index d8625f05baea..a5f6e79c1bfe 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/zh/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: AssemblyScript的常见问题 --- -在子图开发过程中,常常会遇到某些 AssemblyScript 问题。它们在调试难度范围内,但是,意识到它们可能会有所帮助。以下是这些问题的非详尽清单: +在子图开发过程中,常常会遇到某些 [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) 问题。这些问题的调试难度各不相同,但了解它们可能会有所帮助。以下是这些问题的非详尽清单: -- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. -- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
+- 在 [AssemblyScript](https://www.assemblyscript.org/status.html#language-features)中,并不强制执行 `Private` 类变量。无法保护类变量免受类对象直接更改的影响。 +- 作用域不会继承到[闭包函数](https://www.assemblyscript.org/status.html#on-closures)中,即无法在闭包函数内使用在其外部声明的变量。请参阅[开发者亮点 #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s)中的解释。 diff --git a/website/src/pages/zh/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/zh/subgraphs/developing/creating/install-the-cli.mdx index 521b9a21d1a6..b8fa541b7453 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/zh/subgraphs/developing/creating/install-the-cli.mdx @@ -2,17 +2,17 @@ title: 安装 Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> 为了在The Graph的去中心化网络上使用您的Subgraph,您需要在[Subgraph Studio](https://thegraph.com/studio/apikeys/)中[创建一个API密钥](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key)。建议您为子图添加至少3000 GRT的信号,以吸引2-3个索引人。要了解更多关于信号的信息,请查看[策展](/resources/roles/curating/)。 ## 概述 -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
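针对上面提到的“闭包不继承外部作用域”这一限制,常见的变通办法是不依赖捕获,而是把所需的值显式作为参数传入。下面用普通 TypeScript 演示这种参数化写法(仅作示意:普通 TypeScript 本身支持闭包,此处只展示在 AssemblyScript 中同样可行的风格):

```typescript
// 示意:不在回调中捕获外部变量,而是用显式参数与普通循环传递状态
function sumWithOffset(values: number[], offset: number): number {
  let total = 0;
  for (let i = 0; i < values.length; i++) {
    // offset 作为参数传入,而不是从外部作用域捕获
    total += values[i] + offset;
  }
  return total;
}

console.log(sumWithOffset([1, 2, 3], 10)); // 36
```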
+[Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli)是一个命令行界面,便于开发人员对The Graph执行命令。它处理[Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/),并编译[mappings](/subgraphs/developing/creating/assemblyscript-mappings/),以创建将Subgraph部署到[Subgraph Studio](https://thegraph.com/studio/)和网络所需的文件。 ## 开始 ### 安装 Graph CLI -The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +Graph CLI是用TypeScript编写的,您必须安装 `node`,以及 `npm` 或 `yarn`,才能使用它。请查看[最新](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true)CLI版本。 在本地计算机上,运行以下命令之一: @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +`graph init`命令可用于从现有合约或示例Subgraph中设置新的Subgraph项目。如果您已经将智能合约部署到首选网络,则可以基于该合约引导一个新的子图来开始。 ## 创建子图 ### 基于现有合约 -The following command creates a subgraph that indexes all events of an existing contract:
+- 如果缺少任何可选参数,它将引导您完成交互式表单。 -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- `` 是您在[Subgraph Studio](https://thegraph.com/studio/) 中的子图 ID,可以在您的子图详细信息页面上找到。 ### 基于子图示例 -The following command initializes a new project from an example subgraph: +以下命令从示例子图初始化新项目: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- [这个示例子图](https://github.com/graphprotocol/example-subgraph)基于 Dani Grant 的 Gravity 合约,该合约管理用户的头像,并在头像创建或更新时发出`NewGravatar` 或`UpdateGravatar`事件。 -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- 子图通过将`Gravatar`实体写入Graph Node存储并确保这些实体根据事件进行更新来处理这些事件。 -### Add New `dataSources` to an Existing Subgraph +### 将新`数据源`添加到现有子图 -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`数据源`是子图的关键组成部分。它们定义了子图索引和处理的数据来源。`dataSource`指定要监听哪个智能合约、处理哪些事件以及如何处理它们。 -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +最近版本的Graph CLI支持通过`graph add`命令向现有子图添加新的`数据源`: ```sh graph add
[] @@ -82,38 +82,24 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -#### Specifics +#### 详情 -The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. +`graph add`命令将从Etherscan获取ABI(除非使用`--abi`选项指定了ABI路径),并创建一个新的`dataSource`,这与 `graph init` 命令通过 `--from-contract` 创建 `dataSource` 的方式类似,并会相应地更新模式和映射。这允许您从代理合约中索引实现合约。 -- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +- `--merge-entities`选项标识开发人员希望如何处理`entity`和`event`名称冲突: - - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + - 如果为`true`:新的`dataSource`应该使用现有的`eventHandlers`和`entities`。 - - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + - 如果为`false`:应使用`${dataSourceName}{EventName}`创建新的`entity`和`event`处理程序。 -- The contract `address` will be written to the `networks.json` for the relevant network. +- 合约`address`将写入相关网络的`networks.json`。 -> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +> 注意:使用交互式CLI时,在成功运行`graph init`后,将提示您添加新的`dataSource`。 -### Getting The ABIs +### 获取 ABI ABI 文件必须与您的合约相匹配。 获取 ABI 文件的方法有以下几种: - 如果您正在构建自己的项目,您可以获取最新的 ABI。 -If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date.
Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| 版本 | Release 说明 | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +- 如果您正在为公共项目构建子图,则可以将该项目下载到您的计算机,并通过使用[`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts),或使用 `solc` 进行编译来获取 ABI。 +- 您还可以在[Etherscan](https://etherscan.io/)上找到 ABI,但这并不总是可靠的,因为在那里上传的 ABI 可能已过期。 请确保您拥有正确的 ABI,否则您的子图将会运行失败。 diff --git a/website/src/pages/zh/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/zh/subgraphs/developing/creating/ql-schema.mdx index 6e24e832c85d..af89158103ba 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/zh/subgraphs/developing/creating/ql-schema.mdx @@ -1,28 +1,28 @@ --- -title: The Graph QL Schema +title: GraphQL 模式 --- ## 概述 -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. 
+您的子图的模式位于 `schema.graphql` 文件中。GraphQL模式是使用GraphQL接口定义语言定义的。 -> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. +> 注意:如果您从未写过GraphQL模式,建议您先阅读这篇关于GraphQL类型系统的入门读物。 GraphQL模式的参考文档可以在 [GraphQL API](/subgraphs/querying/graphql-api/)部分找到。 -### Defining Entities +### 定义实体 -Before defining entities, it is important to take a step back and think about how your data is structured and linked. +在定义实体之前,有必要先退一步,思考你的数据是如何组织和关联的。 -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. -- It may be useful to imagine entities as "objects containing data", rather than as events or functions. -- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. -- Each type that should be an entity is required to be annotated with an `@entity` directive. -- By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity. - - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. - - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. Immutable entities are much faster to write and to query so they should be used whenever possible.
+- 所有查询都将根据子图模式定义的数据模型进行。 因此,子图模式的设计应以您的应用程序需要执行的查询为依据。 +- 将实体想象成“含有数据的对象”,而不是事件或函数,可能会有帮助。 +- 您在 `schema.graphql` 中定义实体类型,Graph节点将生成顶级字段,用于查询该实体类型的单个实例和集合。 +- 每个应成为实体的类型都需要使用 `@entity` 指令注明。 +- 默认情况下,实体是可变的,意味着映射可以加载现有实体,修改它们并存储该实体的新版本。 + - 可变性是有代价的,对于那些永远不会被修改的实体类型,例如包含从链上原样提取的数据的类型,建议使用`@entity(immutable: true)`将其标记为不可变。 + - 如果更改发生在创建实体的同一区块中,映射可以对不可变实体进行更改。 不可变实体的写入和查询速度要快得多,因此应尽可能使用。 -#### 好代码的例子 +#### 好示例 -The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. +下面的 `Gravatar` 实体围绕 Gravatar 对象构建,是如何定义实体的一个很好的示例。 ```graphql type Gravatar @entity { @@ -36,7 +36,7 @@ type Gravatar @entity { #### 坏榜样 -The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. +下面的示例中,`GravatarAccepted` 和 `GravatarDeclined` 实体都基于事件。 不建议将事件或函数调用以 1:1 的方式映射到实体。 ```graphql type GravatarAccepted @entity { @@ -56,32 +56,32 @@ type GravatarDeclined @entity { #### 可选和必选字段 -Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error:
+每个实体必须有一个 `id` 字段,其类型必须是 `Bytes!`或者`String!`。通常建议使用`Bytes!`,除非 `id` 包含人类可读的文本,因为有`Bytes!` `id`的实体比使用`String!``id`的写入和查询速度会更快。`id` 字段充当主钥,并且需要在同一类型的所有实体中是唯一的。由于历史原因,类型 `ID!`也被接受,是 `String!`的同义词。 -For some entity types the `id` for `Bytes!` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id) ` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`. +对于`Bytes!`的某些实体类型,`id` 是由另外两个实体的 id 构成的; 这可以使用 `concat`,例如,`let id = left.id.concat(right.id) ` 来从`left`和`right`的 id 构成 id。类似地,要从现有实体的 id 和`count`构造 id,可以使用 `let id = left.id.concatI32(count)` 。只要`left`的长度对于所有这样的实体都是相同的,这种串联就一定会产生唯一的 id,例如,因为 `left.id`是一个 `Address`。 ### 内置标量类型 #### GraphQL 支持的标量 -The following scalars are supported in the GraphQL API: +GraphQL API支持以下缩写: | 类型 | 描述 | | --- | --- | | `Bytes` | 字节数组,表示为十六进制字符串。 通常用于以太坊hash和地址。 | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. 
Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| `String` | `string`值的标量。不支持空字符,将被自动删除。 | +| `Boolean` | `boolean` 值的标量。 | +| `Int` | GraphQL spec定义`Int`为一个带符号的32位整数。 | +| `Int8` | 一个8字节有符号整数,也称为64位有符号整数,可以存储从-9,223,372,036,854,775,808到9,223,372,036,854,775,807的范围内的值。建议使用此类型来表示以太坊中的`i64`。 | +| `BigInt` | 大整数。 用于以太坊的 `uint32`, `int64`, `uint64`, ..., `uint256` 类型。 注意:`uint32`以下的所有类型,例如`int32`, `uint24`或`int8`都表示为`i32`。 | +| `BigDecimal` | `BigDecimal`表示为有效数字和指数的高精度小数。 指数范围是 -6143 到 +6144。 四舍五入到 34 位有效数字。 | +| `Timestamp` | 它是一个微秒的`i64`值。通常用于时间序列和聚合的`时间戳`字段。 | ### 枚举类型 @@ -95,19 +95,19 @@ enum TokenStatus { } ``` -Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field: +在模式中定义枚举后,您可以使用枚举值的字符串表示形式在实体上设置枚举字段。 例如,您可以将 `tokenStatus` 设置为 `SecondOwner`,方法是首先定义您的实体,然后使用 `entity.tokenStatus = "SecondOwner"` 设置字段。 下面的示例演示了带有枚举字段的 Token 实体: -More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). 
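上文提到可以用枚举值的字符串形式给实体字段赋值(例如把 `tokenStatus` 设为 `SecondOwner`)。下面用普通 TypeScript 做一个示意(这里的 `Token` 类仅作演示,实际项目中应使用 `graph codegen` 生成的实体类):

```typescript
// 示意:枚举字段以字符串形式赋值;取值与模式中的 TokenStatus 枚举一致
type TokenStatus = "OriginalOwner" | "SecondOwner" | "ThirdOwner";

class Token {
  constructor(
    public id: string,
    public tokenStatus: TokenStatus,
  ) {}
}

const token = new Token("0x1", "OriginalOwner");
token.tokenStatus = "SecondOwner"; // 用枚举值的字符串表示设置字段
console.log(token.tokenStatus); // SecondOwner
```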
+有关编写枚举的更多详细信息,请参阅[GraphQL文档](https://graphql.org/learn/schema/)。 ### 实体关系 -一个实体可能与模式中的一个或多个其他实体发生联系。 您可以在您的查询中遍历这些联系。 Graph 中的联系是单向的。 可以通过在关系的任一“端”上定义单向关系来模拟双向关系。 +一个实体可能与模式中的一个或多个其他实体发生联系。 您可以在您的查询中遍历这些联系。 The Graph 中的联系是单向的。 可以通过在关系的任一“端”上定义单向关系来模拟双向关系。 关系是在实体上定义的,就像任何其他字段一样,除了指定的类型是另一个实体类型。 #### 一对一关系 -Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: +定义一个 `Transaction` 实体类型,该类型与一个 `TransactionReceipt` 实体类型具有可选的一对一关系: ```graphql type Transaction @entity(immutable: true) { @@ -123,7 +123,7 @@ type TransactionReceipt @entity(immutable: true) { #### 一对多关系 -Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: +定义一个 `TokenBalance` 实体类型,它与代币实体类型具有必需的一对多关系: ```graphql type Token @entity(immutable: true) { @@ -139,13 +139,13 @@ type TokenBalance @entity { ### 反向查找 -Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived.
+反向查找可以通过 `@derivedFrom` 字段在实体上定义。这会在实体上创建一个虚拟字段,该字段可以查询,但不能通过映射 API 手动设置;它是从另一个实体上定义的关系派生而来的。对于这种关系,很少有必要存储关系的双方;只存储一方并派生另一方时,索引和查询性能都会更好。 -对于一对多关系,关系应始终存储在“一”端,而“多”端应始终派生。 以这种方式存储关系,而不是在“多”端存储实体数组,将大大提高索引和查询子图的性能。 通常,应尽可能避免存储实体数组。 +对于一对多的关系,这种关系应该始终保存在“一”的一端,而“多”的一端应该始终被派生出来。 以这种方式存储关系,而不是在“多”的一端存储实体数组,将大大提高索引和查询Subgraph的性能。 一般而言,应尽量避免存储实体数组。 #### 示例 -We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: +通过派生一个 `tokenBalances` 字段,我们可以从代币实体访问该代币的余额: ```graphql type Token @entity(immutable: true) { @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +下面是如何为具有反向查找的子图撰写映射的示例: ```typescript let token = new Token(event.address) // Create Token @@ -178,7 +178,7 @@ tokenBalance.save() #### 示例 -Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +定义从`User`实体类型向`Organization`实体类型的反向查找。 在下面的例子中,这是通过在 `Organization` 实体内查找 `members` 属性实现的。 在查询时,`User`上的 `organizations` 字段将通过找到所有包含该用户ID的 `Organization` 实体来解析。 ```graphql type Organization @entity { @@ -194,7 +194,7 @@ type User @entity { } ``` -A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like +存储这种关系的更高效的方法是通过一个映射表,该表为每一个 `User` / `Organization` 配对提供一个条目,其模式类似于: ```graphql type Organization @entity { @@ -231,11 +231,11 @@ query usersWithOrganizations { } ``` -这种存储多对多关系的更精细的方式将导致为子图存储的数据更少,因此子图的索引和查询速度通常会大大加快。 +这种存储多对多关系的更精细的方式,将导致为子图存储的数据更少,因此,子图的索引和查询速度通常会大大加快。 ### 向模式添加注释 -As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`.
This is illustrated in the example below: +根据 GraphQL 规范,可以使用双引号`#`在模式实体属性上方添加注释。 这在下面的示例中进行了说明: ```graphql type MyFirstEntity @entity { @@ -251,7 +251,7 @@ type MyFirstEntity @entity { 全文查询定义包括查询名称、用于处理文本字段的语言词典、用于对结果进行排序的排序算法,以及搜索中包含的字段。 每个全文查询可能跨越多个字段,但所有包含的字段必须来自单个实体类型。 -To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. +要添加全文查询,请在 GraphQL 模式中包含带有全文指令的 `_Schema_` 类型。 ```graphql type _Schema_ @@ -274,7 +274,7 @@ type Band @entity { } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/subgraphs/querying/graphql-api/#queries) for a description of the fulltext search API and more example usage. +示例`bandSearch` 字段可以用来在查询中根据`name`、`description`、和`bio`字段中的文本文档过滤`Band` 实体。 跳转到[GraphQL API - 查询](/subgraphs/querying/graphql-api/#queries)以了解全文搜索 API 和更多示例用法。 ```graphql query { @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. 
+> **[特征管理](#experimental-features):**从 `specVersion` `0.0.4` 及以后,`fullTextSearch` 必须在子图清单的`features` 部分下声明。 ## 支持的语言 @@ -295,30 +295,30 @@ query { 支持的语言词典: -| Code | 词典 | -| ------ | --------- | -| simple | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | 葡萄牙语 | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| 代码 | 词典 | +| ------ | ---------- | +| simple | 通用 | +| da | 丹麦语 | +| nl | 荷兰语 | +| en | 英语 | +| fi | 芬兰语 | +| fr | 法语 | +| de | 德语 | +| hu | 匈牙利语 | +| it | 意大利语 | +| no | 挪威语 | +| pt | 葡萄牙语 | +| ro | 罗马尼亚语 | +| ru | 俄语 | +| es | 西班牙语 | +| sv | 瑞典语 | +| tr | 土耳其语 | ### 排序算法 支持的排序结果算法: -| Algorithm | Description | -| ------------- | --------------------------------------------------------------- | -| rank | 使用全文查询的匹配质量 (0-1) 对结果进行排序。 | -| proximityRank | Similar to rank but also includes the proximity of the matches. | +| 算法 | 说明 | +| ------------- | --------------------------------------------- | +| rank | 使用全文查询的匹配质量 (0-1) 对结果进行排序。 | +| proximityRank | 与 rank 类似,但也包括匹配的接近程度。 | diff --git a/website/src/pages/zh/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/zh/subgraphs/developing/creating/starting-your-subgraph.mdx index d00c872abc59..60544fa53eaf 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/zh/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -1,23 +1,35 @@ --- -title: Starting Your Subgraph +title: 开始你的子图 --- ## 概述 -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph上已经有数千个可供查询的子图,因此请查看[The Graph Explorer](https://thegraph.com/explorer),找到一个已经符合您需求的子图。 -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +当你创建一个[子图](/subgraphs/developing/subgraphs/)时,你会创建一个自定义的开放API,从区块链中提取数据,处理数据,存储数据,并通过GraphQL轻松查询。 -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +子图的开发范围从简单的脚手架子图到高级的、专门定制的子图。 -### Start Building +### 开始构建 -Start the process and build a subgraph that matches your needs: +启动该过程并构建一个符合您需求的子图: -1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component -3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema -4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +1. [安装CLI](/subgraphs/developing/creating/install-the-cli/) - 设置您的基础架构 +2. [子图清单](/subgraphs/developing/creating/subgraph-manifest/) - 了解子图的关键组成部分 +3. [GraphQL 模式](/subgraphs/developing/creating/ql-schema/) - 编写你的模式 +4. [编写AssemblyScript映射](/subgraphs/developing/creating/assemblyscript-mappings/) - 编写映射 +5. [高级功能](/subgraphs/developing/creating/advanced/) - 使用高级功能自定义子图 -Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
+探索[API的其他资源](/subgraphs/developing/creating/graph-ts/README/),并使用[Matchstick](/subgraphs/developing/creating/unit-testing-framework/)进行本地测试。 + +| 版本 | 发布说明 | | :-: | --- | | 1.2.0 | 添加了对[索引参数过滤器](/#indexed-argument-filters--topic-filters) 的支持,并声明了`eth_call`。 | | 1.1.0 | 支持[时间序列和聚合](#timeseries-and-aggregations)。为`id`添加了对`Int8`类型的支持。 | | 1.0.0 | 支持[`indexerHints`](/developing/creating-a-subgraph/#indexer-hints)功能以修剪子图。 | | 0.0.9 | 支持`endBlock`功能。 | | 0.0.8 | 添加了对轮询[块处理程序](/developing/creating-a-subgraph/#polling-filter)和[初始化处理程序](/developing/creating-a-subgraph/#once-filter)的支持。 | | 0.0.7 | 添加了对[文件数据源](/developing/creating-a-subgraph/#file-data-sources)的支持。 | | 0.0.6 | 支持快速的[索引证明](/indexing/overview/#what-is-a-proof-of-indexing-poi) 计算变体。 | | 0.0.5 | 添加了对可以访问交易收据的事件处理程序的支持。 | | 0.0.4 | 添加了对管理子图功能的支持。 | diff --git a/website/src/pages/zh/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/zh/subgraphs/developing/creating/subgraph-manifest.mdx index 486f06a4c248..9aa577956956 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/zh/subgraphs/developing/creating/subgraph-manifest.mdx @@ -1,35 +1,35 @@ --- -title: Subgraph Manifest +title: 子图清单 --- ## 概述 -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+子图清单 `subgraph.yaml` 定义了您的子图索引的智能合约和网络,这些合约中需要关注的事件,以及如何将事件数据映射到 Graph 节点存储并允许查询的实体。 -The **subgraph definition** consists of the following files: +**子图定义**由几个文件组成: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: 包含子图清单 -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: 一个 GraphQL 模式文件,它定义了为您的子图存储哪些数据,以及如何通过 GraphQL 查询这些数据 -- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) +- `mapping.ts`:[AssemblyScript映射](https://github.com/AssemblyScript/assemblyscript)将事件数据转换为模式中定义的实体的代码(例如本指南中的`mapping.ts`) -### Subgraph Capabilities +### 子图功能 -A single subgraph can: +一个子图可以: -- Index data from multiple smart contracts (but not multiple networks). +- 索引来自多个智能合约(但不是多个网络)的数据。 -- Index data from IPFS files using File Data Sources. +- 使用文件数据源对IPFS文件中的数据进行索引。 -- Add an entry for each contract that requires indexing to the `dataSources` array. +- 在`dataSources`数组中为每个需要索引的合约添加一个条目。 -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+子图清单的完整规范可以在[这里](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md)找到。 -For the example subgraph listed above, `subgraph.yaml` is: +对于上面列出的示例子图,`subgraph.yaml`是: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -77,49 +77,49 @@ dataSources: file: ./src/mapping.ts ``` -## Subgraph Entries +## 子图条目 -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> 重要提示:请确保在子图清单中填入所有处理程序和[实体](/subgraphs/developing/creating/ql-schema/)。 清单中要更新的重要条目是: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`:标识子图支持的清单结构和功能的semver版本。最新版本是`1.3.0`。有关功能和版本的更多详细信息,请参阅[specVersion版本](#specversion-releases) 部分。 -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`:关于子图是什么的人类可读的描述。 当子图部署到Subgraph Studio时,Graph Explorer 会显示此描述。 -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`:可以找到子图清单存储库的 URL。 这也由 Graph Explorer显示。 -- `features`: a list of all used [feature](#experimental-features) names. +- `features`:所有使用的[功能](#experimental-features)名称的列表。 -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+- `indexerHints.prune`:定义子图的历史区块数据的保留情况。请参见[indexerHints](#indexer-hints) 章节中的[prune](#prune)。 -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`:子图所索引的智能合约的地址,以及要使用的智能合约的ABI。 地址是可选的; 省略它允许索引来自所有合约的匹配事件。 -- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. +- `dataSources.source.startBlock`:数据源开始索引的区块的可选编号。 在大多数情况下,我们建议使用创建合约的区块。 -- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. +- `dataSources.source.endBlock`:数据源停止索引的区块的可选编号,包括该区块。所需的最低 specVersion 为:`0.0.9`。 -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`:可以在子图映射中使用的键值对。支持各种数据类型,如`Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, 和 `BigInt`。每个变量需要指定其`type` 和`data`。这些上下文变量随后可以在映射文件中访问,为子图开发提供更多可配置选项。 -- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. +- `dataSources.mapping.entities`:数据源写入存储的实体。 每个实体的模式在 schema.graphql 文件中定义。 -- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings.
+- `dataSources.mapping.abis`:源合约以及您在映射中与之交互的任何其他智能合约的一个或多个命名 ABI 文件。 -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`:列出此子图响应的智能合约事件,以及映射中(示例中为./src/mapping.ts)将这些事件转换为存储中实体的处理程序。 -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`:列出此子图响应的智能合约函数,以及映射中将函数调用的输入和输出转换为存储中实体的处理程序。 -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`:列出此子图响应的区块,以及当区块被追加到链上时运行的映射处理程序。 如果没有过滤器,区块处理程序将在每个区块运行。 可以通过向处理程序添加带有 `kind: call` 的 `filter` 字段来提供可选的调用过滤器。 只有当区块包含至少一个对数据源合约的调用时,处理程序才会运行。 -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +单个子图可以索引来自多个智能合约的数据。只需在 `dataSources` 数组中为每个需要索引数据的合约添加一个条目。 -## Event Handlers +## 事件处理程序 -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +子图中的事件处理程序对区块链上智能合约发出的特定事件做出反应,并触发子图清单中定义的处理程序。这使得子图能够根据定义的逻辑处理和存储事件数据。 -### Defining an Event Handler +### 定义事件处理程序 -An event handler is declared within a data source in the subgraph's YAML configuration.
It specifies which events to listen for and the corresponding function to execute when those events are detected. +事件处理程序在子图的YAML配置中的数据源内声明。它指定了要监听的事件以及检测到这些事件时要执行的相应函数。 ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,15 +149,15 @@ dataSources: ## 调用处理程序 -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +虽然事件提供了一种收集合约状态相关变化的有效方法,但许多合约避免生成日志以优化燃气成本。 在这些情况下,子图可以订阅对数据源合约的调用。 这是通过定义引用函数签名的调用处理程序,及处理对该函数调用的映射处理程序来实现的。 为了处理这些调用,映射处理程序将接收一个 `ethereum.Call` 作为参数,其中包含调用的类型化输入和输出。 在交易调用链中的任何深度进行的调用都会触发映射,从而捕获通过代理合约与数据源合约的交互活动。 调用处理程序只会在以下两种情况之一触发:当指定的函数被合约本身以外的账户调用时,或者当它在 Solidity 中被标记为外部,并作为同一合约中另一个函数的一部分被调用时。 -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network.
+> **注意:** 调用处理程序目前依赖于 Parity 跟踪 API。某些网络,如 BNB 链和 Arbitrum,不支持此 API。如果索引其中一个网络的子图包含一个或多个调用处理程序,它将不会开始同步。子图开发人员应该使用事件处理程序。它们比调用处理程序性能好得多,并且在每个 evm 网络上都受到支持。 ### 定义调用处理程序 -To define a call handler in your manifest, simply add a `callHandlers` array under the data source you would like to subscribe to. +要在清单中定义调用处理程序,只需在您要订阅的数据源下添加一个 `callHandlers` 数组。 ```yaml dataSources: @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -182,11 +182,11 @@ dataSources: handler: handleCreateGravatar ``` -The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. +`function` 是用于筛选调用的规范化函数签名。 `handler` 属性是映射中您希望在数据源合约中调用目标函数时执行的函数名称。 ### 映射函数 -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +每个调用处理程序都有一个参数,该参数的类型对应于被调用函数的名称。 在上面的示例子图中,映射包含一个处理程序,用于处理 `createGravatar` 函数被调用的情况,并接收 `CreateGravatarCall` 作为参数: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -201,7 +201,7 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`.
+`handleCreateGravatar` 函数接受一个新的 `CreateGravatarCall`,它是 `@graphprotocol/graph-ts`提供的`ethereum.Call` 的子类,包含调用的类型化输入和输出。 `CreateGravatarCall` 类型是在您运行 `graph codegen` 时为您生成的。 ## 区块处理程序 @@ -209,16 +209,16 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ### 支持的过滤器 -#### 调用筛选器 +#### 调用过滤器 ```yaml filter: kind: call ``` -_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ +对于每个包含对定义处理程序的合约(数据源)调用的区块,相应的处理程序都会被调用一次。 -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **注意:** `call` 过滤器目前依赖于 Parity 跟踪 API。某些网络,如 BNB 链和 Arbitrum,不支持此 API。如果索引其中一个网络的子图包含一个或多个带有 `call` 过滤器的区块处理程序,它将不会开始同步。 块处理程序没有过滤器将确保每个块都调用处理程序。对于每种过滤器类型,一个数据源只能包含一个块处理程序。 @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -247,11 +247,11 @@ dataSources: kind: call ``` -#### 投票筛选器 +#### 轮询过滤器 -> **Requires `specVersion` >= 0.0.8** +> **要求 `specVersion` >= 0.0.8** > -> **Note:** Polling filters are only available on dataSources of `kind: ethereum`. +> **注意:** 轮询过滤器仅适用于`kind: ethereum`的数据源。 ```yaml blockHandlers: @@ -261,13 +261,13 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +所定义的处理程序将在每`n`个区块被调用一次,其中`n`的值由`every`字段提供。这种配置允许子图以固定的区块间隔执行特定的操作。 -#### 一次性筛选器 +#### 一次性过滤器 -> **Requires `specVersion` >= 0.0.8** +> **要求 `specVersion` >= 0.0.8** > -> **Note:** Once filters are only available on dataSources of `kind: ethereum`.
+> **注意**:一次性过滤器仅适用于`kind: ethereum`的数据源。 ```yaml blockHandlers: @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### 映射函数 -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +映射函数将接收 `ethereum.Block` 作为其唯一参数。 与事件的映射函数一样,此函数可以访问存储中现有的子图实体、调用智能合约、以及创建或更新实体。 ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -311,13 +311,13 @@ eventHandlers: handler: handleGive ``` -An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. +只有当签名和主题0都匹配时,才会触发事件。默认情况下,`topic0`等于事件签名的哈希值。 ## 事件处理程序中的交易接收 -Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. +从`specVersion` `0.0.5` 和 `apiVersion` `0.0.7`开始,事件处理程序可以访问发出它们的交易的收据。 -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +为此,必须在子图清单中使用新的`receipt: true`键声明事件处理程序,该键是可选的,默认为false。 ```yaml eventHandlers: @@ -326,9 +326,9 @@ eventHandlers: receipt: true ``` -Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead.
+在处理函数中,收据可以在 `Event.receipt` 字段中访问。 当`receipt`键设置为 `false` 或在清单中省略时,将返回`null`值。 -## Order of Triggering Handlers +## 触发处理程序的顺序 区块内数据源的触发器使用以下流程进行排序: @@ -338,17 +338,17 @@ Inside the handler function, the receipt can be accessed in the `Event.receipt` 这些排序规则可能会发生变化。 -> **Note:** When new [dynamic data source](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +> **注意**:当创建新的[动态数据源](#data-source-templates-for-dynamically-created-contracts)时,为动态数据源定义的处理程序只会在所有现有数据源处理程序处理完毕后开始处理,并且每次触发时都会按相同的顺序重复处理。 ## 数据源模板 EVM兼容智能合约中的一种常见模式是使用注册表或工厂合约,其中一个合约创建、管理或引用任意数量的其他合约,每个合约都有自己的状态和事件。 -The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. +这些子合约的地址可能事先知道,也可能不知道,其中许多合约可能会随着时间的推移而创建和/或添加。这就是为什么在这种情况下,定义单个数据源或固定数量的数据源是不可能的,需要一种更动态的方法:_数据源模板_。 ### 主合约的数据源 -First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.org) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created onchain by the factory contract.
+首先,您需要为主合约定义一个常规数据源。 下面的代码片段显示了[Uniswap](https://uniswap.org) 交换工厂合约的简化示例数据源。 注意 `NewExchange(address,address)` 事件处理程序。 当工厂合约在链上创建新交换合约时,会发出此事件。 ```yaml dataSources: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -375,7 +375,7 @@ dataSources: ### 动态创建合约的数据源模板 -Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a pre-defined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. +然后,将*数据源模板*添加到清单中。 它们与常规数据源相同,只是在 `source` 下缺少预先定义的合约地址。 通常,您需要为母合约管理或引用的每种类型的子合约定义一个模板。 ```yaml dataSources: @@ -411,7 +411,7 @@ templates: ### 实例化数据源模板 -In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. +在最后一步中,您可以更新主合约映射,以便从其中一个模板创建动态数据源实例。 在此示例中,您将更改主合约映射以导入 `Exchange` 模板,并在其上调用 `Exchange.create(address)` 方法,从而开始索引新交换合约。 ```typescript import { Exchange } from '../generated/templates' @@ -423,13 +423,13 @@ export function handleNewExchange(event: NewExchange): void { } ``` -> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> **注意**: 新的数据源只会处理创建它的区块和所有后续区块的调用和事件,而不会处理历史数据,也就是包含在先前区块中的数据。 > > 如果先前的区块包含与新数据源相关的数据,最好通过读取合约的当前状态,并在创建新数据源时创建表示该状态的实体来索引该数据。 ### 数据源背景 -Data source contexts allow passing extra configuration when instantiating a template.
In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: +数据源上下文允许在实例化模板时传递额外的配置。在我们的示例中,假设交易所与特定的交易对相关联,该交易对包含在`NewExchange`事件中。这些信息可以传递到实例化的数据源中,如下所示: ```typescript import { Exchange } from '../generated/templates' @@ -441,7 +441,7 @@ export function handleNewExchange(event: NewExchange): void { } ``` -Inside a mapping of the `Exchange` template, the context can then be accessed: +在 `Exchange` 模板的映射中,可以访问上下文: ```typescript import { dataSource } from '@graphprotocol/graph-ts' @@ -450,11 +450,11 @@ let context = dataSource.context() let tradingPair = context.getString('tradingPair') ``` -There are setters and getters like `setString` and `getString` for all value types. +对于所有的值类型,都有像 `setString` 和 `getString` 这样的 setter 和 getter。 ## 起始区块 -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +`startBlock` 是一个可选配置,允许您定义数据源从区块链中的哪个区块开始索引。 设置起始区块允许数据源跳过潜在的数百万个不相关的区块。 通常,子图开发人员会将 `startBlock` 设置为创建数据源智能合约的区块。 ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -480,55 +480,70 @@ dataSources: handler: handleNewEvent ``` -> **Note:** The contract creation block can be quickly looked up on Etherscan: +> **注意**: 合约创建区块可以在 Etherscan 上快速查找: > > 1. 通过在搜索栏中输入合约地址来搜索合约。 -> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 2. 单击 `Contract Creator` 部分中的创建交易哈希。 > 3.
加载交易详情页面,您将在其中找到该合约的起始区块。 -## Indexer Hints +## 索引人提示 -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +`indexerHints`设置位于子图的清单文件中,为索引人提供有关处理和管理子图的指令。它影响数据处理、索引策略和优化等操作决策。目前,它包括`prune`选项,用于管理历史数据的保留或修剪。 -> This feature is available from `specVersion: 1.0.0` +> 此功能从 `specVersion: 1.0.0` 起可用 -### Prune +### 修剪 -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`:定义子图的历史区块数据的保留策略。可选项包括: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`:不进行历史数据的修剪;保留全部历史记录。 +2. `"auto"`:保留索引人设置的最小必要历史记录,优化查询性能。 +3. 指定一个具体的数字:设置保留历史区块的自定义限制数量。这允许开发者根据特定的应用需求和存储容量,精确控制保留的历史区块数,从而在维持历史数据完整性和优化资源使用之间找到平衡。 ``` indexerHints: prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities.
+在子图的上下文中,“历史”一词是关于存储反映可变实体旧状态的数据。 -History as of a given block is required for: +以下情况需要给定区块的历史记录: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries),可以查询子图历史中特定区块处这些实体的过去状态。 +- 在该区块,将子图用作另一个子图中的[graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)。 +- 将子图回滚到该区块。 -If historical data as of the block has been pruned, the above capabilities will not be available. +如果截至该区块的历史数据已被修剪,则上述功能将不可用。 -> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. +> 通常建议使用`"auto"`,因为它可以最大限度地提高查询性能,对于大多数不需要访问大量历史数据的用户来说已经足够了。 -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states.
Below are examples of how to configure both options in your subgraph's settings: +对于利用[时间旅行查询](/subgraphs/querying/graphql-api/#time-travel-queries)的子图,建议设置一个特定的区块数来保留历史数据,或使用`prune: never`以保留所有历史实体状态。以下是在子图设置中配置这两个选项的示例: -To retain a specific amount of historical data: +要保留特定数量的历史数据: ``` indexerHints: prune: 1000 # Replace 1000 with the desired number of blocks to retain ``` -To preserve the complete history of entity states: +要保留实体状态的完整历史记录: ``` indexerHints: prune: never ``` + +## specVersion 版本发布 + +| 版本 | 发布说明 | | :-: | --- | | 1.3.0 | 添加了对 [Subgraph 合成](/cookbook/subgraph-composition-three-sources) 的支持。 | | 1.2.0 | 添加了对[索引参数过滤器](/developing/creating/advanced/#indexed-argument-filters--topic-filters) 的支持,并声明了`eth_call`。 | | 1.1.0 | 支持[时间序列和聚合](/developing/creating/advanced/#timeseries-and-aggregations)。为`id`添加了对`Int8`类型的支持。 | | 1.0.0 | 支持[`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints)功能以修剪子图。 | | 0.0.9 | 支持`endBlock`功能。 | | 0.0.8 | 添加了对轮询[块处理程序](/developing/creating/subgraph-manifest/#polling-filter)和[初始化处理程序](/developing/creating/subgraph-manifest/#once-filter)的支持。 | | 0.0.7 | 添加了对[文件数据源](/developing/creating/advanced/#ipfsarweave-file-data-sources)的支持。 | | 0.0.6 | 支持快速的[索引证明](/indexing/overview/#what-is-a-proof-of-indexing-poi) 计算变体。 | | 0.0.5 | 添加了对可以访问交易收据的事件处理程序的支持。 | | 0.0.4 | 添加了对管理子图功能的支持。 | diff --git a/website/src/pages/zh/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/zh/subgraphs/developing/creating/unit-testing-framework.mdx index fb9703c0fdff..0bb085db72c6 100644 --- a/website/src/pages/zh/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/zh/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,52 +2,52 @@ title: 单元测试框架 --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/).
Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +学习如何使用 Matchstick,这是一个由 [LimeChain](https://limechain.tech/) 开发的单元测试框架。Matchstick 使子图开发者能够在沙盒环境中测试其映射逻辑并成功部署其子图。 -## Benefits of Using Matchstick +## 使用Matchstick的好处 -- It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- 它用 Rust 编写,并针对高性能进行了优化。 +- 它允许您访问开发者功能,包括模拟合约调用、对存储状态进行断言、监视子图失败、检查测试性能等等。 ## 开始 -### Install Dependencies +### 安装依赖项 -In order to use the test helper methods and run tests, you need to install the following dependencies: +为了使用测试辅助方法并运行测试,您需要安装以下依赖项: ```sh yarn add --dev matchstick-as ``` -### Install PostgreSQL +### 安装PostgreSQL -`graph-node` depends on PostgreSQL, so if you don't already have it, then you will need to install it. +`graph-node`依赖于PostgreSQL,所以如果你还没有安装它,你将需要安装它。 -> Note: It's highly recommended to use the commands below to avoid unexpected errors. +> 注意:强烈建议使用下面的命令来避免意外错误。 -#### Using MacOS +#### 使用 MacOS -Installation command: +安装命令: ```sh -brew install postgresql +brew install postgresql ``` -Create a symlink to the latest libpq.5.lib _You may need to create this dir first_ `/usr/local/opt/postgresql/lib/` +创建指向最新 libpq.5.lib 的符号链接 _您可能需要先创建此目录_ `/usr/local/opt/postgresql/lib/` ```sh ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib ``` -#### Using Linux +#### 使用 Linux -Installation command (depends on your distro): +安装命令(取决于您的发行版): ```sh sudo apt install postgresql ``` -### Using WSL (Windows Subsystem for Linux) +### 使用 WSL(适用于 Linux 的 Windows 子系统) 可以使用 Docker 方法和二进制方法在 WSL 上使用 Matchstick。由于 WSL 可能有点复杂,所以这里有一些提示,以防您遇到诸如 @@ -61,13 +61,13 @@ static BYTES = Symbol("Bytes") SyntaxError: Unexpected token = /node_modules/gluegun/build/index.js:13 throw up; ``` -Please make sure you're on a newer version of Node.js graph-cli doesn't support **v10.19.0** anymore, and that is still the default version for new Ubuntu images on WSL. For instance Matchstick is confirmed to be working on WSL with **v18.1.0**, you can switch to it either via **nvm** or if you update your global Node.js. Don't forget to delete `node_modules` and to run `npm install` again after updating you nodejs! Then, make sure you have **libpq** installed, you can do that by running +请确保您使用的是较新版本的 Node.js:graph-cli 已不再支持 **v10.19.0**,而这仍然是 WSL 上新 Ubuntu 镜像的默认版本。例如,Matchstick 已确认可在 WSL 上使用 **v18.1.0** 运行,您可以通过 **nvm** 切换到该版本,或者更新您的全局 Node.js。更新 Node.js 后,别忘了删除 `node_modules` 并重新运行 `npm install`!然后,请确保您已安装 **libpq**,您可以通过运行以下命令来完成: ``` sudo apt-get install libpq-dev ``` -And finally, do not use `graph test` (which uses your global installation of graph-cli and for some reason that looks like it's broken on WSL currently), instead use `yarn test` or `npm run test` (that will use the local, project-level instance of graph-cli, which works like a charm).
For that you would of course need to have a `"test"` script in your `package.json` file which can be something as simple as +最后,不要使用 `graph test`(它使用全局安装的 graph-cli,目前在 WSL 上似乎无法正常工作),而应使用 `yarn test` 或 `npm run test`(这将使用本地的、项目级的 graph-cli 实例,可以正常运行)。为此,你当然需要在 `package.json` 文件中有一个 `"test"` 脚本,它可以像下面这样简单: ```json { @@ -85,9 +85,9 @@ And finally, do not use `graph test` (which uses your global installation of gra } ``` -### Using Matchstick +### 使用 Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +要在子图项目中使用**Matchstick**,只需打开一个终端,导航到项目的根文件夹,然后简单地运行`graph test [options] `-它下载最新的**Matchstick**二进制文件,并在测试文件夹中运行指定的测试或所有测试(如果未指定数据源标志,则运行所有现有测试)。 ### CLI 选项 @@ -109,11 +109,11 @@ graph test Gravity graph test path/to/file.test.ts ``` -**Options:** +**选项:** ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -123,21 +123,21 @@ graph test path/to/file.test.ts ``` ### Docker -From `graph-cli 0.25.2`, the `graph test` command supports running `matchstick` in a docker container with the `-d` flag. The docker implementation uses [bind mount](https://docs.docker.com/storage/bind-mounts/) so it does not have to rebuild the docker image every time the `graph test -d` command is executed.
Alternatively you can follow the instructions from the [matchstick](https://github.com/LimeChain/matchstick#docker-) repository to run docker manually. +从 `graph-cli 0.25.2` 开始,`graph test` 命令支持使用 `-d` 标志在 Docker 容器中运行 `matchstick`。Docker 实现使用[绑定挂载](https://docs.docker.com/storage/bind-mounts/),因此不必在每次执行 `graph test -d` 命令时重建 Docker 镜像。你也可以按照 [matchstick](https://github.com/LimeChain/matchstick#docker-) 仓库中的说明手动运行 Docker。 -❗ `graph test -d` forces `docker run` to run with flag `-t`. This must be removed to run inside non-interactive environments (like GitHub CI). +❗ `graph test -d` 强制`docker run` 使用标志`-t`运行。在非交互环境(如 GitHub CI)中运行时,必须移除该标志。 -❗ If you have previously ran `graph test` you may encounter the following error during docker build: +❗ 如果你以前运行过`graph test`, 在docker构建过程中可能会遇到以下错误: ```sh error from sender: failed to xattr node_modules/binary-install-raw/bin/binary-: permission denied ``` -In this case create a `.dockerignore` in the root folder and add `node_modules/binary-install-raw/bin` +在这种情况下,在根文件夹中创建 `.dockerignore` 并添加 `node_modules/binary-install-raw/bin`。 ### 配置 -Matchstick can be configured to use a custom tests, libs and manifest path via `matchstick.yaml` config file: +Matchstick可以通过`matchstick.yaml`配置文件配置为使用自定义测试、库和清单路径: ```yaml testsFolder: path/to/tests @@ -147,23 +147,23 @@ manifestPath: path/to/subgraph.yaml ### 演示子图 -You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) +你可以尝试通过克隆[Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) 来使用本指南的示例。 ### 视频教程 -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +此外,您还可以查看[“如何使用Matchstick为子图编写单元测试”系列视频](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)。 -## Tests structure +## 测试结构 -_**IMPORTANT: The test structure described below depens on `matchstick-as` version
>=0.5.0**_ +_**IMPORTANT:下面描述的测试结构取决于`matchstick-as`版本 >=0.5.0**_ ### 描述() -`describe(name: String , () => {})` - Defines a test group. +`describe(name: String , () => {})` - 定义测试组。 -**_Notes:_** +**_注意:_** -- _Describes are not mandatory. You can still use test() the old way, outside of the describe() blocks_ +- 描述不是强制性的。您仍然可以在describe()区块之外,以旧的方式使用test() 例子: @@ -178,7 +178,7 @@ describe("handleNewGravatar()", () => { }) ``` -Nested `describe()` example: +嵌套的 `describe()` 示例: ```typescript import { describe, test } from "matchstick-as/assembly/index" @@ -203,7 +203,7 @@ describe("handleUpdatedGravatar()", () => { ### 测试() -`test(name: String, () =>, should_fail: bool)` - Defines a test case. You can use test() inside of describe() blocks or independently. +`test(name: String, () =>, should_fail: bool)` - 定义测试案例。您可以在 describe() 区块内或独立使用 test()。 例子: @@ -232,11 +232,11 @@ test("handleNewGravatar() should create a new entity", () => { ### beforeAll() -Runs a code block before any of the tests in the file. If `beforeAll` is declared inside of a `describe` block, it runs at the beginning of that `describe` block. +在文件中的任何测试之前运行代码区块。如果`beforeAll`在`describe`区块内声明,它将在该`describe`区块的开头运行。 例子: -Code inside `beforeAll` will execute once before _all_ tests in the file. +`beforeAll`中的代码将在文件中的_所有_测试之前执行一次。 ```typescript import { describe, test, beforeAll } from "matchstick-as/assembly/index" @@ -263,7 +263,7 @@ describe("When entity already exists", () => { }) ``` -Code inside `beforeAll` will execute once before all tests in the first describe block +`beforeAll`中的代码将在第一个描述区块中的所有测试之前执行一次。 ```typescript import { describe, test, beforeAll } from "matchstick-as/assembly/index" @@ -292,11 +292,11 @@ describe("handleUpdatedGravatar()", () => { ### afterAll() -Runs a code block after all of the tests in the file. If `afterAll` is declared inside of a `describe` block, it runs at the end of that `describe` block.
+在文件中的所有测试之后运行代码区块。如果`afterAll`在`describe` 区块中声明,则在该`describe` 区块的末尾运行。 例子: -Code inside `afterAll` will execute once after _all_ tests in the file. +`afterAll`中的代码将在文件中的_所有_测试之后执行一次。 ```typescript import { describe, test, afterAll } from "matchstick-as/assembly/index" @@ -321,7 +321,7 @@ describe("handleUpdatedGravatar", () => { }) ``` -Code inside `afterAll` will execute once after all tests in the first describe block +`afterAll`中的代码将在第一个描述区块中的所有测试之后执行一次。 ```typescript import { describe, test, afterAll, clearStore } from "matchstick-as/assembly/index" @@ -353,9 +353,9 @@ describe("handleUpdatedGravatar", () => { ### beforeEach() -Runs a code block before every test. If `beforeEach` is declared inside of a `describe` block, it runs before each test in that `describe` block. +在每次测试之前运行代码区块。如果`beforeEach`在`describe`区块内声明,它将在该`describe`区块的每次测试之前运行。 -Examples: Code inside `beforeEach` will execute before each tests. +示例:在每次测试之前,`beforeEach`中的代码将会执行。 ```typescript import { describe, test, beforeEach, clearStore } from "matchstick-as/assembly/index" @@ -378,7 +378,7 @@ describe("handleNewGravatars, () => { ... ``` -Code inside `beforeEach` will execute only before each test in the that describe +`beforeEach` 里的代码将仅在该描述区块中的每个测试之前执行。 ```typescript import { describe, test, beforeEach } from 'matchstick-as/assembly/index' @@ -416,11 +416,11 @@ describe('handleUpdatedGravatars', () => { ### afterEach() -Runs a code block after every test. If `afterEach` is declared inside of a `describe` block, it runs after each test in that `describe` block. +在每次测试之后运行代码区块。如果`afterEach`在`describe`区块内声明,它将在该`describe`区块的每次测试之后运行。 例子: -Code inside `afterEach` will execute after every test.
+`afterEach`中的代码将在每次测试之后执行。 ```typescript import { describe, test, beforeEach, afterEach } from "matchstick-as/assembly/index" @@ -459,7 +459,7 @@ describe("handleUpdatedGravatar", () => { }) ``` -Code inside `afterEach` will execute after each test in that describe +`afterEach`中的代码将仅在该描述区块中的每个测试之后执行。 ```typescript import { describe, test, beforeEach, afterEach } from "matchstick-as/assembly/index" @@ -499,7 +499,7 @@ describe("handleUpdatedGravatar", () => { }) ``` -## 断言 +## 断言 ```typescript fieldEquals(entityType: string, id: string, fieldName: string, expectedVal: string) @@ -533,7 +533,7 @@ assertNotNull(value: T) entityCount(entityType: string, expectedCount: i32) ``` -As of version 0.6.0, asserts support custom error messages as well +从 0.6.0 版本开始,断言也支持自定义错误消息。 ```typescript assert.fieldEquals('Gravatar', '0x123', 'id', '0x123', 'Id should be 0x123') @@ -565,7 +565,7 @@ assert.dataSourceExists( ## 编写一个单元测试 -Let's see how a simple unit test would look like using the Gravatar examples in the [Demo Subgraph](https://github.com/LimeChain/demo-subgraph/blob/main/src/gravity.ts). +让我们看看使用 [Demo Subgraph](https://github.com/LimeChain/demo-subgraph/blob/main/src/gravity.ts) 中的 Gravatar 示例编写的一个简单单元测试是什么样子的。 假设我们有以下处理程序函数(以及两个帮助函数,以使我们的生活更轻松): @@ -652,13 +652,13 @@ test('Next test', () => { }) ``` -That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks. The rest of it is pretty straightforward - here's what happens: +这里有很多内容需要解读!
首先,我们注意到的一个重要问题是,我们正在从 `matchstick-as`、我们的 AssemblyScript 助手库中导入一些东西(作为npm 模块分发)。 您可以在[这里](https://github.com/LimeChain/matchstick-as)找到仓库。 `matchstick-as`为我们提供了有用的测试方法,并定义了`test()`函数,我们将用它来构建我们的测试块。 其余部分相当直截了当,下面是发生的事情: - 我们正在设置我们的初始状态并添加一个自定义的 Gravatar 实体。 -We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function; -We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events; +- 我们使用 `createNewGravatarEvent()` 函数定义两个`NewGravatar`事件对象及其数据; +- 我们调用这些事件的处理方法 - `handleNewGravatars()`,并传入我们的自定义事件列表; - 我们断定存储的状态。那是怎么实现的呢?- 我们传递一个实体类型和 id 的唯一组合。然后我们检查该实体的一个特定字段,并断定它具有我们期望的值。我们为我们添加到存储的初始 Gravatar 实体,以及当处理函数被调用时被添加的两个 Gravatar 实体都做这个。 -And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want. +- 最后,我们使用 `clearStore()` 清理存储,以便下一次测试能够以一个全新的空存储对象开始。 我们可以根据需要定义任意多的测试区块。 好了,我们创建了第一个测试!👏 @@ -668,7 +668,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im 如果一切顺利,您应该会收到以下信息: -![Matchstick saying “All tests passed!”](/img/matchstick-tests-passed.png) +![Matchstick表示“通过所有测试!”](/img/matchstick-tests-passed.png) ## 常见测试场景 @@ -754,9 +754,9 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri ### 模拟IPFS文件(from matchstick 0.4.1) -Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.
+用户可以使用 `mockIpfsFile(hash, filePath)` 函数模拟IPFS 文件。 函数接受两个参数,第一个参数是IPFS 文件哈希/路径,第二个参数是本地文件的路径。 -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +注意:在测试`ipfs.map/ipfs.mapJSON`时,必须从测试文件中导出回调函数,以便 matchstick 检测到它,如下面测试示例中的`processGravatar()` 函数: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -857,11 +857,11 @@ gravatar.save() assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0') ``` -Running the assert.fieldEquals() function will check for equality of the given field against the given expected value. The test will fail and an error message will be outputted if the values are **NOT** equal. Otherwise the test will pass successfully. +运行 assert.fieldEquals() 函数将检查给定字段是否等于给定的预期值。 如果值**不**相等,测试将失败并输出错误消息。否则测试将成功通过。 ### 与事件元数据交互 -Users can use default transaction metadata, which could be returned as an ethereum.Event by using the `newMockEvent()` function.
The following example shows how you can read/write to those fields on the Event object: +用户可以使用默认的交易元数据,这些元数据可以通过 `newMockEvent()` 函数作为 ethereum.Event 返回。 下面的示例显示您如何读/写事件对象上的那些字段: ```typescript // Read @@ -878,7 +878,7 @@ newGravatarEvent.address = Address.fromString(UPDATED_ADDRESS) assert.equals(ethereum.Value.fromString("hello"); ethereum.Value.fromString("hello")); ``` -### Asserting that an Entity is **not** in the store +### 断定实体**不**在存储中 用户可以断定实体在存储中不存在。该函数接受实体类型和id。如果实体实际上在存储中,测试将失败,并显示相关错误消息。以下是如何使用此功能的快速示例: @@ -886,7 +886,7 @@ assert.equals(ethereum.Value.fromString("hello"); ethereum.Value.fromString("hel assert.notInStore('Gravatar', '23') ``` -### Printing the whole store, or single entities from it (for debug purposes) +### 打印整个存储或其中的单个实体(用于调试目的) 您可以使用此助手功能将整个存储登载到控制台: ```sh import { logStore } from 'matchstick-as/assembly/store' logStore() ``` -As of version 0.6.0, `logStore` no longer prints derived fields, instead users can use the new `logEntity` function. Of course `logEntity` can be used to print any entity, not just ones that have derived fields. `logEntity` takes the entity type, entity id and a `showRelated` flag to indicate if users want to print the related derived entities. +从 0.6.0 版本开始,`logStore` 不再打印派生字段,用户可以改用新的 `logEntity` 函数。 当然,`logEntity`可以用于打印任何实体,而不仅仅是有派生字段的实体。 `logEntity` 接受实体类型、实体ID和一个`showRelated`标志来表示用户是否想打印相关派生实体。 ``` import { logEntity } from 'matchstick-as/assembly/store' @@ -958,16 +958,16 @@ test('Blow everything up', () => { ### 测试派生字段
+测试派生字段功能允许用户在某个实体上设置一个字段,如果另一个实体的某个字段派生自第一个实体,则该实体会自动更新。 -Before version `0.6.0` it was possible to get the derived entities by accessing them as entity fields/properties, like so: +在版本 `0.6.0` 之前,可以通过以实体字段/属性访问它们来获取派生实体,就像这样: ```typescript let entity = ExampleEntity.load('id') let derivedEntity = entity.derived_entity ``` -As of version `0.6.0`, this is done by using the `loadRelated` function of graph-node, the derived entities can be accessed the same way as in the handlers. +从版本 `0.6.0` 开始,这是通过使用 graph-node 的 `loadRelated` 函数来完成的,派生实体可以以与在处理程序中相同的方式访问。 ```typescript test('Derived fields example test', () => { @@ -1009,9 +1009,9 @@ test('Derived fields example test', () => { }) ``` -### Testing `loadInBlock` +### 测试 `loadInBlock` -As of version `0.6.0`, users can test `loadInBlock` by using the `mockInBlockStore`, it allows mocking entities in the block cache. +从版本 `0.6.0` 开始,用户可以使用 `mockInBlockStore` 测试`loadInBlock` ,它允许在区块缓存中模拟实体。 ```typescript import { afterAll, beforeAll, describe, mockInBlockStore, test } from 'matchstick-as' @@ -1040,7 +1040,7 @@ describe('loadInBlock', () => { ### 测试动态数据源 -Testing dynamic data sources can be be done by mocking the return value of the `context()`, `address()` and `network()` functions of the dataSource namespace. These functions currently return the following: `context()` - returns an empty entity (DataSourceContext), `address()` - returns `0x0000000000000000000000000000000000000000`, `network()` - returns `mainnet`. The `create(...)` and `createWithContext(...)` functions are mocked to do nothing so they don't need to be called in the tests at all. Changes to the return values can be done through the functions of the `dataSourceMock` namespace in `matchstick-as` (version 0.3.0+).
+可以通过模拟数据源命名空间的`context()`、`address()`和`network()`的返回值来测试动态数据源。 这些函数目前返回以下内容:`context()` - 返回一个空实体(DataSourceContext)、`address()` - 返回 `0x0000000000000000000000000000000000000000`、`network()` - 返回 `mainnet`。 `create(...)` 和 `createWithContext(...)` 两个函数都被模拟为不执行任何操作,因此完全不需要在测试中调用。 对返回值的更改可以通过`matchstick-as`中的`dataSourceMock`命名空间的函数进行(版本 0.3.0+)。 示例如下: @@ -1095,16 +1095,16 @@ test('Data source simple mocking example', () => { 注意,dataSourceMock.resetValues()在末尾被调用。这是因为值在更改时会被记住,如果要返回到默认值,则需要重新设置。 -### Testing dynamic data source creation +### 测试动态数据源创建 -As of version `0.6.0`, it is possible to test if a new data source has been created from a template. This feature supports both ethereum/contract and file/ipfs templates. There are four functions for this: +从版本 `0.6.0` 开始,可以测试是否从模板创建了一个新的数据源。 此功能支持 ethereum/contract 和 file/ipfs 模板。为此有四个函数: -- `assert.dataSourceCount(templateName, expectedCount)` can be used to assert the expected count of data sources from the specified template -- `assert.dataSourceExists(templateName, address/ipfsHash)` asserts that a data source with the specified identifier (could be a contract address or IPFS file hash) from a specified template was created -- `logDataSources(templateName)` prints all data sources from the specified template to the console for debugging purposes -- `readFile(path)` reads a JSON file that represents an IPFS file and returns the content as Bytes +- `assert.dataSourceCount(templateName, expectedCount)` 可以用来断言指定模板中的数据源的预期数量 +- `assert.
dataSourceExists(templateName, address/ipfsHash)` 断言从指定模板创建了带有指定标识符(可以是合约地址或 IPFS 文件哈希)的数据源 +- `logDataSources(templateName)` 将指定模板中的所有数据源打印到控制台以进行调试 +- `readFile(path)` 读取一个表示 IPFS 文件的 JSON 文件,并将内容作为 Bytes 返回 -#### Testing `ethereum/contract` templates +#### 测试`ethereum/contract`模板 ```typescript test('ethereum/contract dataSource creation example', () => { @@ -1134,7 +1134,7 @@ test('ethereum/contract dataSource creation example', () => { }) ``` -##### Example `logDataSource` output +##### 示例 `logDataSource` 输出 ```bash 🛠 { @@ -1158,11 +1158,11 @@ test('ethereum/contract dataSource creation example', () => { } ``` -#### Testing `file/ipfs` templates +#### 测试`file/ipfs`模板 -Similarly to contract dynamic data sources, users can test test file data sources and their handlers +类似于合约动态数据源,用户可以测试文件数据源及其处理程序 -##### Example `subgraph.yaml` +##### 示例 `subgraph.yaml` ```yaml ... @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1183,7 +1183,7 @@ templates: file: ./abis/GraphTokenLockWallet.json ``` -##### Example `schema.graphql` +##### 示例 `schema.graphql` ```graphql """ @@ -1203,7 +1203,7 @@ type TokenLockMetadata @entity { } ``` -##### Example `metadata.json` +##### 示例 `metadata.json` ```json { @@ -1214,7 +1214,7 @@ type TokenLockMetadata @entity { } ``` -##### Example handler +##### 示例处理程序 ```typescript export function handleMetadata(content: Bytes): void { @@ -1289,29 +1289,29 @@ test('file/ipfs dataSource creation example', () => { ## 测试覆盖率 -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+使用**Matchstick**,子图开发者可以运行一个脚本,计算编写的单元测试的测试覆盖率。 -The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. +测试覆盖工具接受已编译的测试 `wasm` 二进制文件并将它们转换为 `wat` 文件,这样就可以方便地检查`subgraph.yaml`中定义的处理程序是否已被调用。 因为代码覆盖率(以及整个测试)在 AssemblyScript 和 WebAssembly 中还处于早期阶段,**Matchstick** 无法检查分支覆盖率。 相反,我们依赖这样的断言:如果一个处理程序被调用了,它的事件/函数就已经被正确地模拟了。 -### Prerequisites +### 先决条件 -To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: +要运行在 **Matchstick** 中提供的测试覆盖功能,您需要事先准备几件事: #### 导出处理程序 -In order for **Matchstick** to check which handlers are being run, those handlers need to be exported from the **test file**. So for instance in our example, in our gravity.test.ts file we have the following handler being imported: +为了让**Matchstick** 检查哪些处理程序正在运行,这些处理程序需要从 **测试文件** 导出。 因此,例如在我们的例子中,在我们的gravity.test.ts文件中,我们有以下处理程序被导入: ```typescript import { handleNewGravatar } from '../../src/gravity' ``` -In order for that function to be visible (for it to be included in the `wat` file **by name**) we need to also export it, like this: +为了让这个函数可见(让它**以名字**写入`wat`文件),我们也需要导出它,像这样: ```typescript export { handleNewGravatar } ``` -### Usage +### 使用方法 设置好后,要运行测试覆盖工具,只需运行: @@ -1319,7 +1319,7 @@ export { handleNewGravatar } graph test -- -c ``` -You could also add a custom `coverage` command to your `package.json` file, like so: +你也可以在你的 `package.json` 文件中添加一个自定义的 `coverage` 命令,就像这样: ```typescript "scripts": { @@ -1371,13 +1371,11 @@ Global test coverage: 22.2% (2/9 handlers).
日志输出包括测试运行持续时间。下面是一个示例: -`[Thu, 31 Mar 2022 13:54:54 +0300] Program executed in: 42.270ms.` - ## 常见编译器错误 > 关键:无法从具有背景的有效模块创建WasmInstance:未知导入:wasi_snapshot_preview1::尚未定义fd_write -This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/subgraphs/developing/creating/graph-ts/api/#logging-api) +这意味着您在代码中使用了`console.log`,而 AssemblyScript 不支持它。请考虑使用 [日志API](/subgraphs/developing/creating/graph-ts/api/#logging-api)。 > ERROR TS2554: Expected ? arguments, but got ?. > @@ -1391,11 +1389,11 @@ This means you have used `console.log` in your code, which is not supported by A > > in ~lib/matchstick-as/assembly/defaults.ts(24,12) -The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. +参数不匹配是由`graph-ts`和`matchstick-as`版本不匹配造成的。 解决这类问题的最佳方法是将所有内容更新到最新发布的版本。 ## 其他资源 -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +如需任何额外支持,请查看此[使用Matchstick的演示子图仓库](https://github.com/LimeChain/demo-subgraph#readme_)。 ## 反馈 diff --git a/website/src/pages/zh/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/zh/subgraphs/developing/deploying/multiple-networks.mdx index 3608f13cb405..969ec7b95d03 100644 --- a/website/src/pages/zh/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/zh/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,16 +1,17 @@ --- -title: Deploying a Subgraph to Multiple Networks +title: 将子图部署到多个网络 +sidebarTitle: 部署到多个网络 --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/).
+本页介绍如何将子图部署到多个网络。要部署子图,需要首先安装[Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli)。如果尚未创建子图,请参见[创建子图](/developing/creating-a-subgraph/)。 ## 将子图部署到多个网络 在某些情况下,您需要将相同的子图部署到多个网络,而不复制其所有代码。随之而来的主要挑战是这些网络上的合约地址不同。 -### Using `graph-cli` -Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: +### 使用`graph-cli` +`graph build`(从`v0.29.0`版本开始)和`graph deploy`(从`v0.32.0`版本开始)都接受两个新选项: ```sh Options: --network Network configuration to use from the networks config file --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +您可以使用`--network`选项从`json`标准文件(默认为`networks.json`)中指定网络配置,以便在开发期间轻松更新子图。 -> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. +> 注意: `init`命令现在将根据提供的信息自动生成 `networks.json`。然后,您就可以更新现有的或添加其他网络。 -If you don't have a `networks.json` file, you'll need to manually create one with the following structure: +如果您没有 `networks.json` 文件,您则需要手动创建一个具有以下结构的文件: ```json { @@ -52,9 +53,9 @@ If you don't have a `networks.json` file, you'll need to manually create one wit } ``` -> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. +> 注意:您不必在配置文件中指定任何`模板`(如果有),只需指定`dataSources`。如果`subgraph.yaml`文件中声明了任何`模板`,则其网络将自动更新为`--network`选项指定的网络。 -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +现在,让我们假设您希望能够将子图部署到`mainnet` 和 `sepolia`网络中,这是您的`subgraph.yaml`: ```yaml # ...
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +`build` 命令将使用 `sepolia` 配置更新 `subgraph.yaml`,然后重新编译子图。你的`subgraph.yaml`现在应该是这样的: ```yaml # ... @@ -111,9 +112,9 @@ dataSources: kind: ethereum/events ``` -Now you are ready to `yarn deploy`. +现在你已经准备好运行 `yarn deploy` 了。 -> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: +> 注意: 如前所述,从 `graph-cli 0.32.0` 开始,您可以使用`--network`选项直接运行`yarn deploy`: ```sh # Using default networks.json file @@ -125,9 +126,9 @@ yarn deploy --network sepolia --network-file path/to/config ### 使用 subgraph.yaml 模板 -One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +使用较旧的`graph-cli`版本对合约地址等方面进行参数化的一种方法是使用[Mustache](https://mustache.github.io/)或[Handlebars](https://handlebarsjs.com/)等模板系统生成部分内容。 -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +为了说明这种方法,我们假设应该使用不同的合约地址将子图部署到 mainnet 和 Sepolia。然后,您可以定义两个配置文件,提供每个网络的地址: ```json { @@ -145,7 +146,7 @@ To illustrate this approach, let's assume a subgraph should be deployed to mainn } ``` -Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: +除此之外,您还需要使用变量占位符`{{network}}`和`{{address}}`替换清单中的网络名称和地址,并将清单重命名为例如 `subgraph.template.yaml`: ```yaml # ...
@@ -162,7 +163,7 @@ dataSources: kind: ethereum/events ``` -In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: +为了向任何一个网络生成一个清单,您可以向`package.json` 添加两个额外的命令以及对`mustache`的依赖: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +要为 mainnet 或 Sepolia 部署这个子图,现在只需运行以下两个命令之一: ```sh # Mainnet: @@ -189,21 +190,21 @@ yarn prepare:mainnet && yarn deploy yarn prepare:sepolia && yarn deploy ``` -A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). +可以在[这里](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759)找到一个工作示例。 -**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. +**注意:**这种方法也可以应用于更复杂的情况,在这种情况下,需要替换的不仅仅是合约地址和网络名称,或者也需要从模板生成映射或 ABI。 -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+这将为您提供`chainHeadBlock`,您可以将其与子图上的`latestBlock`进行比较,以检查它是否落后。`synced` 表明子图是否曾经赶上链。如果没有发生错误,`health`当前可以取`healthy`的值,如果有错误导致子图的进度停止,则可以取`failed`的值。在这种情况下,您可以查看`fatalError`字段以了解此错误的详细信息。 ## 子图工作室子图封存策略 -A subgraph version in Studio is archived if and only if it meets the following criteria: +Studio中的子图版本只有在满足以下条件时才会存档: -- The version is not published to the network (or pending publish) -- The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- 该版本未发布到网络(或等待发布) +- 该版本创建于45天或更早之前 +- 该子图已有30天未被查询 -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +此外,当部署新版本时,如果子图尚未发布,则子图的N-2版本将被存档。 受此策略影响的每个子图都有一个选项,可以回复有问题的版本。 @@ -211,7 +212,7 @@ In addition, when a new version is deployed, if the subgraph has not been publis 如果子图成功同步,这是一个好信号,表明它将永远运行良好。然而,网络上的新触发器可能会导致子图遇到未经测试的错误条件,或者由于性能问题或节点操作符的问题,子图开始落后。 -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node公开了一个GraphQL端点,您可以查询该端点以检查子图的状态。在托管服务上,可以在`https://api.thegraph.com/index-node/graphql`使用。在本地节点的默认情况下,在`8030/graphql`端口上可用。此端点的完整架构可以在[此处](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql)找到。以下是一个检查子图当前版本状态的示例查询: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of }
`health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +这将为您提供`chainHeadBlock`,您可以将其与子图上的`latestBlock`进行比较,以检查它是否落后。`synced` 表明子图是否曾经赶上链。如果没有发生错误,`health`当前可以取`healthy`的值,如果有错误导致子图的进度停止,则可以取`failed`的值。在这种情况下,您可以查看`fatalError`字段以了解此错误的详细信息。 diff --git a/website/src/pages/zh/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/zh/subgraphs/developing/deploying/using-subgraph-studio.mdx index 0e20b7f2a2a0..889dbf99a4a8 100644 --- a/website/src/pages/zh/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/zh/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -1,39 +1,39 @@ --- -title: Deploying Using Subgraph Studio +title: 部署到子图工作室 --- -Learn how to deploy your subgraph to Subgraph Studio. +了解如何将子图部署到 Subgraph Studio。 -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain.
+> 注意:当您部署子图时,您将它推送到子图工作室,在那里您将能够测试它。 重要的是要记住部署与发布不一样。当你发布子图时,你是将它发布到链上。 -## Subgraph Studio Overview +## 子图工作室概述 -In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: +在 [Subgraph Studio](https://thegraph.com/studio/), 您可以做以下操作: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph +- 查看您创建的子图列表 +- 管理、查看详细信息并可视化特定子图的状态 - 为特定子图创建和管理 API 密钥 -- Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network -- Manage your billing +- 将您的 API 密钥限制在指定的域名,并且只允许某些索引人使用它们进行查询 +- 创建子图 +- 使用 The Graph CLI 部署你的子图 +- 在 Playground 环境中测试你的子图 +- 使用开发查询 URL 在预发布环境中集成你的子图 +- 将你的子图发布到 The Graph 网络 +- 管理您的账单 -## Install The Graph CLI +## 安装The Graph CLI -Before deploying, you must install The Graph CLI. +部署前,您必须安装 The Graph CLI。 -You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use The Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +你必须安装[Node.js](https://nodejs.org/)和你选择的包管理器(`npm`、`yarn` 或 `pnpm`)才能使用 The Graph CLI。检查[最新的](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true)CLI版本。 -### Install with yarn +### 使用 yarn 安装 ```bash yarn global add @graphprotocol/graph-cli ``` -### Install with npm +### 使用 npm 安装 ```bash npm install -g @graphprotocol/graph-cli @@ -41,97 +41,91 @@ npm install -g @graphprotocol/graph-cli ## 开始 -1. Open [Subgraph Studio](https://thegraph.com/studio/). -2. Connect your wallet to sign in. - - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3.
After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +1. 打开 [Subgraph Studio](https://thegraph.com/studio/)。 +2. 连接您的钱包以登录。 + - 您可以通过MetaMask、Coinbase Wallet、WalletConnect或 Safe 做到这一点。 +3. 登录后,您唯一的部署密钥将显示在您的子图详细信息页面上。 + - 部署密钥允许您发布子图或管理您的 API 密钥和计费。它是唯一的,但如果您认为它已经泄露,可以重新生成。 -> Important: You need an API key to query subgraphs +> 重要:您需要 API 密钥查询子图 ### 如何在子图工作室中创建子图 -> For additional written detail, review the [Quick Start](/subgraphs/quick-start/). +> 欲了解更多书面详情,请查看[快速启动](/subgraphs/quick-start/)。 -### 子图与图形网络的兼容性 +### 子图与The Graph网络的兼容性 -In order to be supported by Indexers on The Graph Network, subgraphs must: +为了得到The Graph网络上的索引人支持,子图必须对[支持的网络](/supported-networks/)进行索引。有关支持和不支持功能的完整列表,请查看[功能支持列表](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md)仓库。 -- Index a [supported network](/supported-networks/) -- 不得使用以下任何功能: - - ipfs.cat & ipfs.map - - 非致命错误 - - 嫁接 +## 初始化你的子图 -## Initialize Your Subgraph - -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +一旦你的子图在子图工作室中被创建,你可以用这个命令初始化子图代码: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +您可以在 Subgraph Studio 的子图详细信息页面找到``值,查看下面的图像: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+运行`graph init`后,您将被要求输入合约地址、网络和您想要查询的ABI。 这将在您的本地机器上生成一个新文件夹,带有一些基本代码来开始在您的子图上工作。 然后你可以完成你的子图,确保它按预期工作。 ## Graph 认证 -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +在将你的子图部署到子图工作室之前,你需要在 CLI 内登入你的账户。要做到这一点,您将需要您的部署密钥,您可以在您的子图详细信息页面找到。 -Then, use the following command to authenticate from the CLI: +然后,使用下面的命令从 CLI 进行身份验证: ```bash graph auth ``` -## Deploying a Subgraph +## 部署子图 -Once you are ready, you can deploy your subgraph to Subgraph Studio. +一旦你准备好了,你可以将你的子图部署到子图工作室。 -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> 使用 CLI 部署子图会将其推送到工作室,在那里你可以测试它并更新元数据。 此操作不会将你的子图发布到去中心化的网络。 -Use the following CLI command to deploy your subgraph: +使用下面的 CLI 命令来部署您的子图: ```bash graph deploy ``` -After running this command, the CLI will ask for a version label. +在运行此命令后,CLI将要求一个版本标签。 -- It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as `v1`, `version1`, or `asdf`. -- The labels you create will be visible in Graph Explorer and can be used by curators to decide if they want to signal on a specific version or not, so choose them wisely. +- 强烈建议使用[semver](https://semver.org/) 进行版本控制,如`0.0.1`。尽管如此,您可以自由选择任何字符串作为版本,比如:`v1`,`version1`,`asdf`。 +- 这些标签将在Graph Explorer中可见,并可由策展人用来决定是否要在这个版本上发出信号,所以要明智地选择它们。 -## Testing Your Subgraph +## 测试子图 -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+部署后,你可以测试你的子图(在 Subgraph Studio 或在你自己的应用中,使用部署查询 URL),部署另一个版本,更新元数据,并在你准备就绪时发布到 [Graph Explorer](https://thegraph.com/explorer)。

-Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph.
+使用子图工作室检查仪表板上的日志,并查找您的子图中的任何错误。

-## Publish Your Subgraph
+## 发布子图

-In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+要成功发布您的子图,请审阅[发布子图](/subgraphs/developing/publishing/publishing-a-subgraph/)。

-## Versioning Your Subgraph with the CLI
+## 使用 CLI 对子图进行版本控制

-If you want to update your subgraph, you can do the following:
+如果你想要更新子图,可以做以下操作:

-- You can deploy a new version to Studio using the CLI (it will only be private at this point).
-- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer).
-- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index.
+- 你可以使用 CLI 部署一个新版本到 Studio(此时它只是私密的)。
+- 一旦你对它满意,你可以发布你的新部署到 [Graph Explorer](https://thegraph.com/explorer)。
+- 此操作将创建你的子图的新版本,策展人可以开始对其发出信号,索引人可以对其进行索引。

-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment.
+您也可以在不发布新版本的情况下更新子图的元数据。您可以通过勾选 [Graph Explorer](https://thegraph.com/explorer) 中名为 **Update Details** 的选项,在 Studio 中更新您的子图详细信息(个人资料图片、名称、描述等)。如果勾选此项,将生成一笔链上交易,在 Explorer 中更新子图详细信息,而无需通过新的部署发布新版本。

-> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal.
You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
+> 注意:将新版本的子图发布到网络需要一定费用。除了交易费外,您还必须为自动迁移信号的部分策展税提供资金。如果策展人没有在子图上发出信号,您将无法发布新版本。欲了解更多信息,请阅读[此处](/resources/roles/curating/)。

## 子图版本的自动归档

-Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio.
+每当您在 Subgraph Studio 中部署新的子图版本时,以前的版本都会被归档。归档版本不会被索引/同步,因此无法查询。您可以在 Subgraph Studio 中取消归档已归档的子图版本。

-> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived.
+> 注意:部署到 Studio 的未发布子图的以前版本将被自动归档。

-![Subgraph Studio - Unarchive](/img/Unarchive.png)
+![Subgraph Studio - 取消归档](/img/Unarchive.png)
diff --git a/website/src/pages/zh/subgraphs/developing/developer-faq.mdx b/website/src/pages/zh/subgraphs/developing/developer-faq.mdx
index dab117b8f2b5..552af24751f1 100644
--- a/website/src/pages/zh/subgraphs/developing/developer-faq.mdx
+++ b/website/src/pages/zh/subgraphs/developing/developer-faq.mdx
@@ -1,71 +1,71 @@
---
-title: Developer FAQ
-sidebarTitle: FAQ
+title: 开发者常见问题
+sidebarTitle: 常见问题
---

-This page summarizes some of the most common questions for developers building on The Graph.
+本页总结了在 The Graph 上进行构建的开发者最常见的一些问题。

-## Subgraph Related
+## 子图相关

-### 什么是子图?
+### 1. 什么是子图?

-A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query.
+子图是基于区块链数据构建的自定义API。子图使用GraphQL查询语言进行查询,并使用Graph CLI部署到Graph节点。一旦部署并发布到 The Graph 的去中心化网络,索引人就会处理子图,并使其可供子图消费者查询。

-### 2. What is the first step to create a subgraph?
+### 2. 创建子图的第一步是什么?

-To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
+要成功创建子图,您需要安装Graph CLI。请查看[快速启动](/subgraphs/quick-start/)以开始操作。详情请参阅[创建子图](/developing/creating-a-subgraph/)。

-### 3. Can I still create a subgraph if my smart contracts don't have events?
+### 3. 如果我的智能合约没有事件,还能创建子图吗?

-It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data.
+强烈建议您构建智能合约,以使事件与您有兴趣查询的数据相关联。 子图中的事件处理程序由合约事件触发,是迄今为止检索有用数据的最快方式。

-If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
+如果您正在使用的合约不包含事件,您的子图可以使用调用和区块处理程序来触发索引。但不建议这样做,因为这会显著降低性能。

-### 4. 我可以更改与我的子图关联的 GitHub 账户吗?
+### 4. 可以更改与我的子图关联的 GitHub 账户吗?

-No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph.
+不可以。一旦创建了子图,就不能更改关联的 GitHub 账户。 在创建子图之前,请务必仔细考虑这一点。

-### 5. How do I update a subgraph on mainnet?
+### 5. 如何更新主网上的子图?

-You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+您可以使用 CLI 将新版本的子图部署到 Subgraph Studio。此时子图是私有的,但如果您对它感到满意,您可以将其发布到 Graph Explorer。这将创建一个新版本的子图,策展人可以开始对其发出信号。

-### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
+### 6. 是否可以在不重新部署的情况下,将子图复制到另一个账户或端点?

您必须重新部署子图,但如果子图 ID(IPFS hash)没有更改,则不必从头开始同步。

-### 7.
How do I call a contract function or access a public state variable from my subgraph mappings?
+### 7. 如何从我的子图映射中调用合约函数,或访问公共状态变量?

-Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).
+请查看 [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state) 部分中的`访问智能合约状态`。

-### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+### 8. 我可以将 `ethers.js` 或其他 JS 库导入到子图映射吗?

-Not currently, as mappings are written in AssemblyScript.
+目前不行,因为映射是在 AssemblyScript 中写的。

-One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client.
+一个可能的解决办法是将原始数据存储在实体中,并在客户端执行需要 JS 库的逻辑。

-### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
+### 9. 在监听多个合约时,是否可以选择监听事件的合约顺序?

在子图中,无论是否跨多个合约,事件始终按照它们在区块中出现的顺序进行处理的。

-### 10. How are templates different from data sources?
+### 10. 模板与数据源有何不同?

-Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
+模板允许您在子图索引时快速创建数据源。当人们与之交互时,您的合约可能会产生新的合约,并且由于您预先知道这些合约的架构(ABI、事件等),您可以定义您希望如何在模板中索引它们。当这些合约生成后,您的子图将通过提供合约地址来创建动态数据源。

-Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).
+查看“实例化数据源模板”部分:[数据源模板](/developing/creating-a-subgraph/#data-source-templates)。

-### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+### 11. 是否可以使用`graph-cli`中的`graph init`和两个合约来设置一个子图?或者,在运行`graph init`后,我应该在`subgraph.yaml`中手动添加另一个数据源吗?

-Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other.
+是的。在`graph init` 命令中,您可以依次输入多个合约来添加多个数据源。

-You can also use `graph add` command to add a new dataSource.
+您也可以使用 `graph add` 命令来添加新的数据源。

-### 12. In what order are the event, block, and call handlers triggered for a data source?
+### 12. 数据源的事件、区块和调用处理程序以什么顺序触发?

-Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change.
+事件和调用处理程序首先按区块内的交易索引排序。同一交易中的事件和调用处理程序按照约定排序:先是事件处理程序,然后是调用处理程序,每种类型遵守清单中定义的顺序。区块处理程序在事件和调用处理程序之后运行,顺序按清单中的定义。这些排序规则也可能会有变化。

-When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered.
+注意:当创建新的动态数据源时,为动态数据源定义的处理程序只会在所有现有数据源处理程序处理完毕后开始处理,并且每次触发时都会按相同的顺序重复处理。

-### 13. How do I make sure I'm using the latest version of graph-node for my local deployments?
+### 13. 如何确保我使用最新版本的 graph-node 进行本地部署?

您可以运行以下命令:

@@ -73,25 +73,25 @@ When new dynamic data source are created, the handlers defined for dynamic data
docker pull graphprotocol/graph-node:latest
```

-> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node.
+> 注意:docker / docker-compose 将始终使用您第一次运行时拉取的 graph-node 版本,因此请确保您使用的是最新版本的 graph-node。

-### 14.
What is the recommended way to build "autogenerated" ids for an entity when handling events?
+### 14. 在处理事件时,为实体构建“自动生成”id 的推荐方法是什么?

-If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique.
+如果在事件期间只创建了一个实体并且没有更好的其他方法,那么交易hash + log索引的组合是唯一的。您可以先将其转换为字节(Bytes),再通过 `crypto.keccak256` 处理来混淆,但这不会使其更加唯一。

-### 15. Can I delete my subgraph?
+### 15. 我可以删除我的子图吗?

-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph.
+是的,你可以[删除](/subgraphs/developing/managing/deleting-a-subgraph/)以及[传输](/subgraphs/developing/managing/transferring-a-subgraph/) 你的子图。

-## Network Related
+## 网络相关问题

-### 16. What networks are supported by The Graph?
+### 16. The Graph 支持哪些网络?

-You can find the list of the supported networks [here](/supported-networks/).
+您可以在[这里](/supported-networks/)找到支持的网络列表。

-### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers?
+### 17. 是否可以在事件处理程序中区分网络(主网、Sepolia、本地)?

-Yes. You can do this by importing `graph-ts` as per the example below:
+是的,您可以按照下面的示例导入`graph-ts`来完成这项工作:

```javascript
import { dataSource } from '@graphprotocol/graph-ts'
@@ -100,21 +100,21 @@
dataSource.network()
dataSource.address()
```

-### 18. Do you support block and call handlers on Sepolia?
+### 18. 您是否支持Sepolia上的区块和调用处理程序?

-Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network.
+是的,Sepolia 支持区块处理程序、调用处理程序和事件处理程序。应当指出,事件处理程序的性能远远超过其他两种处理程序,并且在每个 EVM 兼容的网络上都得到支持。

-## Indexing & Querying Related
+## 索引和查询相关内容

-### 19.
Is it possible to specify what block to start indexing on?
+### 19. 是否可以指定从哪个特定区块开始索引?

-Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
+是的,`subgraph.yaml`文件中的`dataSources.source.startBlock`可以指定数据源开始索引的区块编号。 在大多数情况下,我们建议使用创建合约的区块:[开始区块](/developing/creating-a-subgraph/#start-blocks)。

-### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync
+### 20. 有没有一些提高索引性能的技巧?我的子图需要很长时间才能同步。

-Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
+是的,您应该看看可选的起始区块功能,以便从部署合约的区块开始索引:[起始区块](/developing/creating-a-subgraph/#start-blocks)。

-### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. 有没有办法直接查询子图,来确定它索引的最新区块号是多少?

是的! 请尝试以下命令,并将“organization/subgraphName”替换为发布的组织和子图名称:

@@ -122,25 +122,25 @@ Yes, you should take a look at the optional start block feature to start indexin
curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql
```

-### 22. Is there a limit to how many objects The Graph can return per query?
+### 22. The Graph 每次查询可以返回多少个对象有限制吗?

-By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with:
+默认情况下,每个集合的查询响应限制为 100 个项目。如果您想获取更多,每个集合最多可以返回 1000 个项目;超过此数量,可以使用以下方式分页:

```graphql
someCollection(first: 1000, skip: ) { ... }
```

-### 23.
If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
+### 23. 如果我的 dapp 前端使用The Graph 进行查询,我是否需要将我的 API 密钥直接写入前端?如果我们为用户支付查询费用,恶意用户会不会导致我们的查询费用非常高?

-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+目前,推荐的 dapp 方法是将密钥添加到前端并将其公开给最终用户。 也就是说,您可以将该密钥限制为某个主机名(例如 *yourdapp.io*)和子图。 网关目前由 Edge & Node 运营。 网关的部分职责是监控滥用行为,并阻止来自恶意客户端的流量。

-## Miscellaneous
+## 其他

-### 24. Is it possible to use Apollo Federation on top of graph-node?
+### 24. 可以在 graph-node 之上使用 Apollo Federation 吗?

-Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service.
+尚不支持 Federation。目前,您可以在客户端或通过代理服务使用 schema stitching(模式拼接)。

-### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories?
+### 25. 我想贡献或添加一个 GitHub 问题。我在哪里可以找到开源存储库?

- [graph-node](https://github.com/graphprotocol/graph-node)
- [graph-tooling](https://github.com/graphprotocol/graph-tooling)
diff --git a/website/src/pages/zh/subgraphs/developing/introduction.mdx b/website/src/pages/zh/subgraphs/developing/introduction.mdx
index a34bc90855b3..35b41f069997 100644
--- a/website/src/pages/zh/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/zh/subgraphs/developing/introduction.mdx
@@ -1,31 +1,31 @@
---
-title: Introduction to Subgraph Development
+title: 子图开发导论
sidebarTitle: 介绍
---

-To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start/).
+要立即开始编码,请转到[开发人员快速入门](/subgraphs/quick-start/)。

## 概述

-As a developer, you need data to build and power your dapp.
Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
+作为一名开发人员,您需要数据来构建和驱动您的dapp。查询和索引区块链数据具有挑战性,但The Graph为这个问题提供了一个解决方案。

-On The Graph, you can:
+在The Graph上, 你可以:

-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1. 使用 Graph CLI 和 [Subgraph Studio](https://thegraph.com/studio/) 创建、部署和发布子图到The Graph。
+2. 使用 GraphQL 查询现有子图。

-### What is GraphQL?
+### 什么是GraphQL?

-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- [GraphQL](https://graphql.org/learn/) 是 API 的查询语言,也是使用您现有数据执行这些查询的运行时。The Graph 使用 GraphQL 查询子图。

-### Developer Actions
+### 开发者操作

-- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
-- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
-- Deploy, publish and signal your subgraphs within The Graph Network.
+- 查询其他开发者在 [The Graph网络](https://thegraph.com/explorer) 中构建的子图,并将它们整合到您自己的 dapp 中。
+- 创建自定义子图以满足特定的数据需要,为其他开发者提供更高的可扩展性和灵活性。
+- 在The Graph网络中部署、发布你的子图并为其添加信号。

-### What are subgraphs?
+### 什么是子图?

-A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+子图是基于区块链数据的自定义 API。它从区块链中提取数据,处理并存储,以便能够轻松地通过 GraphQL 查询。

-Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
+查看 [Subgraphs](/subgraphs/developing/subgraphs/)上的文档以了解具体情况。
diff --git a/website/src/pages/zh/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/zh/subgraphs/developing/managing/deleting-a-subgraph.mdx
index dff170e3730f..6c01e7a3e67b 100644
--- a/website/src/pages/zh/subgraphs/developing/managing/deleting-a-subgraph.mdx
+++ b/website/src/pages/zh/subgraphs/developing/managing/deleting-a-subgraph.mdx
@@ -1,31 +1,31 @@
---
-title: Deleting a Subgraph
+title: 删除子图
---

-Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/).
+使用 [Subgraph Studio](https://thegraph.com/studio/)删除您的子图。

-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
+> 删除您的子图将从 The Graph 网络中删除所有已发布的版本,但对于在其上发出过信号的用户,它仍会显示在 Graph Explorer 和 Subgraph Studio 中。

-## Step-by-Step
+## 步骤

-1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
+1. 访问 [Subgraph Studio](https://thegraph.com/studio/)上的子图页面。

-2. Click on the three-dots to the right of the "publish" button.
+2. 点击“发布”按钮右侧的三个点。

-3. Click on the option to "delete this subgraph":
+3. 单击“删除此子图”选项:

![Delete-subgraph](/img/Delete-subgraph.png)

-4. Depending on the subgraph's status, you will be prompted with various options.
+4. 视子图的状态而定,您将会收到各种选项提示。

-   - If the subgraph is not published, simply click “delete” and confirm.
-   - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
+   - 如果子图未发布,请单击“删除”并确认。
+   - 如果子图已发布,您需要先在钱包中确认,然后才能从 Studio 删除子图。如果子图被发布到多个网络,如测试网和主网,则可能需要额外的步骤。

-> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner.
+> 如果子图的所有者在其上有信号,发出信号的 GRT 将退还给所有者。

-### Important Reminders
+### 重要提示

-- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
+- 一旦您删除子图,它将**不**出现在Graph Explorer的主页。 然而,已经发出信号的用户仍然能够在他们的个人资料页面上查看它并去除他们的信号。
- 策展人将无法再对该子图发出信号。
-- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
-- Deleted subgraphs will show an error message.
+- 已经在该子图上发出信号的策展人,将能够以平均份额价格撤回他们的信号。
+- 已删除的子图将显示错误消息。
diff --git a/website/src/pages/zh/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/zh/subgraphs/developing/managing/transferring-a-subgraph.mdx
index 0fc6632cbc40..017a744b661f 100644
--- a/website/src/pages/zh/subgraphs/developing/managing/transferring-a-subgraph.mdx
+++ b/website/src/pages/zh/subgraphs/developing/managing/transferring-a-subgraph.mdx
@@ -1,42 +1,42 @@
---
-title: Transferring a Subgraph
+title: 传输子图
---

-Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+发布到去中心化网络的子图具有一个NFT,该NFT被铸造到发布子图的地址。NFT 基于标准 ERC721,该标准便于在 The Graph 网络上的账户之间进行转移。

-## Reminders
+## 提示

-- Whoever owns the NFT controls the subgraph.
-- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
-- You can easily move control of a subgraph to a multi-sig.
-- A community member can create a subgraph on behalf of a DAO.
+- 谁拥有该 NFT,谁就控制该子图。
+- 如果所有者决定出售或转移 NFT,他们将无法再编辑或更新网络上的该子图。
+- 您可以轻松地将子图的控制权转移给多重签名钱包(multi-sig)。
+- 社区成员可以代表 DAO 创建子图。

-## View Your Subgraph as an NFT
+## 将子图视为 NFT

-To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+要将您的子图视为一个 NFT,您可以访问一个 NFT 市场,例如**OpenSea**:

```
https://opensea.io/your-wallet-address
```

-Or a wallet explorer like **Rainbow.me**:
+或者像**Rainbow.me**这样的钱包浏览器:

```
https://rainbow.me/your-wallet-addres
```

-## Step-by-Step
+## 步骤

-To transfer ownership of a subgraph, do the following:
+若要转让子图的所有权,请执行以下操作:

-1. Use the UI built into Subgraph Studio:
+1. 使用内置于Subgraph Studio的界面:

![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png)

-2. Choose the address that you would like to transfer the subgraph to:
+2. 选择您想要将子图转移到的地址:

![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)

-Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea:
+你也可以使用 NFT 市场的内置用户界面,比如 OpenSea:

![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png)
diff --git a/website/src/pages/zh/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/zh/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index e1d2731b4617..f8a85bcfc0e7 100644
--- a/website/src/pages/zh/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/zh/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -1,49 +1,50 @@
---
title: 向去中心化的网络发布子图
+sidebarTitle: 发布到去中心化网络
---

-Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
+一旦你[将子图部署到Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/)并准备投入生产,您可以将其发布到去中心化网络。

-When you publish a subgraph to the decentralized network, you make it available for:
+当您将子图发布到去中心化网络时,它将可供:

-- [Curators](/resources/roles/curating/) to begin curating it.
-[Indexers](/indexing/overview/) to begin indexing it.
+- [策展人](/resources/roles/curating/) 开始策展它。
+- [索引人](/indexing/overview/) 开始索引它。

-Check out the list of [supported networks](/supported-networks/).
+查看[支持的网络列表](/supported-networks/)。

-## Publishing from Subgraph Studio
+## 从子图工作室发布

-1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard
-2. Click on the **Publish** button
-3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
+1. 转到[Subgraph Studio](https://thegraph.com/studio/) 控制面板
+2. 点击 **发布** 按钮
+3. 您的子图现在将会在 [Graph Explorer](https://thegraph.com/explorer/) 中可见。

-All published versions of an existing subgraph can:
+现有子图的所有已发布版本均可:

-- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/).
+- 发布到 Arbitrum One。[了解更多关于 Arbitrum 上的 The Graph 网络](/archived/arbitrum/arbitrum-faq/)。

-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published.
+- 不论子图发布在哪个网络上,索引任何[支持的网络](/supported-networks/)上的数据。

-### 更新已发布的子图的元数据
+### 更新已发布的子图元数据

-- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
-- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer.
-- It's important to note that this process will not create a new version since your deployment has not changed.
+- 在将子图发布到去中心化的网络后,您可以随时在子图工作室中更新元数据。
+- 一旦保存了您的更改并发布了更新,它们将出现在 Graph Explorer 中。
+- 重要的是要注意,这个过程不会创建一个新版本,因为您的部署没有改变。

-## Publishing from the CLI
+## 从 CLI 发布

-As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
+从 0.73.0 版本起,您也可以通过 [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli)发布您的子图。

-1. Open the `graph-cli`.
-2. Use the following commands: `graph codegen && graph build` then `graph publish`.
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+1. 打开 `graph-cli`。
+2. 使用以下命令:`graph codegen && graph build`,然后 `graph publish`。
+3. 一个窗口将打开,允许您连接您的钱包,添加元数据,并将您的最终子图部署到您选择的网络。

![cli-ui](/img/cli-ui.png)

-### Customizing your deployment
+### 自定义您的部署

-You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags:
+您可以将子图构建上传到指定的 IPFS 节点,并使用以下标志进一步自定义您的部署:

```
USAGE
@@ -61,34 +62,34 @@ FLAGS
```

-## Adding signal to your subgraph
+## 将信号添加到您的子图

-Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph.
+开发者可以将 GRT 信号添加到他们的子图中,激励索引人查询子图。

-- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
+- 如果子图符合索引奖励资格,提供“索引证明”的索引人将根据发出信号的 GRT 数量获得 GRT 奖励。

-- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- 您可以根据[此处](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md)的子图功能使用情况检查索引奖励资格。

-- Specific supported networks can be checked [here](/supported-networks/).
+- 可以在[此处](/supported-networks/)查看具体支持的网络。

-> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers.
+> 将信号添加到不符合奖励条件的子图将不会吸引更多索引人。
>
-> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph.
+> 如果您的子图符合奖励条件,建议您用至少 3,000 GRT 策展自己的子图,以吸引更多索引人来索引您的子图。

-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it.
This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+[Sunrise 升级索引人](/archived/sunrise/#what-is-the-upgrade-indexer) 确保对所有子图进行索引。然而,在特定子图上发出 GRT 信号将吸引更多的索引人。通过策展来激励额外的索引人,旨在通过减少延迟和提高网络可用性来提高查询的服务质量。

-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+在发出信号时,策展人可以决定在子图的特定版本上发出信号,或者使用自动迁移发出信号。如果他们使用自动迁移发出信号,策展人的份额将始终更新到开发人员发布的最新版本。如果他们决定在特定版本上发出信号,则份额将始终保留在该特定版本上。

-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
+索引人可以根据他们在Graph Explorer中看到的策展信号找到要索引的子图。

-![Explorer subgraphs](/img/explorer-subgraphs.png)
+![Explorer Subgraphs](/img/explorer-subgraphs.png)

-Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+子图工作室允许您在发布子图的同一笔交易中将 GRT 添加到子图的策展池,从而为子图添加信号。

![Curation Pool](/img/curate-own-subgraph-tx.png)

-Alternatively, you can add GRT signal to a published subgraph from Graph Explorer.
+或者,您可以从 Graph Explorer 将 GRT 信号添加到已发布的子图。

![Signal from Explorer](/img/signal-from-explorer.png)

-Learn more about [Curating](/resources/roles/curating/) on The Graph Network.
+了解更多关于 The Graph 网络上的[策展](/resources/roles/curating/)。
diff --git a/website/src/pages/zh/subgraphs/developing/subgraphs.mdx b/website/src/pages/zh/subgraphs/developing/subgraphs.mdx
index 1541bf9c2dd0..8d838fc2370c 100644
--- a/website/src/pages/zh/subgraphs/developing/subgraphs.mdx
+++ b/website/src/pages/zh/subgraphs/developing/subgraphs.mdx
@@ -2,85 +2,85 @@ title: 子图
---

-## What is a Subgraph?
+## 什么是子图?
-A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
+子图是一个自定义的开放 API,它从区块链中提取数据,对其进行处理并存储,以便通过 GraphQL 轻松查询。

-### Subgraph Capabilities
+### 子图功能

-- **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3.
-- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/).
-- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
+- **Access Data:** 子图支持为 web3 查询和索引区块链数据。
+- **Build:** 开发者可以构建、部署和发布子图到The Graph网络。若要开始,请查看子图开发者[快速入门](quick-start/)。
+- **Index & Query:** 一旦子图被索引,任何人都可以查询。 探索并查询在[Graph Explorer](https://thegraph.com/explorer)中发布到网络的所有子图。

-## Inside a Subgraph
+## 子图内部

-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+子图清单 `subgraph.yaml` 定义了您的子图索引的智能合约和网络,这些合约中需要关注的事件,以及如何将事件数据映射到 Graph 节点存储并允许查询的实体。

-The **subgraph definition** consists of the following files:
+**子图定义**由以下文件组成:

-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: 包含子图清单

-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: 一个 GraphQL 模式文件,它定义了为您的子图存储哪些数据,以及如何通过 GraphQL 查询这些数据

-- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema
+- `mapping.ts`:将事件数据转换为模式中定义的实体的 [AssemblyScript 映射](https://github.com/AssemblyScript/assemblyscript)代码

-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+要了解更多关于每个子图组件的信息,请查看 [创建子图](/developing/creating-a-subgraph/)。

## 子图生命周期

-Here is a general overview of a subgraph’s lifecycle:
+下面是子图生命周期的一般概述:

-![Subgraph Lifecycle](/img/subgraph-lifecycle.png)
+![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

-## Subgraph Development
+## 子图开发

-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. [创建子图](/developing/creating-a-subgraph/)
+2. [部署子图](/deploying/deploying-a-subgraph-to-studio/)
+3. [测试子图](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
+4. [发布子图](/subgraphs/developing/publishing/publishing-a-subgraph/)
+5. [子图上的信号](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)

-### Build locally
+### 本地构建

-Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs.
+优秀的子图从本地开发环境和单元测试开始。开发者使用 [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli),一个用于在 The Graph 上构建和部署子图的命令行界面工具。他们还可以使用 [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) 和 [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) 创建健壮的子图。

-### Deploy to Subgraph Studio
+### 部署到Subgraph Studio

-Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/).
In Subgraph Studio, you can do the following: +一旦定义后,子图就可以[部署到Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/)。在Subgraph Studio中,您可以进行以下操作: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- 使用其暂存环境来索引已部署的子图并使其可供审核。 +- 验证您的子图没有任何索引错误,能够正常工作。 -### Publish to the Network +### 发布到网络 -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +当您对您的子图满意时,您可以将其[发布](/subgraphs/developing/publishing/publishing-a-subgraph/)到The Graph网络。 -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- 这是一项链上操作,它注册子图并使索引人能够发现它。 +- 发布的子图有相应的 NFT,它定义了子图的所有权。您可以通过发送 NFT 来[转移子图的所有权](/subgraphs/developing/managing/transferring-a-subgraph/)。 +- 已发布的子图有相关的元数据,为其他网络参与者提供有用的背景和信息。 -### Add Curation Signal for Indexing +### 为索引添加策展信号 -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +如果没有策展信号,已发布的子图不太可能被索引人采集。为了鼓励索引,您应该向子图添加信号。了解更多关于在The Graph上发信号和[策展](/resources/roles/curating/)的信息。 -#### What is signal? +#### 什么是信号? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- 信号是与给定子图相关联的锁定GRT。它向索引人表明某个子图将获得查询量,并有助于增加处理该子图可获得的索引奖励。 +- 如果第三方策展人认为某个子图可能带来查询量,他们也可以在该子图上发出信号。 -### Querying & Application Development +### 查询及应用程序开发 -Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). +The Graph网络上的子图每月可获得100,000次免费查询,超出后开发者可以[使用GRT或信用卡支付查询费用](/subgraphs/billing/)。 -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +了解更多关于 [查询子图](/subgraphs/querying/introduction/)。 -### Updating Subgraphs +### 更新子图 -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +若要使用错误修正或新功能更新您的子图,请发起交易将其指向新版本。 您可以将您的子图的新版本部署到 [Subgraph Studio](https://thegraph.com/studio/) 进行开发和测试。 -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- 如果您在应用信号时选择了“自动迁移”,更新子图会将任何信号迁移到新版本并产生迁移税。 +- 这种信号迁移应促使索引人开始索引新版本的子图,因此它很快就可以进行查询。 -### Deleting & Transferring Subgraphs +### 删除并转移子图 -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
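As a hedged illustration of the querying step described above, the sketch below builds a GraphQL request body that could be POSTed to a subgraph's query endpoint. The entity name (`tokens`) and its fields are hypothetical assumptions for illustration, not taken from this guide.

```typescript
// Hypothetical sketch: building a GraphQL request body for a subgraph
// query endpoint. The entity name ("tokens") and the fields are
// illustrative assumptions, not taken from this guide.
interface GraphQLRequest {
  query: string;
}

function buildSubgraphQuery(entity: string, fields: string[], first: number): string {
  const req: GraphQLRequest = {
    // "first: N" limits the number of returned entities
    query: `{ ${entity}(first: ${first}) { ${fields.join(" ")} } }`,
  };
  return JSON.stringify(req);
}

// The resulting JSON string can be sent as the POST body to a subgraph endpoint.
console.log(buildSubgraphQuery("tokens", ["id", "symbol"], 5));
```
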
+如果您不再需要已发布的子图,您可以[删除](/subgraphs/developing/managing/deleting-a-subgraph/)或[转移](/subgraphs/developing/managing/transferring-a-subgraph/)它。删除子图会将任何已发出信号的GRT返回给[策展人](/resources/roles/curating/)。 diff --git a/website/src/pages/zh/subgraphs/explorer.mdx b/website/src/pages/zh/subgraphs/explorer.mdx index d49895c1d9f7..e2e8147cd683 100644 --- a/website/src/pages/zh/subgraphs/explorer.mdx +++ b/website/src/pages/zh/subgraphs/explorer.mdx @@ -2,36 +2,36 @@ title: Graph 浏览器 --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +使用 [Graph Explorer](https://thegraph.com/explorer)解锁子图和网络数据的世界。 ## 概述 -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer由多个部分组成,您可以在其中与[子图](https://thegraph.com/explorer?chain=arbitrum-one)互动、进行[委托](https://thegraph.com/explorer/delegate?chain=arbitrum-one)、联系[参与者](https://thegraph.com/explorer/participants?chain=arbitrum-one)、查看[网络信息](https://thegraph.com/explorer/network?chain=arbitrum-one),并访问您的用户配置文件。 -## Inside Explorer +## 浏览器内部 -The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+下面是Graph Explorer所有关键功能的细分。如需更多支持,您可以观看[Graph浏览器视频指南](/subgraphs/explorer/#video-guide)。 -### Subgraphs Page +### 子图页面 -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +在Subgraph Studio部署并发布您的子图后,前往[Graph Explorer](https://thegraph.com/explorer),然后点击导航栏中的“[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)”链接访问以下内容: -- Your own finished subgraphs -- Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- 您自己完成的子图 +- 其他人发布的子图 +- 您想要的精确子图(基于创建日期、信号数量或名称)。 -![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Explorer 图像 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +当您点击进入一个子图时,您将能够进行以下操作: -- Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- 在演练场中测试查询,并利用网络细节做出明智的决定。 +- 在自己的子图或其他人的子图上发出 GRT 信号,以使索引人意识到其重要性和质量。 - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+ - 这很关键,因为子图上的信号会激励它被索引,这意味着它将出现在网络上,最终为查询提供服务。 -![Explorer Image 2](/img/Subgraph-Details.png) +![Explorer 图像 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +在每个子图的专用页面,您可以进行以下操作: - 子图上的信号/非信号 - 查看详细信息,例如图表、当前部署 ID 和其他元数据 @@ -42,61 +42,61 @@ On each subgraph’s dedicated page, you can do the following: - 子图统计信息(分配、策展人等) - 查看发布子图的实体 -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![Explorer 图像 3](/img/Explorer-Signal-Unsignal.png) -### Delegate Page +### 委托页面 -On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer. +在[委托页面](https://thegraph.com/explorer/delegate?chain=arbitrum-one)上,您可以找到关于委托、获取GRT和选择索引人的信息。 -On this page, you can see the following: +在这个页面上,您可以看到: -- Indexers who collected the most query fees -- Indexers with the highest estimated APR +- 收取最多查询费的索引人 +- 预估APR最高的索引人 -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +此外,您可以计算您的 ROI 并通过名称、地址或子图搜索顶级索引人。 -### Participants Page +### 参与者页面 -This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. +这个页面提供了所有“参与者”的鸟瞰图,包括参与网络的每个人,如索引人、委托人和策展人。 #### 1. 索引人 -![Explorer Image 4](/img/Indexer-Pane.png) +![Explorer 图像 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +索引人是协议的骨干,是那些质押于子图、索引它们并向使用子图的任何人提供查询服务的人。 -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. +在索引人表中,您可以看到索引人的委托参数、他们的质押、他们对每个子图质押的数量,以及他们从查询费用和索引奖励中获得的收入。 -**Specifics** +**详情** -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. +- 查询费用划分 - 索引人与委托人拆分时保留的查询费用回扣百分比。 +- 有效的奖励划分 - 应用于委托池的索引奖励削减。 如果是负数,则意味着索引人正在赠送部分奖励。 如果是正数,则意味着索引人保留了他们的一些奖励。 +- 冷却时间剩余 - 索引人可以更改上述委托参数之前的剩余时间。 冷却时间由索引人在更新委托参数时设置。 +- 已拥有 - 索引人的存入份额,可能会因恶意或不正确的行为被削减。 +- 已委托 - 委托人的份额可以由索引人分配,但不能被削减。 +- 已分配 - 索引人积极分配给他们正在索引的子图的份额。 +- 可用委托容量 - 索引人在过度委托之前仍然可以收到的委托数量。 - 最大委托容量 - 索引人可以有效接受的最大委托份额数量。 超出的委托权益不能用于分配或奖励计算。 -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. -- 索引人奖励 - 这是索引者及其委托者在所有时间获得的总索引人奖励。 索引人奖励通过 GRT 发行支付。 +- 查询费用 - 这是最终用户一直以来为索引人提供的查询支付的总费用。 +- 索引人奖励 - 这是索引人及其委托人在所有时间获得的总索引人奖励。 索引人奖励通过 GRT 发行支付。 -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. +索引人可以获得查询费用和索引奖励。 从功能上讲,当网络参与者将 GRT 委托给索引人时,就会发生这种情况。 这使索引人能够根据其索引参数接收查询费用和奖励。 -- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button.
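The cut parameters above can be sketched with a small calculation. This is a deliberately simplified model of the split between an Indexer and its delegation pool (real protocol accounting involves allocations, cooldowns, and rebate mechanics not modeled here), and all numbers are hypothetical.

```typescript
// Simplified sketch of the fee/reward split described above: the cut is the
// percentage the Indexer keeps, and the remainder goes to the delegation pool.
// Numbers are hypothetical; this is not the protocol's exact accounting.
function splitWithDelegators(total: number, cutPercent: number): { indexer: number; delegators: number } {
  const indexer = (total * cutPercent) / 100;
  return { indexer, delegators: total - indexer };
}

// An Indexer with a 10% query fee cut keeps 100 GRT out of 1,000 GRT in fees.
const split = splitWithDelegators(1000, 10);
console.log(split); // { indexer: 100, delegators: 900 }
```
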
+- 索引参数可以通过点击表格的右侧来设置,或者通过进入索引人的配置文件并点击“委托”按钮来设置。 -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +要了解更多关于如何成为索引人的信息,您可以查看[官方文档](/indexing/overview/)或[The Graph学院索引人指南](https://thegraph.academy/delegators/choosing-indexers/)。 -![Indexing details pane](/img/Indexing-Details-Pane.png) +![索引详情面板](/img/Indexing-Details-Pane.png) #### 2. 策展人 -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +策展人分析子图以确定哪些子图质量最高。 一旦策展人发现一个潜在高质量的子图,他们就可以通过在其粘合曲线上发出信号来策展。 在这样做时,策展人让索引人知道哪些子图是高质量的且应该被索引。 -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. - - The bonding curve incentivizes Curators to curate the highest quality data sources. +- 策展人可以是社区成员、数据消费者,甚至是子图开发者,他们通过将 GRT 代币存入粘合曲线来在自己的子图上发出信号。 + - 通过存入GRT,策展人铸造子图的策展份额。 因此,他们可以从其发出信号的子图所产生的查询费中赚取一部分。 + - 粘合曲线激励策展人策展最高质量的数据源。 -In the The Curator table listed below you can see: +在下面列出的策展人表中,您可以看到:
+如果您想了解更多关于策展人角色的信息,您可以访问[官方文档](/resources/roles/curating/)或[The Graph学院](https://thegraph.academy/curators/)。 #### 3. 委托人 -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. +委托人在维护The Graph网络的安全和去中心化方面发挥着关键作用。 他们通过向一个或多个索引人委托(即“质押”)GRT代币来参与网络。 -- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. -- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. -- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/). +- 如果没有委托人,索引人就不太可能获得大量的报酬和费用。 因此,索引人通过向委托人提供一部分索引奖励和查询费来吸引他们。 +- 委托人根据许多不同的变量选择索引人,例如过去的表现、索引奖励率和查询费用划分。 +- 社区内部的信誉也可以在甄选过程中发挥一定作用。 建议通过[The Graph的Discord](https://discord.gg/graphprotocol)或[The Graph论坛](https://forum.thegraph.com/)与选中的索引人联系。 -![Explorer Image 7](/img/Delegation-Overview.png) +![Explorer 图像 7](/img/Delegation-Overview.png) -In the Delegators table you can see the active Delegators in the community and important metrics: +在委托人表格中,您可以看到社区中的活跃委托人和重要指标: - 委托人委托给的索引人数量 -- A Delegator's original delegation +- 委托人的原始委托 - 协议中已经产生但没有提现的奖励 - 从协议中提取的已实现奖励 - 目前在协议中的 GRT 总量 -- The date they last delegated +- 上次委托的日期 -If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
+如果您想了解更多关于如何成为委托人的信息,请查看[官方文档](/resources/roles/delegating/delegating/)或[The Graph学院](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers)。 -### Network Page +### 网络页面 -On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +在此页面上,您可以看到全局KPI,并能够切换到按时期查看、更详细地分析网络指标。 这些详细信息将让您了解网络随时间推移的表现。 #### 概述 -The overview section has both all the current network metrics and some cumulative metrics over time: +概述部分包含所有当前网络指标以及一些随时间累积的指标: - 当前网络总份额 - 索引人和他们的委托人之间的份额分配 @@ -142,12 +142,12 @@ The overview section has both all the current network metrics and some cumulativ - 协议参数,例如管理奖励、通货膨胀率等 - 当前时期奖励和费用 -A few key details to note: +需要注意的几个关键细节: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **查询费用代表消费者产生的费用**。在索引人对子图的分配被关闭、且其提供的数据已被消费者验证之后,索引人可以在至少7个时期(见下文)后领取(或不领取)这些费用。 +- **索引奖励表示索引人在时期里从网络增发中获得的奖励金额。**虽然协议增发是固定的,但只有在索引人关闭其对已索引的子图的分配时,才会铸造奖励。因此,每个时期的奖励数量各不相同(即在某些时期里,索引人可能会集体关闭已开放多日的分配)。 -![Explorer Image 8](/img/Network-Stats.png) +![Explorer 图像 8](/img/Network-Stats.png) #### 时期 @@ -159,51 +159,51 @@ A few key details to note: - 活跃时期是索引人目前正在分配权益并收取查询费用的时期 - 稳定时期是状态通道正在稳定的时期。 这意味着如果消费者对他们提出争议,索引人将受到严厉惩罚。 - 分发时期是时期的状态通道正在结算的时期,索引人可以要求他们的查询费用回扣。 - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. + - 最终确定的时期是索引人已没有查询费回扣可领取的时期。 -![Explorer Image 9](/img/Epoch-Stats.png) +![Explorer 图像 9](/img/Epoch-Stats.png) ## 您的用户资料 -Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: +无论您在网络中的角色如何,您的个人资料都是您查看网络活动的地方。 您的加密钱包将作为您的用户资料,通过用户面板,您将能够看到以下选项卡: ### 个人资料概览 -In this section, you can view the following: +在本节中,您可以查看以下内容: -- Any of your current actions you've done. -- Your profile information, description, and website (if you added one). +- 您当前完成的任何操作。 +- 您的个人资料信息、描述和网站(如果您添加了一个)。 -![Explorer Image 10](/img/Profile-Overview.png) +![Explorer 图像 10](/img/Profile-Overview.png) ### 子图标签 -In the Subgraphs tab, you’ll see your published subgraphs. +在子图选项卡中,您将看到您已发布的子图。 -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> 这将不包括为测试目的使用 CLI 部署的任何子图,子图只会在发布到去中心化网络时显示。 -![Explorer Image 11](/img/Subgraphs-Overview.png) +![Explorer 图像 11](/img/Subgraphs-Overview.png) ### 索引标签 -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+在索引选项卡中,您将找到一个包含对子图的所有活跃和历史分配的表格。 您还会找到图表,在那里可以看到和分析您过去作为索引人的表现。 本节还将包括有关您的净索引人奖励和净查询费用的详细信息。 您将看到以下指标: - 已委托份额 - 委托人的份额,您可以分配但不能被削减 - 总查询费用 - 用户在一段时间内为您提供的查询支付的总费用 - 索引人奖励- 您收到的索引人奖励总额,以 GRT 为单位 -- 费用削减 - 当您与委托人拆分时,您将保留的查询费用回扣百分比 -- 奖励削减 - 与委托人拆分时您将保留的索引人奖励的百分比 +- 费用划分 - 当您与委托人拆分时,您将保留的查询费用回扣百分比 +- 奖励划分 - 与委托人拆分时您将保留的索引人奖励的百分比 - 已拥有 - 您存入的股份,可能会因恶意或不正确的行为而被削减 -![Explorer Image 12](/img/Indexer-Stats.png) +![Explorer 图像 12](/img/Indexer-Stats.png) ### 委托标签 -Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. +委托人对Graph网络很重要。他们必须利用自己的知识选择一个能够提供健康回报的索引人。 -In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +在委托人选项卡中,您可以找到您的活跃和历史委托的详情,以及您委托的索引人的指标。 在页面的前半部分,您可以看到您的委托图表,以及仅奖励图表。 在左侧,您可以看到反映您当前委托指标的 KPI。 @@ -223,16 +223,16 @@ In the Delegators tab, you can find the details of your active and historical de ### 策展标签 -在策展选项卡中,您将找到您正在发送信号的所有子图(从而使您能够接收查询费用)。 信号允许策展人向索引人突出显示哪些子图有价值和值得信赖,从而表明它们需要被索引。 +在策展选项卡中,您将找到正在发送信号的所有子图(从而使您能够接收查询费用)。 信号允许策展人向索引人突出显示哪些子图有价值和值得信赖,从而表明它们需要被索引。 在此选项卡中,您将找到以下内容的概述: -- 您正在管理的所有带有信号细节的子图 -- 每个子图的共享总数 +- 您正在策展的所有带信号细节的子图 +- 每个子图的份额总数 - 查询每个子图的奖励 - 更新日期详情 -![Explorer Image 14](/img/Curation-Stats.png) +![Explorer 图像 14](/img/Curation-Stats.png) ### 设置您的个人资料 @@ -241,16 +241,16 @@ In the Delegators tab, you can find the details of your active and historical de - 操作员代表索引人在协议中采取有限的操作,例如打开和关闭分配。 操作员通常是其他以太坊地址,与他们的抵押钱包分开,可以访问索引人可以亲自设置的网络。 - 委托参数允许您控制 GRT 在您和您的委托人之间的分配。 -![Explorer Image 15](/img/Profile-Settings.png) +![Explorer 图像 15](/img/Profile-Settings.png) -As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.
+作为您进入去中心化数据世界的官方门户,无论您在网络中的角色如何,Graph浏览器都允许您采取各种行动。 您可以通过打开地址旁边的下拉菜单进入您的个人资料设置,然后单击设置按钮。 -![Wallet details](/img/Wallet-Details.png) +![钱包细节](/img/Wallet-Details.png) ## 其他资源 -### Video Guide +### 视频教程 -For a general overview of Graph Explorer, check out the video below: +有关 Graph 浏览器的通用概述,请查看下面的视频: diff --git a/website/src/pages/zh/subgraphs/guides/_meta.js b/website/src/pages/zh/subgraphs/guides/_meta.js index 37e18bc51651..a1bb04fb6d3f 100644 --- a/website/src/pages/zh/subgraphs/guides/_meta.js +++ b/website/src/pages/zh/subgraphs/guides/_meta.js @@ -1,4 +1,5 @@ export default { + 'subgraph-composition': '', 'subgraph-debug-forking': '', near: '', arweave: '', diff --git a/website/src/pages/zh/subgraphs/guides/arweave.mdx b/website/src/pages/zh/subgraphs/guides/arweave.mdx index 08e6c4257268..df35134c3233 100644 --- a/website/src/pages/zh/subgraphs/guides/arweave.mdx +++ b/website/src/pages/zh/subgraphs/guides/arweave.mdx @@ -1,61 +1,61 @@ --- -title: Building Subgraphs on Arweave +title: 在 Arweave 上构建子图 --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! +> Graph Node和Subgraph Studio中的Arweave支持处于测试阶段:对构建Arweave子图有任何疑问,请通过[Discord](https://discord.gg/graphprotocol)联系我们! -In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. +在本指南中,您将学习如何构建和部署子图以索引Arweave区块链。 -## What is Arweave? +## Arweave是什么? -The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted. +Arweave 协议允许开发者永久存储数据,这是 Arweave 和 IPFS 的主要区别:IPFS 缺少永久性这一特性,而存储在 Arweave 上的文件无法被更改或删除。 -Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages.
For more information you can check: +Arweave 已经构建了许多库,用于将协议集成到许多不同的编程语言中。更多信息可以查看: - [Arwiki](https://arwiki.wiki/#/en/main) - [Arweave Resources](https://www.arweave.org/build) -## What are Arweave Subgraphs? +## Arweave子图是什么? -The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). +Graph 允许您构建称为“子图”的自定义开放 API。子图用于告诉索引人(服务器操作员)在区块链上索引哪些数据,并保存在他们的服务器上,以便您能够在任何时候使用 [GraphQL](https://graphql.org/) 查询它。 -[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. +[Graph节点](https://github.com/graphprotocol/graph-node) 现在能够在 Arweave 协议上索引数据。当前的集成只是索引 Arweave 作为一个区块链(区块和交易),它还没有索引存储的文件。 -## Building an Arweave Subgraph +## 构建 Arweave 子图 -To be able to build and deploy Arweave Subgraphs, you need two packages: +为了能够构建和部署 Arweave 子图,您需要两个包: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` 高于0.30.2版本 - 这是一个用于构建和部署子图的命令行工具。[点击这里](https://www.npmjs.com/package/@graphprotocol/graph-cli)使用 `npm` 下载。 +2. `@graphprotocol/graph-ts` 高于0.27.0版本 - 这是子图特定类型的库。[点击这里](https://www.npmjs.com/package/@graphprotocol/graph-ts)使用 `npm` 下载。 -## Subgraph's components +## 子图的组成部分 -There are three components of a Subgraph: +一个子图有三个组成部分: -### 1. Manifest - `subgraph.yaml` +### 1.
清单 - `subgraph.yaml` -Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source. +定义感兴趣的数据源,以及如何处理它们。Arweave是一种新型数据源。 -### 2. Schema - `schema.graphql` +### 2. 模式 - `schema.graphql` -Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. +在这里,您可以定义在使用 GraphQL 索引子图之后希望能够查询的数据。这实际上类似于 API 的模型,其中模型定义了请求主体的结构。 -The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +[现有文档](/developing/creating-a-subgraph/#the-graphql-schema)涵盖了对 Arweave 子图的要求。 -### 3. AssemblyScript Mappings - `mapping.ts` +### 3. AssemblyScript 映射 - `mapping.ts` -This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. +这部分逻辑决定了当有人与您正在监听的数据源进行交互时,应该如何检索和存储数据。数据将被转换并根据您列出的模式进行存储。 -During Subgraph development there are two key commands: +在子图开发过程中,有两个关键命令: ``` $ graph codegen # generates types from the schema file identified in the manifest $ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` -## Subgraph Manifest Definition +## 子图清单定义 -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers.
See below for an example Subgraph manifest for an Arweave Subgraph: +子图清单`subgraph.yaml` 标识子图的数据源、感兴趣的触发器以及应该响应这些触发器而运行的函数。下面是 Arweave 子图的子图清单示例: ```yaml specVersion: 1.3.0 @@ -82,30 +82,30 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave Subgraphs introduce a new kind of data source (`arweave`) -- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` -- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet +- Arweave子图引入了一种新的数据源(`arweave`)。 +- 网络应该对应于托管Graph节点上的网络。在Subgraph Studio上,Arweave 的主网是`arweave-mainnet`。 +- Arweave 数据源引入了一个可选的 source.owner 字段,它是 Arweave 钱包的公钥 -Arweave data sources support two types of handlers: +Arweave 数据源支持两种类型的处理程序: -- `blockHandlers` - Run on every new Arweave block. No source.owner is required. -- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` +- `blockHandlers` - 在每个新的 Arweave 区块上运行,不需要 source.owner。 +- `transactionHandlers` - 在数据源的`source.owner` 是所有者的每个交易上运行。目前,`transactionHandlers`需要一个所有者,如果用户想要处理所有交易,他们应该提供""作为 `source.owner` -> The source.owner can be the owner's address, or their Public Key. +> source.owner 可以是所有者的地址,也可以是他们的公钥。 +> +> 交易是 Arweave permaweb 的构建区块,它们是终端用户创建的对象。 +> +> 注意: 目前还不支持[Irys(先前的Bundlr)](https://irys.xyz/)交易。 -> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. +## 模式定义 -> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+模式定义描述了生成的子图数据库的结构以及实体之间的关系,与原始数据源无关。[这里](/developing/creating-a-subgraph/#the-graphql-schema)有关于子图模式定义的更多细节。 -## Schema Definition +## AssemblyScript 映射 -Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +处理事件的处理程序是用 [AssemblyScript](https://www.assemblyscript.org/) 编写的。 -## AssemblyScript Mappings - -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). - -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +Arweave索引将Arweave特定的数据类型引入[AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/)。 ```tsx class Block { @@ -146,94 +146,94 @@ class Transaction { } ``` -Block handlers receive a `Block`, while transactions receive a `Transaction`. +区块处理程序接收`Block`,而交易处理程序接收`Transaction`。 -Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). +编写 Arweave 子图的映射与编写以太坊子图的映射非常相似。了解更多信息,请点击[这里](/developing/creating-a-subgraph/#writing-mappings)。 -## Deploying an Arweave Subgraph in Subgraph Studio +## 将Arweave子图部署到Subgraph Studio -Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +一旦您的子图已经在Subgraph Studio控制板上创建,您就可以通过使用`graph deploy` CLI 命令进行部署。 ```bash graph deploy --access-token ``` -## Querying an Arweave Subgraph +## 查询 Arweave 子图 -The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+Arweave 子图的 GraphQL 端点由模式定义和现有的 API 接口决定。有关更多信息,请访问 [GraphQL API 文档](/subgraphs/querying/graphql-api/)。 -## Example Subgraphs +## 示例子图 -Here is an example Subgraph for reference: +下面是一个子图的例子,以供参考: -- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Arweave 的子图示例](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) -## FAQ +## 常见问题 -### Can a Subgraph index Arweave and other chains? +### 子图可以索引 Arweave 和其他链吗? -No, a Subgraph can only support data sources from one chain/network. +不,子图只能支持来自一个链或网络的数据源。 -### Can I index the stored files on Arweave? +### 我可以索引存储在 Arweave 上的文件吗? -Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). +目前,Graph 只是将 Arweave 索引为区块链(它的区块和交易)。 -### Can I identify Bundlr bundles in my Subgraph? +### 我可以识别我的子图中的 Bundlr 包吗? -This is not currently supported. +目前还不支持。 -### How can I filter transactions to a specific account? +### 如何筛选特定账户的交易? -The source.owner can be the user's public key or account address. +source.owner 可以是用户的公钥或账户地址。 -### What is the current encryption format? +### 当前的加密格式是什么? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
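The hex-to-base64 conversion described above can be sanity-checked off-chain with Node's built-in `Buffer`. This is only an illustration: inside a subgraph mapping you would use an AssemblyScript helper (such as the one shown in this guide), since `Buffer` is not available there.

```typescript
// Off-chain sanity check (illustrative only) of the hex -> base64 conversion
// discussed above, using Node's built-in Buffer. Inside a subgraph mapping,
// an AssemblyScript helper is used instead, since Buffer is not available.
function hexToBase64(hex: string, urlSafe: boolean): string {
  const buf = Buffer.from(hex.replace(/^0x/, ""), "hex");
  // "base64url" uses "-" and "_" instead of "+" and "/", and omits "=" padding
  return buf.toString(urlSafe ? "base64url" : "base64");
}

console.log(hexToBase64("0x4d616e", false)); // "TWFu" (the bytes of "Man")
console.log(hexToBase64("0x4d61", true)); // "TWE" (no padding in base64url)
```
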
+数据通常以字节的形式传递到映射中,如果直接存储字节,则以`十六进制`格式(例如,区块和交易哈希)返回。您可能希望在映射中转换为 `base64`或 `base64 URL` 安全格式,以便与 [Arweave Explorer](https://viewblock.io/arweave/) 等区块浏览器中显示的内容相匹配。 -The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: +可以使用以下 `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` 辅助函数,并将其添加到 `graph-ts`: ``` const base64Alphabet = [ - "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", - "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", - "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", - "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", - "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/" + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/" ]; const base64UrlAlphabet = [ - "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", - "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", - "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", - "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", - "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_" + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_" ]; function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { - let alphabet = urlSafe?
base64UrlAlphabet : base64Alphabet; - - let result = '', i: i32, l = bytes.length; - for (i = 2; i < l; i += 3) { - result += alphabet[bytes[i - 2] >> 2]; - result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; - result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)]; - result += alphabet[bytes[i] & 0x3F]; - } - if (i === l + 1) { // 1 octet yet to write - result += alphabet[bytes[i - 2] >> 2]; - result += alphabet[(bytes[i - 2] & 0x03) << 4]; - if (!urlSafe) { - result += "=="; - } - } - if (!urlSafe && i === l) { // 2 octets yet to write - result += alphabet[bytes[i - 2] >> 2]; - result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; - result += alphabet[(bytes[i - 1] & 0x0F) << 2]; - if (!urlSafe) { - result += "="; - } - } - return result; + let alphabet = urlSafe? base64UrlAlphabet : base64Alphabet; + + let result = '', i: i32, l = bytes.length; + for (i = 2; i < l; i += 3) { + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)]; + result += alphabet[bytes[i] & 0x3F]; + } + if (i === l + 1) { // 1 octet yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[(bytes[i - 2] & 0x03) << 4]; + if (!urlSafe) { + result += "=="; + } + } + if (!urlSafe && i === l) { // 2 octets yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[(bytes[i - 1] & 0x0F) << 2]; + if (!urlSafe) { + result += "="; + } + } + return result; } ``` diff --git a/website/src/pages/zh/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/zh/subgraphs/guides/contract-analyzer.mdx index 084ac8d28a00..36e5030e2293 100644 --- a/website/src/pages/zh/subgraphs/guides/contract-analyzer.mdx +++ b/website/src/pages/zh/subgraphs/guides/contract-analyzer.mdx @@ -1,76 +1,92 @@ --- -title: Smart Contract Analysis with 
Cana CLI +title: 使用 Cana CLI 进行智能合约分析 --- -# Cana CLI: Quick & Efficient Contract Analysis +通过**Cana CLI**改进智能合约分析。它快速、高效,专门为EVM链设计。 -**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. +## 概述 -## 📌 Key Features +**Cana CLI** 是一个命令行工具,用于简化面向子图开发的智能合约元数据分析,支持多个EVM兼容链。它简化了检索合约细节、检测代理合约及其实现、提取ABI等操作。 -- Detect deployment blocks -- Verify source code -- Extract ABIs & event signatures -- Identify proxy and implementation contracts -- Support multiple chains +### 主要特征 -## 🚀 Installation & Setup +使用Cana CLI,您可以: -Install Cana globally using npm: +- 检测部署区块 +- 验证源代码 +- 提取ABI和事件签名 +- 识别代理合约和实现合约 +- 支持多种链 + +### 先决条件 + +在安装 Cana CLI 之前,请确保您具备: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- 区块浏览器 API 密钥 + +### 安装与设置 + +1. 安装Cana CLI + +使用 npm 全局安装: ```bash npm install -g contract-analyzer ``` -Set up a blockchain for analysis: +2. 配置Cana CLI + +设置一个区块链环境用于分析: ```bash cana setup ``` -Provide the required block explorer API and block explorer endpoint URL details when prompted. +在设置过程中,您将被提示提供所需的区块浏览器 API 密钥和区块浏览器端点URL。 -Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. +运行 `cana setup` 后,Cana CLI 会在 `~/.contract-analyzer/config.json` 创建一个配置文件。这个文件存储你的区块浏览器API凭据、端点URL和链选择首选项,供将来使用。 -## 🍳 Usage +### 步骤:使用Cana CLI进行智能合约分析 -### 🔹 Chain Selection +#### 1. 选择一个链 -Cana supports multiple EVM-compatible chains. +Cana CLI 支持多个EVM兼容的链。 -List chains added with: +使用以下命令列出已添加的链: ```bash cana chains ``` -Then select a chain with: +然后使用此命令选择一个链: ```bash cana chains --switch ``` -Once a chain is selected, all subsequent contract analases will continue on that chain. +一旦选择了一个链,随后的所有合约分析都将继续在这一链上进行。 -### 🔹 Basic Contract Analysis +#### 2.
基本合约分析 -Analyze a contract with: +运行以下命令来分析合约: ```bash cana analyze 0xContractAddress ``` -or +或者 ```bash cana -a 0xContractAddress ``` -This command displays essential contract information in the terminal using a clear, organized format. +此命令使用清晰、有条理的格式在终端中获取和显示重要的合约信息。 -### 🔹 Understanding Output +#### 3. 了解输出 -Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved: +成功检索到详细合约数据后,Cana CLI 会将结果显示在终端中,并整理到一个结构化目录里: ``` contracts-analyzed/ @@ -80,24 +96,22 @@ contracts-analyzed/ └── event-information.json # Event signatures and examples ``` -### 🔹 Chain Management +这种格式便于查阅合约元数据、事件签名和 ABI,从而方便子图开发。 -Add and manage chains: +#### 4. 链管理 + +添加并管理链: ```bash -cana setup # Add a new chain -cana chains # List configured chains -cana chains -s # Swich chains. +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains ``` -## ⚠️ Troubleshooting - -- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions. +### 故障排除 -## ✅ Requirements +缺少数据?请确保合约地址正确、合约已在区块浏览器上完成验证,并且您的 API 密钥具有所需的权限。 -- Node.js v16+ -- npm v6+ -- Block explorer API keys +### 结论 -Keep your contract analyses efficient and well-organized. 🚀 +通过Cana CLI,您可以有效地分析智能合约,提取关键的元数据,并轻松支持子图的开发。 diff --git a/website/src/pages/zh/subgraphs/guides/enums.mdx b/website/src/pages/zh/subgraphs/guides/enums.mdx index 9f55ae07c54b..e81a2b51ef42 100644 --- a/website/src/pages/zh/subgraphs/guides/enums.mdx +++ b/website/src/pages/zh/subgraphs/guides/enums.mdx @@ -1,20 +1,20 @@ --- -title: Categorize NFT Marketplaces Using Enums +title: 使用枚举对NFT市场进行分类 --- -Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. +使用枚举使代码更清晰,更不容易出错。这是一个在NFT市场上使用枚举的完整示例。 -## What are Enums? +## 枚举是什么?
-Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. +枚举或枚举类型是一种特定的数据类型,允许您定义一组特定的、允许的值。 -### Example of Enums in Your Schema +### 模式中枚举的示例 -If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. -You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. +您可以在架构中定义枚举,定义后,您可以使用枚举值的字符串表示形式在实体上设置枚举字段。 -Here's what an enum definition might look like in your schema, based on the example above: +基于上面的示例,以下是枚举定义在模式中的样子: ```graphql enum TokenStatus { @@ -24,19 +24,19 @@ enum TokenStatus { } ``` -This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. +这意味着,当您在模式中使用`TokenStatus`类型时,您希望它恰好是预定义值之一:`OriginalOwner`、`SecondOwner`或`ThirdOwner`,以确保一致性和有效性。 -To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). +要了解更多关于枚举的信息,请查看[创建子图](/developing/creating-a-subgraph/#enums) 和 [GraphQL文档](https://graphql.org/learn/schema/#enumeration-types)。 -## Benefits of Using Enums +## 使用枚举的好处 -- **Clarity:** Enums provide meaningful names for values, making data easier to understand. 
-- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. -- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. +- **清晰度:** 枚举为值提供有意义的名称,使数据更容易理解。 +- **有效性:** 枚举强制执行严格的值定义,防止无效的数据条目。 +- **可维护性:** 当您需要更改或添加新类别时,枚举允许您以专注的方式完成此操作。 -### Without Enums +### 无枚举 -If you choose to define the type as a string instead of using an Enum, your code might look like this: +如果你选择将类型定义为字符串而不是使用枚举,你的代码可能看起来像这样: ```graphql type Token @entity { @@ -48,24 +48,24 @@ type Token @entity { } ``` -In this schema, `TokenStatus` is a simple string with no specific, allowed values. +在此模式中,`TokenStatus`是一个简单的字符串,没有特定的、允许的值。 -#### Why is this a problem? +#### 为什么这是个问题? -- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. -- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. +- `TokenStatus`值没有限制,因此任何字符串都可能被意外分配。这使得很难确保只设置有效的状态,如`OriginalOwner`、`SecondOwner`或`ThirdOwner`。 +- 很容易出现拼写错误,例如`Orgnalowner`而不是`OriginalOwner`,从而使数据和潜在查询不可靠。 -### With Enums +### 带枚举 -Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used. +您可以为`TokenStatus`定义一个具有特定值的枚举,而不是分配自由格式的字符串:`OriginalOwner`、`SecondOwner`或`ThirdOwner`。使用枚举可确保只使用允许的值。 -Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. +枚举提供类型安全性,最大限度地减少拼写错误风险,并确保一致可靠的结果。 -## Defining Enums for NFT Marketplaces +## 定义NFT市场的枚举 -> Note: The following guide uses the CryptoCoven NFT smart contract. 
+> 注意:以下指南使用CryptoCoven NFT智能合约。 -To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -78,15 +78,15 @@ enum Marketplace { } ``` -## Using Enums for NFT Marketplaces +## NFT市场使用枚举 -Once defined, enums can be used throughout your Subgraph to categorize transactions or events. +Once defined, enums can be used throughout your subgraph to categorize transactions or events. -For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. +例如,在记录NFT销售时,您可以使用枚举指定交易中涉及的市场。 -### Implementing a Function for NFT Marketplaces +### 实现NFT市场功能 -Here's how you can implement a function to retrieve the marketplace name from the enum as a string: +以下是如何实现一个函数,以字符串形式从枚举中检索市场名称: ```ts export function getMarketplaceName(marketplace: Marketplace): string { @@ -104,29 +104,29 @@ export function getMarketplaceName(marketplace: Marketplace): string { } ``` -## Best Practices for Using Enums +## 使用枚举的最佳实践 -- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability. -- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth. -- **Documentation:** Add comments to enum to clarify their purpose and usage. +- **一致命名:**为枚举值使用清晰的、描述性的名称,以提高可读性。 +- **集中管理:** 将枚举保存在单个文件中以保持一致性。这使得枚举更容易更新,并确保它们是唯一的事实来源。 +- **文档:** 在枚举中添加注释,以阐明其目的和用法。 -## Using Enums in Queries +## 在查询中使用枚举 -Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values. 
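The enum-plus-lookup pattern above can also be exercised outside a subgraph. Below is a hedged, standalone TypeScript sketch of the same idea — the enum values here are hypothetical placeholders, not necessarily the ones in the CryptoCoven schema — showing how an exhaustive `switch` over a string enum gives the typo protection the guide describes:

```typescript
// Hypothetical mirror of a Marketplace enum; real schema values may differ.
enum Marketplace {
  OpenSea = "OpenSea",
  LooksRare = "LooksRare",
  Unknown = "Unknown",
}

// Same shape as the getMarketplaceName mapping: every case is an enum member,
// so a misspelled marketplace name is a compile-time error, not bad data.
function getMarketplaceName(marketplace: Marketplace): string {
  switch (marketplace) {
    case Marketplace.OpenSea:
      return "OpenSea";
    case Marketplace.LooksRare:
      return "LooksRare";
    default:
      return "Unknown";
  }
}

console.log(getMarketplaceName(Marketplace.OpenSea)); // → OpenSea
```

Because the parameter type is the enum itself, a call like `getMarketplaceName("Orgnalowner")` is rejected by the compiler — the same guarantee the schema-level enum provides at indexing time.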
+查询中的枚举有助于提高数据质量,并使结果更容易解释。它们充当过滤器和响应元素,确保一致性并减少市场取值中的错误。 -**Specifics** +**详情** -- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces. -- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate. +- **使用枚举进行筛选:**枚举提供清晰的筛选,使您能够自信地包含或排除特定的市场。 +- **响应中的枚举:**枚举保证只返回可识别的市场名称,使结果标准化和准确。 -### Sample Queries +### 示例查询 -#### Query 1: Account With The Highest NFT Marketplace Interactions +#### 查询1:NFT市场互动次数最多的账户 -This query does the following: +此查询执行以下操作: -- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity. -- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. +- 它找到了具有最高独特NFT市场互动的帐户,这对于分析跨市场活动非常有用。 +- 市场字段使用市场枚举,确保响应中一致且经过验证的市场值。 ```gql { @@ -143,9 +143,9 @@ This query does the following: } ``` -#### Returns +#### 返回 -This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: +此响应提供了帐户详细信息和具有枚举值的独特市场交互列表,以实现标准化的清晰度: ```gql { @@ -186,12 +186,12 @@ This response provides account details and a list of unique marketplace interact } ``` -#### Query 2: Most Active Marketplace for CryptoCoven transactions +#### 查询2:CryptoCoven交易最活跃的市场 -This query does the following: +此查询执行以下操作: -- It identifies the marketplace with the highest volume of CryptoCoven transactions. -- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data.
+- 它确定了CryptoCoven交易量最大的市场。 +- 它使用市场枚举来确保响应中只显示有效的市场类型,从而为您的数据增加可靠性和一致性。 ```gql { @@ -202,9 +202,9 @@ This query does the following: } ``` -#### Result 2 +#### 结果2 -The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: +预期的响应包括市场和相应的交易计数,使用枚举指示市场类型: ```gql { @@ -219,12 +219,12 @@ The expected response includes the marketplace and the corresponding transaction } ``` -#### Query 3: Marketplace Interactions with High Transaction Counts +#### 查询3:交易量高的市场互动 -This query does the following: +此查询执行以下操作: -- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. -- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. +- 它检索了交易量超过100的前四大市场,不包括“未知”市场。 +- 它使用枚举作为过滤器,以确保只包含有效的市场类型,从而提高准确性。 ```gql { @@ -240,9 +240,9 @@ This query does the following: } ``` -#### Result 3 +#### 结果3 -Expected output includes the marketplaces that meet the criteria, each represented by an enum value: +预期输出包括符合条件的市场,每个市场由一个枚举值表示: ```gql { @@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent } ``` -## Additional Resources +## 其他资源 -For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). +如需更多信息,请查看本指南的[代码库](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums)。 diff --git a/website/src/pages/zh/subgraphs/guides/grafting.mdx b/website/src/pages/zh/subgraphs/guides/grafting.mdx index d9abe0e70d2a..3eb1f737605a 100644 --- a/website/src/pages/zh/subgraphs/guides/grafting.mdx +++ b/website/src/pages/zh/subgraphs/guides/grafting.mdx @@ -1,56 +1,56 @@ --- -title: Replace a Contract and Keep its History With Grafting +title: 用嫁接替换合约并保持合约的历史 --- -In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. +在本指南中,您将学习如何通过嫁接现有的子图来构建和部署新的子图。 -## What is Grafting?
-Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. +嫁接重用现有子图中的数据,并从稍后的区块开始对其进行索引。这在开发过程中非常有用,可以快速克服映射中的简单错误,或者在现有子图失败后暂时使其重新工作。此外,当向子图添加一个从头索引需要很长时间的特性时,也可以使用它。 -The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: +嫁接子图可以使用与基础子图不完全相同、但与之兼容的GraphQL模式。它本身必须是一个有效的子图模式,但是可以通过以下方式偏离基础子图的模式: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- 添加或删除实体类型 +- 从实体类型中删除属性 +- 将可为空的属性添加到实体类型 +- 将不可为空的属性转换为可空的属性 +- 将值添加到枚举类型中 +- 添加或删除接口 +- 更改实现接口的实体类型 -For more information, you can check: +有关详情,请参阅: -- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) +- [嫁接](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. +在本教程中,我们将介绍一个基本用例。我们将用一个相同的合约(新的地址,但相同的代码)替换现有的合约。然后,将现有的子图嫁接到跟踪新合约的"基础"子图上。 -## Important Note on Grafting When Upgrading to the Network +## 升级到网络时嫁接的重要注意事项 -> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network +> **警告**:建议不要对发布到The Graph网络的子图使用嫁接 -### Why Is This Important?
+### 为什么这很重要? -Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. +嫁接是一个强大的功能,允许您将一个子图“嫁接”到另一个子图上,有效地将历史数据从现有子图转移到新版本。无法将子图从The Graph网络嫁接回subgraph Studio。 -### Best Practices +### 最佳实践 -**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. +**初始迁移**:当你第一次将子图部署到去中心化网络时,不要进行嫁接。确保子图稳定并按预期运行。 -**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**后续更新**:一旦你的子图在去中心化网络上上线并稳定,你可以在未来的版本中使用嫁接,使过渡更平滑,并保留历史数据。 -By adhering to these guidelines, you minimize risks and ensure a smoother migration process. +通过遵守这些准则,您可以将风险降至最低,并确保迁移过程更加顺利。 -## Building an Existing Subgraph +## 构建现有子图 -Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: +构建子图是The Graph的重要组成部分,在[此文](/subgraphs/quick-start/)进行更深入的描述。为了能够构建和部署本教程中使用的现有子图,提供了以下库: -- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) +- [子图示例存储库](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> 注意: 子图中使用的合约取自以下[Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit)。 -## Subgraph Manifest Definition +## 子图清单定义 -The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: +子图清单`subgraph.yaml`标识子图的数据源、感兴趣的触发器,以及应该响应这些触发器而运行的函数。下面是您将使用的子图清单示例: ```yaml specVersion: 1.3.0 @@ -79,13 +79,13 @@ dataSources: file: ./src/lock.ts ``` -- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` -- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. +- `Lock`数据源是我们在编译和部署合约时获得的abi和合约地址。 +- 网络应该对应于一个被查询的索引网络。因为我们运行在Sepolia测试网上,所以网络是`sepolia`。 +- `mapping`部分定义了感兴趣的触发器以及应该响应这些触发器而运行的函数。在这种情况下,我们正在监听`Withdrawal`事件,并在发出该事件时调用`handleWithdrawal`函数。 -## Grafting Manifest Definition +## 嫁接清单定义 -Grafting requires adding two new items to the original Subgraph manifest: +嫁接需要在原始子图清单中添加两个新项: ```yaml --- @@ -96,16 +96,16 @@ graft: block: 5956000 # block number ``` -- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. +- `features:`是所有使用的[功能名称](/developing/creating-a-subgraph/#experimental-features)的列表。 +- `graft:`是`base`子图和要嫁接到的区块的映射。`block`是开始索引的区块号。The Graph将把基础子图的数据复制到给定的区块并将其包括在内,然后从该区块开始继续索引新的子图。 -The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting +通过部署两个子图可以找到`base`和`block`值:一个用于基础索引,一个用于嫁接。 -## Deploying the Base Subgraph +## 部署基础子图 -1.
Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground +1. 转到[Subgraph Studio](https://thegraph.com/studio/),在Sepolia测试网上创建一个名为`graft-example`的子图。 +2. 按照存储库中`graft-example`文件夹中子图页面的 `AUTH & DEPLOY` 部分中的说明操作。 +3. 完成后,验证子图是否正确索引。在The Graph Playground中运行以下命令: ```graphql { @@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t } ``` -It returns something like this: +它返回的结果是这样的: ``` { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. +一旦验证了子图索引正常,就可以通过嫁接快速更新子图。 -## Deploying the Grafting Subgraph +## 部署嫁接子图 -The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. +用于嫁接替换的subgraph.yaml将包含一个新的合约地址。这可能发生在更新dapp、重新部署合约等时。 -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the Subgraph is indexing properly.
If you run the following command in The Graph Playground +1. 转到[Subgraph Studio](https://thegraph.com/studio/),在Sepolia测试网上创建一个名为`graft-replacement`的子图。 +2. 创建新清单。`graph-replacement` 的`subgraph.yaml`包含一个不同的合约地址,以及关于应如何嫁接的新信息。其中`block`是你关心的旧合约[发出的最后一个事件](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452)所在的区块号,`base`则是旧子图。`base`子图ID是原始`graph-example`子图的`Deployment ID`(部署ID),你可以在Subgraph Studio中找到它。 +3. 按照存储库中的`graft-replacement`文件夹中子图页面上的 `AUTH & DEPLOY` 部分的说明操作。 +4. 完成后,验证子图是否正确索引。在The Graph Playground中运行以下命令: ```graphql { @@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could } ``` -It should return the following: +它应该返回以下内容: ``` { @@ -185,18 +185,18 @@ It should return the following: } ``` -You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. +您可以看到,`graft-replacement`子图正在索引旧的`graph-example`数据以及来自新合约地址的新数据。原始合约发出了两个`Withdrawal`事件,[事件1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d)和[事件2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452)。新合约随后发出了一个`Withdrawal`事件,即[事件3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af)。两个先前索引的交易(事件1和2)和新交易(事件3)在`graft-replacement`子图中合并在一起。 -Congrats!
You have successfully grafted a Subgraph onto another Subgraph. +恭喜! 你成功地将一个子图嫁接到另一个子图上。 -## Additional Resources +## 其他资源 -If you want more experience with grafting, here are a few examples for popular contracts: +如果你想要更多的嫁接经验,这里有一些流行合约的例子: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) - [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), -To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results +要成为更专业的Graph专家,请考虑学习处理底层数据源更改的其他方法。像[Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) 这样的替代方案可以实现类似的结果。 -> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) +> 注意:这篇文章中的很多内容都来自之前发表的[Arweave文章](/subgraphs/cookbook/arweave/)。 diff --git a/website/src/pages/zh/subgraphs/guides/near.mdx b/website/src/pages/zh/subgraphs/guides/near.mdx index e78a69eb7fa2..05e59c328e6c 100644 --- a/website/src/pages/zh/subgraphs/guides/near.mdx +++ b/website/src/pages/zh/subgraphs/guides/near.mdx @@ -1,54 +1,54 @@ --- -title: Building Subgraphs on NEAR +title: 在 NEAR 上构建子图 --- -This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +本指南介绍了如何在[NEAR 区块链](https://docs.near.org/)上构建索引智能合约的子图。 -## What is NEAR? +## NEAR 是什么? -[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. 
+[NEAR](https://near.org/)是一个用于构建去中心化应用程序的智能合约平台。请访问[官方文档](https://docs.near.org/concepts/basics/protocol)以获取更多信息。 -## What are NEAR Subgraphs? +## NEAR 子图是什么? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. +The Graph 为开发人员提供了一种被称为子图的工具,利用这个工具,开发人员能够处理区块链事件,并通过 GraphQL API 提供结果数据。 [Graph 节点](https://github.com/graphprotocol/graph-node)现在能够处理 NEAR 事件,这意味着 NEAR 开发人员现在可以构建子图来索引他们的智能合约。 -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: +子图是基于事件的,这意味着子图可以侦听并处理链上事件。 NEAR 子图目前支持两种类型的处理程序: -- Block handlers: these are run on every new block -- Receipt handlers: run every time a message is executed at a specified account +- 区块处理器: 这些处理程序在每个新区块上运行。 +- 收据处理器: 每次在指定账户上一个消息被执行时运行。 -[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): +[来自NEAR文档](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): -> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. +> Receipt 是系统中唯一可操作的对象。 当我们在 NEAR 平台上谈论“处理交易”时,这最终意味着在某个时候“应用收据”。 -## Building a NEAR Subgraph +## 构建 NEAR 子图 -`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. +`@graphprotocol/graph-cli`是一个用于构建和部署子图的命令行工具。 -`@graphprotocol/graph-ts` is a library of Subgraph-specific types. +`@graphprotocol/graph-ts` 是子图特定类型的库。 -NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. 
+NEAR 子图开发需要`0.23.0`以上版本的`graph-cli`,以及 `0.23.0`以上版本的`graph-ts`。 -> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. +> 构建 NEAR 子图与构建索引以太坊的子图非常相似。 -There are three aspects of Subgraph definition: +子图定义包括三个方面: -**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** 子图清单,定义感兴趣的数据源以及如何处理它们。 NEAR 是一种新的`kind`数据源。 -**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** 一个模式文件,定义子图存储的数据以及如何通过 GraphQL 查询数据。NEAR 子图的要求已经在[现有的文档](/developing/creating-a-subgraph/#the-graphql-schema)中介绍了。 -**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript 映射:**将事件数据转换为模式中所定义实体的[AssemblyScript代码](/subgraphs/developing/creating/graph-ts/api/)。NEAR支持引入了NEAR特定的数据类型和新的JSON解析功能。 -During Subgraph development there are two key commands: +在子图开发过程中,有两个关键命令: ```bash -$ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +$ graph codegen # 从清单中标识的模式文件生成类型 +$ graph build # 从 AssemblyScript 文件生成 Web Assembly,并在 /build 文件夹中准备所有子图文件 ``` -### Subgraph Manifest Definition +### 子图清单定义 -The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers.
See below for an example Subgraph manifest for a NEAR Subgraph: +子图清单(`subgraph.yaml`)标识子图的数据源、感兴趣的触发器以及响应这些触发器而运行的函数。 以下是一个 NEAR 的子图清单的例子: ```yaml specVersion: 1.3.0 @@ -70,10 +70,10 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR Subgraphs introduce a new `kind` of data source (`near`) -- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` -- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. -- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. +- NEAR 子图引入了一种新的 `kind` 数据源(`near`)。 +- `network`应与托管Graph Node上的网络相对应。在Subgraph Studio上,NEAR的主网是`near-mainnet`,NEAR的测试网是`near-testnet`。 +- NEAR数据源引入了一个可选的`source.account`字段,这是一个与[NEAR帐户](https://docs.near.org/concepts/protocol/account-model)对应的人类可读ID。这可以是一个帐户或子帐户。 +- NEAR 数据源引入了一个替代的可选 `source.accounts` 字段,其中包含可选的后缀和前缀。至少必须指定前缀或后缀,它们将分别与以值列表开始或结束的任何账户匹配。下面的例子将匹配:`[app|good].*[morning.near|morning.testnet]`。如果只需要一个前缀或后缀列表,则可以省略其他字段。 ```yaml accounts: @@ -85,20 +85,20 @@ accounts: - morning.testnet ``` -NEAR data sources support two types of handlers: +NEAR 数据源支持两种类型的处理程序: -- `blockHandlers`: run on every new NEAR block. No `source.account` is required. -- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient.
Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). +- `blockHandlers`:在每个新的 NEAR 区块上运行。 不需要 `source.account`。 +- `receiptHandlers`:在以数据源的`source.account`为接收者的每个收据上运行。请注意,只处理完全匹配的数据([子帐户](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount)必须作为独立的数据源添加)。 -### Schema Definition +### 模式定义 -Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +模式定义描述了生成的子图数据库的结构以及实体之间的关系,这与原始数据源无关。[这里](/developing/creating-a-subgraph/#the-graphql-schema)有关于子图模式定义的更多细节。 -### AssemblyScript Mappings +### AssemblyScript 映射 -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). +处理事件的处理程序是用 [AssemblyScript](https://www.assemblyscript.org/) 编写的。 -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +NEAR索引将NEAR特定的数据类型引入[AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/)。 ```typescript @@ -160,51 +160,51 @@ class ReceiptWithOutcome { } ``` -These types are passed to block & receipt handlers: +这些类型被传递给区块 & 收据处理程序: -- Block handlers will receive a `Block` -- Receipt handlers will receive a `ReceiptWithOutcome` +- 块处理程序将收到 `Block` +- 收据处理程序将收到 `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. +否则,在映射执行期间,[AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/)的其余部分可供NEAR子图开发人员使用。 -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs.
A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. +这包括一个新的 JSON 解析函数—— NEAR 上的日志经常作为字符串化的 JSON 发出。作为[JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) 的一部分,可以使用一个新的 `json.fromString(...)`函数来允许开发人员轻松地处理这些日志。 -## Deploying a NEAR Subgraph +## 部署 NEAR 子图 -Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +构建子图后,就可以将其部署到 Graph节点以进行索引了。 NEAR 子图可以部署到任何 `>=v0.26.x` 版本的Graph节点(此版本尚未标记和发布)。 -Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: +Subgraph Studio和The Graph网络上的升级索引人,目前支持在测试版中对NEAR主网和测试网进行索引,网络名称如下: -- `near-mainnet` -- `near-testnet` +- `near-mainnet` +- `near-testnet` -More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +有关在Subgraph Studio上创建和部署子图的更多信息,请参阅[此处](/deploying/deploying-a-subgraph-to-studio/)。 -As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+作为一个快速入门——第一步是“创建”你的子图——这只需要完成一次。在Subgraph Studio上,这可以从[您的Dashboard](https://thegraph.com/studio/)完成:“创建子图”。 -Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: +创建子图后,您可以使用 `graph deploy` CLI 命令部署子图: ```sh $ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) $ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -The node configuration will depend on where the Subgraph is being deployed. +节点配置将取决于子图的部署位置。 -### Subgraph Studio +### Subgraph Studio ```sh graph auth graph deploy ``` -### Local Graph Node (based on default configuration) +### 本地Graph节点(基于默认配置) ```sh graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: +部署子图后,它将由 Graph节点索引。 您可以通过查询子图本身来检查其进度: ```graphql { @@ -216,45 +216,45 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. You can } ```
+NEAR子图的GraphQL端点由模式定义和现有的API接口确定。有关更多信息,请访问[GraphQL API文档](/subgraphs/querying/graphql-api/) 。 -## Example Subgraphs +## 示例子图 -Here are some example Subgraphs for reference: +以下是一些示例子图供参考: -[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) +[NEAR块](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) -[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) +[NEAR收据](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) -## FAQ +## 常见问题 -### How does the beta work? +### 测试版是如何工作的? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! +NEAR 支持处于测试阶段,这意味着随着我们继续致力于改进集成,API 可能会发生变化。 请发送电子邮件至 near@thegraph.com,以便我们支持您构建 NEAR 子图,并让您了解最新进展! -### Can a Subgraph index both NEAR and EVM chains? +### 子图可以同时索引 NEAR 和 EVM 链吗? -No, a Subgraph can only support data sources from one chain/network. +不,子图只能支持来自一个链/网络的数据源。 -### Can Subgraphs react to more specific triggers? +### 子图可以对更具体的触发器做出反应吗? -Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. +目前,仅支持 Block 和 Receipt 触发器。 我们正在研究针对指定帐户的函数调用的触发器。 一旦 NEAR 拥有原生事件支持,我们也对支持事件触发器感兴趣。 -### Will receipt handlers trigger for accounts and their sub-accounts? +### 收据处理程序会针对账户及其子账户触发吗? -If an `account` is specified, that will only match the exact account name.
It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: +如果指定了`account`,那么它将只与确切的账户名匹配。可以通过指定`accounts`字段来匹配子账户,并指定`suffixes`和`prefixes`来匹配账户和子账户,例如,下面将匹配所有 `mintbase1.near` 子账户: ```yaml accounts: @@ -262,22 +262,22 @@ accounts: - mintbase1.near ``` -### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? +### NEAR 子图可以在映射期间对 NEAR 帐户进行视图调用吗? -This is not supported. We are evaluating whether this functionality is required for indexing. +这是不支持的。 我们正在评估索引是否需要此功能。 -### Can I use data source templates in my NEAR Subgraph? +### 我可以在 NEAR 子图中使用数据源模板吗? -This is not currently supported. We are evaluating whether this functionality is required for indexing. +目前不支持此功能。 我们正在评估索引是否需要此功能。 -### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? +### 以太坊子图支持“待定”和“当前”版本,如何部署 NEAR 子图的“待定”版本? -Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. +NEAR 子图尚不支持挂起的功能。 在此期间,您可以将新版本部署到不同的“命名”子图,然后当它与链头同步时,您可以重新部署到您的主“命名”子图,它将使用相同的底层部署 ID,所以主子图将立即同步。 -### My question hasn't been answered, where can I get more help building NEAR Subgraphs? +### 我的问题尚未得到解答,在哪里可以获得更多构建 NEAR 子图的帮助? -If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+如果这是一个关于子图开发的一般性问题,那么在 [开发者文档](/subgraphs/quick-start/)的其余部分中会有更多的信息。否则,请加入[The Graph 协议的Discord](https://discord.gg/graphprotocol) ,并在 #near 频道询问,或发邮件到 near@thegraph.com。 -## References +## 参考 -- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) +- [NEAR开发者文档](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/zh/subgraphs/guides/polymarket.mdx b/website/src/pages/zh/subgraphs/guides/polymarket.mdx index 74efe387b0d7..3e7951b57a4a 100644 --- a/website/src/pages/zh/subgraphs/guides/polymarket.mdx +++ b/website/src/pages/zh/subgraphs/guides/polymarket.mdx @@ -1,23 +1,23 @@ --- -title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph -sidebarTitle: Query Polymarket Data +title: 使用The Graph上的子图从Polymarket查询区块链数据 +sidebarTitle: 查询Polymarket数据 --- -Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +通过The Graph网络上的子图使用GraphQL查询Polymarket的链上数据。子图是由The Graph支持的去中心化API,The Graph是一种用于索引和查询区块链数据的协议。 -## Polymarket Subgraph on Graph Explorer +## Graph Explorer上的Polymarket子图 -You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +您可以在[The Graph Explorer的Polymarket子图页面](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)上看到一个交互式查询演练场,在那里您可以测试任何查询。 ![Polymarket Playground](/img/Polymarket-playground.png) -## How to use the Visual Query Editor +## 如何使用可视化查询编辑器 -The visual query editor helps you test sample queries from your Subgraph. +可视化查询编辑器可帮助您测试子图中的示例查询。 -You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want.
+您可以使用GraphiQL Explorer通过单击所需的字段来编写GraphQL查询。 -### Example Query: Get the top 5 highest payouts from Polymarket +### 示例查询:从Polymarket获取前5位最高支出 ``` { @@ -30,7 +30,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on } ``` -### Example output +### 示例输出 ``` { @@ -71,41 +71,41 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on } ``` -## Polymarket's GraphQL Schema +## Polymarket的GraphQL模式 -The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +此子图的模式[在Polymarket的GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql)中定义。 -### Polymarket Subgraph Endpoint +### Polymarket子图端点 https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp -The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). +Polymarket子图端点在[Graph Explorer](https://thegraph.com/explorer)上可用。 ![Polymarket Endpoint](/img/Polymarket-endpoint.png) -## How to Get your own API Key +## 如何获得您自己的API密钥 -1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet -2. Go to https://thegraph.com/studio/apikeys/ to create an API key +1. 进入[https://thegraph.com/studio/](https://thegraph.com/studio/) 并连接钱包 +2. 进入https://thegraph.com/studio/apikeys/创建API密钥 -You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +您可以在[Graph Explorer](https://thegraph.com/explorer)中的任何子图上使用此API密钥,而且它不仅限于Polymarket。 -100k queries per month are free which is perfect for your side project!
-## Additional Polymarket Subgraphs +## 其他Polymarket子图 - [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one) - [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) - [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one) - [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one) -## How to Query with the API +## 如何使用API进行查询 -You can pass any GraphQL query to the Polymarket endpoint and receive data in json format. +您可以将任何GraphQL查询传递给Polymarket端点,并接收json格式的数据。 -This following code example will return the exact same output as above. +以下代码示例将返回与上述完全相同的输出。 -### Sample Code from node.js +### node.js中的示例代码 ``` const axios = require('axios'); @@ -141,8 +141,8 @@ axios(graphQLRequest) }); ``` -### Additional resources +### 其他资源 -For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). +有关从您的子图查询数据的更多信息,请阅读[此处](/subgraphs/querying/introduction/)。 -To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
+要探索优化和自定义子图以获得更好性能的所有方法,请阅读有关[创建子图](/developing/creating-a-subgraph/)的更多信息。 diff --git a/website/src/pages/zh/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/zh/subgraphs/guides/secure-api-keys-nextjs.mdx index e17e594408ff..209998bfad82 100644 --- a/website/src/pages/zh/subgraphs/guides/secure-api-keys-nextjs.mdx +++ b/website/src/pages/zh/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -2,47 +2,47 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## 概述 -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +我们可以使用[Next.js服务器组件](https://nextjs.org/docs/app/building-your-application/rendering/server-components) 来正确地保护API密钥,使其免于暴露在我们的dapp前端。为了进一步提高我们的API密钥安全性,我们还可以将我们的API密钥[限制在Subgraph Studio中的某些子图或域](/cookbook/upgrading-a-subgraph/#securing-your-api-key)。 -In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. +在这本指南中,我们将介绍如何创建一个Next.js服务器组件,该组件可以查询子图,同时还可以从前端隐藏API密钥。 -### Caveats +### 警告 -- Next.js server components do not protect API keys from being drained using denial of service attacks. -- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. -- Next.js server components introduce centralization risks as the server can go down.
+- Next.js服务器组件不能保护API密钥免于被拒绝服务攻击耗尽。 +- The Graph网络的网关具有拒绝服务检测和适当缓解策略,但使用服务器组件可能会削弱这些保护。 +- Next.js服务器组件引入了中心化风险,因为服务器可能会宕机。 -### Why It's Needed +### 为什么需要它 -In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. +在标准React应用程序中,前端代码中包含的API密钥可能会暴露给客户端,从而带来安全风险。虽然通常使用`.env`文件,但它们并不能完全保护密钥,因为React的代码是在客户端执行的,在头中暴露了API密钥。Next.js服务器组件通过在服务器端处理敏感操作来解决这个问题。 -### Using client-side rendering to query a Subgraph +### 使用客户端渲染查询子图 ![Client-side rendering](/img/api-key-client-side-rendering.png) -### Prerequisites +### 先决条件 -- An API key from [Subgraph Studio](https://thegraph.com/studio) -- Basic knowledge of Next.js and React. -- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). +- 来自[Subgraph Studio](https://thegraph.com/studio)的API密钥。 +- Next.js和React的基本知识。 +- 一个使用[App Router](https://nextjs.org/docs/app)的现有Next.js项目。 -## Step-by-Step Cookbook +## 循序渐进的指南 -### Step 1: Set Up Environment Variables +### 步骤1:设置环境变量 -1. In our Next.js project root, create a `.env.local` file. -2. Add our API key: `API_KEY=`. +1. 在Next.js项目根目录中,创建一个`.env.local`文件。 +2. 添加API密钥:`API_KEY=`。 -### Step 2: Create a Server Component +### 步骤2:创建服务器组件 -1. In our `components` directory, create a new file, `ServerComponent.js`. -2. Use the provided example code to set up the server component. +1. 在`components`目录中,创建一个新文件`ServerComponent.js`。 +2.
使用提供的示例代码设置服务器组件。 -### Step 3: Implement Server-Side API Request +### 步骤3:实现服务器端API请求 -In `ServerComponent.js`, add the following code: +在`ServerComponent.js`中,添加以下代码: ```javascript const API_KEY = process.env.API_KEY @@ -95,10 +95,10 @@ export default async function ServerComponent() { } ``` -### Step 4: Use the Server Component +### 步骤4:使用服务器组件 -1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. -2. Render the component: +1. 在我们的页面文件(例如`pages/index.js`)中,导入`ServerComponent`。 +2. 渲染组件: ```javascript import ServerComponent from './components/ServerComponent' @@ -112,12 +112,12 @@ export default function Home() { } ``` -### Step 5: Run and Test Our Dapp +### 步骤5:运行并测试我们的Dapp -Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. +使用`npm run dev`启动我们的Next.js应用程序。验证服务器组件是否在不公开API密钥的情况下获取数据。 ![Server-side rendering](/img/api-key-server-side-rendering.png) -### Conclusion +### 结论 -By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. +通过使用Next.js服务器组件,我们有效地对客户端隐藏了API密钥,增强了应用程序的安全性。这种方法确保敏感操作在服务器端处理,远离潜在的客户端漏洞。最后,请务必探索[其他API密钥安全措施](/cookbook/upgrading-a-subgraph/#securing-your-api-key),以进一步提高API密钥的安全性。
diff --git a/website/src/pages/zh/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/zh/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..7fbb93375ad8 --- /dev/null +++ b/website/src/pages/zh/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: 使用子图合成聚合数据 +sidebarTitle: 用多个子图构建可组合子图 +--- + +利用子图合成来缩短开发时间。创建带有基本数据的基础子图,然后在其上构建附加子图。 + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## 介绍 + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### 合成的好处 + +子图合成是一个强大的功能,可以实现扩展: + +- 重新使用、混合和合并现有数据 +- 简化开发和查询 +- 使用多个数据源 (最多五个源子图) +- 加快子图同步速度 +- 处理错误并优化重新同步 + +## 架构概述 + +此示例的设置涉及两个子图: + +1. **源子图**:将事件数据作为实体进行跟踪。 +2.
**依赖子图**:使用源子图作为数据源。 +你可以在 `source` 和 `dependent` 目录中找到它们。 +- **源子图** 是一个基本的事件跟踪子图,记录相关合约发布的事件。 +- **依赖子图** 将源子图作为数据源,将源实体作为触发器。 +源子图是标准子图,依赖子图则使用子图合成功能。 +## 先决条件 +### Source Subgraphs +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly +### Composed Subgraphs +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs +## 开始 +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+ ### 详情 - 为了保持示例简单,所有源子图只使用区块处理器。 然而,在实际环境中,每个源子图将使用不同的智能合约提供的数据。 - 下面的例子显示了如何导入和扩展另一个子图的模式以增强其功能。 - 每个源子图由一个特定实体优化。 - 列出的所有命令都安装了必要的依赖,基于GraphQL 模式生成代码。 构建子图并将其部署到您的本地Graph节点实例。 ### 第 1 步:部署源子图区块时间 第一个源子图计算每个区块的区块时间。 - 它从其它子图中导入模式并添加一个带有`timestamp`字段的`block`实体,这代表了每个区块被开采的时间。 - 它监听与时间相关的区块链事件(例如区块时间戳),并处理此数据以相应地更新子图的实体。 要在本地部署此子图,请运行以下命令: ```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local ``` ### 第 2 步:部署源子图区块成本 第二个源子图索引每个区块的成本。 #### 关键函数 - 从其它子图中导入模式并添加一个含有成本相关字段的`block`实体。 - 监听与成本相关的区块链事件(例如燃气费、交易成本),并处理此数据以相应更新子图的实体。 要在本地部署此子图,请运行以下命令。 ### 第 3 步:定义源子图中区块大小 第三个源子图索引每个块的大小。要在本地部署此子图,请运行与上面相同的命令。 #### 关键函数 - 从其它子图中导入现有的模式并添加一个 `block` 实体,其中包含一个 `size` 字段,代表每个区块的大小。 - 监听与区块大小相关的区块链事件(例如存储或体积),并处理此数据以相应地更新子图的实体。 ### 第 4 步:合并进区块统计子图 This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. > 注意: > > - 对源子图的任何更改都可能生成一个新的部署ID。 > - 请务必更新子图数据源地址中的部署 ID 来利用最新的更改。 > - 所有源子图均应在部署合成子图之前部署。 #### 关键函数 - 它提供了一个综合数据模型,其中包括所有相关的区块计量。 - It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. ## 要点前瞻 - 这个强大的工具将扩展子图的开发,并允许您合并多个子图。 - The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. - 这个功能解锁了可扩展性,简化了开发和维护效率。 ## 其他资源 - Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- 若要将高级功能添加到您的子图,请参阅[子图高级功能](/developing/creating/advanced/)。 +- 要了解更多关于聚合的信息,请查看 [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations)。 diff --git a/website/src/pages/zh/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/zh/subgraphs/guides/subgraph-debug-forking.mdx index 91aa7484d2ec..b106c9eeacc1 100644 --- a/website/src/pages/zh/subgraphs/guides/subgraph-debug-forking.mdx +++ b/website/src/pages/zh/subgraphs/guides/subgraph-debug-forking.mdx @@ -1,26 +1,26 @@ --- -title: Quick and Easy Subgraph Debugging Using Forks +title: 使用分叉快速轻松地调试子图 --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! +与许多处理大量数据的系统一样,The Graph的索引人(Graph Nodes)可能需要相当长的时间才能将您的子图与目标区块链同步。以调试为目的的快速更改和索引所需的长等待时间之间的差异,是极其适得其反的,我们对此非常清楚。这就是为什么我们引入了由[LimeChain](https://limechain.tech/)开发的**子图分叉**,在本文中,我将向您展示如何使用此功能来大大加快子图调试! -## Ok, what is it? +## 好的,那是什么? -**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). +**子图分叉**是从另一个子图的存储(通常是远程存储)中惰性获取实体的过程。 -In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. +在调试时,**子图分叉**允许您在区块X处调试失败的子图,而无需等待同步到区块X。 -## What?! How? +## 什么?! 如何处理? -When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_.
That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +当您将子图部署到远程Graph节点进行索引,而它在区块X处失败时,好消息是Graph节点仍将使用其存储提供GraphQL查询服务,该存储已同步到区块X。这太棒了!这意味着我们可以利用这个“最新”存储来修复索引区块X时出现的错误。 -In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. +简而言之,我们将从远程 Graph 节点 分叉失败的子图,保证子图索引更新至区块X的数据, 以便基于更新至区块 X 的数据在本地部署的子图进行调试,以反映索引数据的最新状态。 -## Please, show me some code! +## 请给我看一些代码! -To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +为了专注于子图调试,让我们保持简单,并与索引 Ethereum Gravity 智能合约的 [示例子图](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) 一起运行。 -Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: +以下是索引 `Gravatar` 定义的处理程序,没有任何错误: ```tsx export function handleNewGravatar(event: NewGravatar): void { @@ -44,43 +44,43 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +糟糕,不幸的是,当我将完美的子图部署到[Subgraph Studio](https://thegraph.com/studio/)时,它会报“未找到 Gravatar!” 的错误。 -The usual way to attempt a fix is: +尝试修复的常用方法是: -1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). -2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). -3. Wait for it to sync-up. -4. If it breaks again go back to 1, otherwise: Hooray! +1. 在映射源中进行更改,你认为这将解决问题(但我知道它不会)。 +2.
将子图重新部署到[Subgraph Studio](https://thegraph.com/studio/)(或另一个远程Graph节点)。 +3. 等待同步。 +4. 如果它再次中断,则返回第1步,否则:搞定! -It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ +对于一个普通的调试过程来说很常见,但是有一个步骤会严重减缓这个过程:_3。 等待同步。_ -Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: +使用 **子图分叉** 我们可以从根本上解决这个问题。 如下: -0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. -1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. -3. If it breaks again, go back to 1, otherwise: Hooray! +0. 使用**适当的 fork-base** 设置启动本地 Graph 节点。 +1. 按照你认为可以解决问题的方法,在映射源中进行更改。 +2. 部署到本地 Graph 节点,**分叉失败的子图**并**从有问题的区块开始**。 +3. 如果它再次中断,则返回第1步,否则:搞定! -Now, you may have 2 questions: +现在,你可能有 2 个问题: -1. fork-base what??? -2. Forking who?! +1. `fork-base`是什么??? +2. 分叉什么?! -And I answer: +回答如下: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. -2. Forking is easy, no need to sweat: +1. `fork-base` 是“基础”URL,例如将 _子图 id_ 添加到结果 URL (`/`),就是一个合法的子图GraphQL查询端点。 +2. 分叉容易,不要紧张: ```bash -$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 +$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +另外,不要忘记将子图中的 `dataSources.source.startBlock` 字段设置为有问题的区块编号,这样您就可以跳过索引不必要的区块并利用分叉! -So, here is what I do: +所以,我是这么做的: -1.
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. 我启动一个本地Graph节点,([这里是如何做到的](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) 将`fork-base`选项设为:`https://api.thegraph.com/subgraphs/id/`,因为我将从[Subgraph Studio](https://thegraph.com/studio/)分叉子图,即之前部署有问题的子图。 ``` $ cargo run -p graph-node --release -- \ @@ -90,12 +90,12 @@ $ cargo run -p graph-node --release -- \ --fork-base https://api.thegraph.com/subgraphs/id/ ``` -2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +2. 经过仔细检查,我注意到在我的两个处理程序中索引 `Gravatar` 时使用的 `id` 表示不匹配。 `handleNewGravatar` 将其转换为十六进制 (`event.params.id.toHex()`),而 `handleUpdatedGravatar` 使用 int32格式 (`event.params.id.toI32()`),这会导致 `handleUpdatedGravatar` 出现“未找到 Gravatar!”的错误。 于是我将两个处理程序中的 `id` 都转换为十六进制。 +3. 更改后,我将子图部署到本地 Graph 节点,**分叉失败的子图**并将 `subgraph.yaml` 中的`dataSources.source.startBlock`设置为`6190343`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` -4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after!
(no potatoes tho) +4. 我检查了本地 Graph 节点生成的日志,万岁!一切正常。 +5. 我将没有问题的子图部署到远程 Graph 节点上,从此过上幸福的生活! (不担心缺衣少粮) diff --git a/website/src/pages/zh/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/zh/subgraphs/guides/subgraph-uncrashable.mdx index a08e2a7ad8c9..92de3c3e985c 100644 --- a/website/src/pages/zh/subgraphs/guides/subgraph-uncrashable.mdx +++ b/website/src/pages/zh/subgraphs/guides/subgraph-uncrashable.mdx @@ -1,29 +1,29 @@ --- -title: Safe Subgraph Code Generator +title: 安全子图代码生成器 --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) 是一个代码生成工具,从项目的 Graphql 模式生成一组辅助函数。确保与子图中实体的所有交互都是完全安全和一致的。 -## Why integrate with Subgraph Uncrashable? +## 为什么要整合子图使其不崩溃? -- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. +- **连续正常运行时间**。处理不当的实体可能会导致子图崩溃,这可能会破坏依赖于 The Graph 的项目。设置 helper 函数,使您的子图“不可崩溃”,并确保业务连续性。 -- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **绝对安全**。在子图开发中常见的问题是加载未定义的实体,不设置或初始化实体的所有值,以及加载和保存实体的竞态条件。确保与实体的所有交互都是完全原子的。 -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. 
+- **用户可配置**。设置默认值,并配置适合各个项目需要的安全检查级别。警告日志被记录下来,表明哪里存在子图逻辑的缺陷,以帮助修补这个问题,从而确保数据的准确性。 -**Key Features** +**主要特征** -- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- 代码生成工具可以容纳**所有**子图类型,并且可供用户配置,以便为值设置合理的默认值。代码生成将使用此配置生成符合用户规范的辅助函数。 -- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. +- 该框架还包括一种方法(通过配置文件)为实体变量组创建自定义但安全的 setter 函数。这样,用户就不可能加载/使用过时的图实体,也不可能忘记保存或设置函数所需的变量。 -- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. +- 警告日志被记录为指示子图逻辑漏洞的日志,以帮助修补问题,确保数据准确性。 -Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. +使用 Graph CLI codegen 命令,Subgraph Uncrashable 可以作为一个可选标志运行。 ```sh graph codegen -u [options] [] ``` -Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
+访问[子图不可崩溃的文档](https://float-capital.github.io/float-subgraph-uncrashable/docs/)或观看此[视频教程](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial)了解更多信息,并开始开发更安全的子图。 diff --git a/website/src/pages/zh/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/zh/subgraphs/guides/transfer-to-the-graph.mdx index a62072c48373..feacd1168036 100644 --- a/website/src/pages/zh/subgraphs/guides/transfer-to-the-graph.mdx +++ b/website/src/pages/zh/subgraphs/guides/transfer-to-the-graph.mdx @@ -1,104 +1,104 @@ --- -title: Transfer to The Graph +title: 传输到The Graph --- -Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +快速将您的子图从任何平台升级到[The Graph的去中心化网络](https://thegraph.com/networks/)。 -## Benefits of Switching to The Graph +## 切换到The Graph的好处 -- Use the same Subgraph that your apps already use with zero-downtime migration. -- Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. +- 通过零停机迁移,继续使用您的应用程序已经在使用的同一个子图。 +- 通过100多个索引人支持的全球网络提高可靠性。 +- 通过全天候待命的工程团队,获得对子图的闪电般快速的支持。 -## Upgrade Your Subgraph to The Graph in 3 Easy Steps +## 通过3个简单步骤将子图升级到The Graph -1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) -2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) -3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) +1. [设置您的Studio环境](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [将子图部署到Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [发布到The Graph网络](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) -## 1.
Set Up Your Studio Environment +## 1.设置你的Studio环境 -### Create a Subgraph in Subgraph Studio +### 在Subgraph Studio中创建子图 -- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". +- 进入[Subgraph Studio](https://thegraph.com/studio/)并连接你的钱包。 +- 点击“创建子图”。建议以首字母大写的形式为子图命名:“Subgraph Name Chain Name”。 -> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. +> 注意:发布后,子图名称将是可编辑的,但每次修改都需要进行链上操作,因此请正确命名。 -### Install the Graph CLI⁠ +### 安装Graph CLI⁠ -You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +你必须安装[Node.js](https://nodejs.org/)和你选择的包管理器 (`npm` 或 `pnpm`) 才能使用Graph CLI。检查[最新的](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true)CLI版本。 -On your local machine, run the following command: +在本地计算机上,运行以下命令: -Using [npm](https://www.npmjs.com/): +使用[npm](https://www.npmjs.com/): ```sh npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a Subgraph in Studio using the CLI: +使用以下命令通过CLI在Studio中创建子图: ```sh graph init --product subgraph-studio ``` -### Authenticate Your Subgraph +### 验证你的子图 -In The Graph CLI, use the auth command seen in Subgraph Studio: +在The Graph CLI中,使用Subgraph Studio中显示的auth命令: ```sh graph auth ``` -## 2. Deploy Your Subgraph to Studio +## 2. 将子图部署到Subgraph Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph.
+如果你有源代码,你可以很容易地将其部署到Studio。如果你没有源代码,这里有一个快速部署子图的方法。 -In The Graph CLI, run the following command: +在 The Graph CLI中,运行以下命令: ```sh graph deploy --ipfs-hash ``` -> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **注意:**每个子图都有一个IPFS哈希(部署ID),看起来像这样:“Qmasdfad…”。要部署,只需使用此**IPFS哈希**。系统将提示您输入版本(例如v0.0.1)。 -## 3. Publish Your Subgraph to The Graph Network +## 3. 将你的子图发布到 The Graph的去中心化网络 ![publish button](/img/publish-sub-transfer.png) -### Query Your Subgraph +### 查询子图 -> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> 为了吸引大约3个索引人查询你的子图,建议至少策展3000 GRT。要了解更多关于策展的信息,请查看The Graph上的[策展](/resources/roles/curating/)。 -You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +您可以通过将GraphQL查询发送到子图的查询URL端点(位于Subgraph Studio中其Explorer页面的顶部)来开始[查询](/subgraphs/querying/introduction/)任何子图。 -#### Example +#### 示例 -[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +由Messari构建的[CryptoPunks以太坊子图](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK): ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this Subgraph is: +此子图的查询URL为: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK ``` -Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+现在,您只需填写**您自己的API密钥**即可开始向该端点发送GraphQL查询。 -### Getting your own API Key +### 获取您自己的API密钥 -You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page: +您可以在Subgraph Studio页面顶部的“API Keys”菜单下创建API密钥: -![API keys](/img/Api-keys-screenshot.png) +![API密钥](/img/Api-keys-screenshot.png) -### Monitor Subgraph Status +### 监控子图状态 -Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +升级后,您可以在[Subgraph Studio](https://thegraph.com/studio/) 中访问和管理子图,并在[The Graph Explorer](https://thegraph.com/networks/)中浏览所有子图。 -### Additional Resources +### 其他资源 -- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). +- 要快速创建和发布新的子图,请查看[快速入门](/subgraphs/quick-start/)。 +- 要探索优化和自定义子图以获得更好性能的所有方法,请在此处阅读有关[创建子图](/developing/creating-a-subgraph/)的更多信息。 diff --git a/website/src/pages/zh/subgraphs/querying/_meta-titles.json b/website/src/pages/zh/subgraphs/querying/_meta-titles.json index a30daaefc9d0..fe667a36597c 100644 --- a/website/src/pages/zh/subgraphs/querying/_meta-titles.json +++ b/website/src/pages/zh/subgraphs/querying/_meta-titles.json @@ -1,3 +1,3 @@ { - "graph-client": "Graph Client" + "graph-client": "Graph 客户端" } diff --git a/website/src/pages/zh/subgraphs/querying/best-practices.mdx b/website/src/pages/zh/subgraphs/querying/best-practices.mdx index ead15d8026eb..b0a4e0d93912 100644 --- a/website/src/pages/zh/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/zh/subgraphs/querying/best-practices.mdx @@ -2,19 +2,19 @@ title: 查询最佳实践 --- -The Graph provides a decentralized way to query data from blockchains.
Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. +The Graph提供了一种去中心化的方式来查询区块链中的数据。它的数据是通过GraphQL API公开的,这使得使用GraphQL语言进行查询更加容易。 -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +学习基本的 GraphQL 语言规则和最佳实践,以优化您的子图。 --- ## 查询GraphQL API -### The Anatomy of a GraphQL Query +### GraphQL查询的剖析 与REST API不同,GraphQL API构建在定义可以执行哪些查询的模式之上。 -For example, a query to get a token using the `token` query will look as follows: +例如,使用`token`查询获取代币的查询如下所示: ```graphql query GetToken($id: ID!) { @@ -25,7 +25,7 @@ query GetToken($id: ID!) { } ``` -which will return the following predictable JSON response (_when passing the proper `$id` variable value_): +它将返回以下可预测的 JSON 响应(_当传递适当的 `$id` 变量值时_): ```json { @@ -36,9 +36,9 @@ which will return the following predictable JSON response (_when passing the pro } ``` -GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/). +GraphQL 查询使用基于[规范](https://spec.graphql.org/)定义的 GraphQL 语言。 -The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders): +上述`GetToken`查询由多个语言部分组成(下面用`[...]`占位符替换): ```graphql query [operationName]([variableName]: [variableType]) { @@ -50,33 +50,33 @@ query [operationName]([variableName]: [variableType]) { } ``` -## Rules for Writing GraphQL Queries +## 编写 GraphQL 查询的规则 -- Each `queryName` must only be used once per operation. -- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`) -- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
+- 每个操作只能使用一次`queryName`。 +- 每个`字段`在选择中只能使用一次(我们不能在 `token` 下查询`id`两次)。 +- 有些`字段`或查询(如`tokens`)返回需要选择子字段的复杂类型。 在预期需要选择时不提供选择(或在不需要时提供选择,例如在`id`上)将引发错误。 要了解字段类型,请参阅[Graph Explorer](/subgraphs/explorer/)。 - 分配给参数的任何变量都必须匹配其类型。 - 在给定的变量列表中,每个变量必须是唯一的。 - 必须使用所有已定义的变量。 -> Note: Failing to follow these rules will result in an error from The Graph API. +> 注意:如果不遵循这些规则,The Graph API 将返回错误。 -For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/). +有关包含代码示例的完整规则列表,请查看[GraphQL 验证指南](/resources/migration-guides/graphql-validations-migration-guide/)。 ### 向 GraphQL API 发送查询 -GraphQL is a language and set of conventions that transport over HTTP. +GraphQL 是一种语言,也是一组通过 HTTP 传输的约定。 -It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). +这意味着您可以使用标准的 `fetch`(原生或通过 `@whatwg-node/fetch` 或 `isomorphic-fetch`)查询一个 GraphQL API。 -However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: +然而,正如[“从应用程序查询”](/subgraphs/querying/from-an-application/)中提到的那样,建议使用 `graph-client`,它支持以下独特功能: - 跨链子图处理: 在一个查询中从多个子图进行查询 -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- [自动区块跟踪](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [自动分页](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - 完全类型化的结果 -Here's how to query The Graph with `graph-client`: +以下是如何使用 `graph-client` 查询The Graph: ```tsx import { execute } from '../.graphclient' @@ -100,15 +100,15 @@ async function main() { main() ``` -More GraphQL client
alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/). +更多GraphQL客户端替代方案在[“从应用程序查询”](/subgraphs/querying/from-an-application/)中介绍。 --- -## Best Practices +## 最佳实践 ### 始终编写静态查询 -A common (bad) practice is to dynamically build query strings as follows: +一种常见的(不好的)做法是动态构建查询字符串,如下所示: ```tsx const id = params.id @@ -124,14 +124,14 @@ query GetToken { // Execute query... ``` -While the above snippet produces a valid GraphQL query, **it has many drawbacks**: +虽然上面的代码片段产生了一个有效的 GraphQL 查询,但**它有许多缺点**: -- it makes it **harder to understand** the query as a whole -- developers are **responsible for safely sanitizing the string interpolation** -- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side** -- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools) +- 它使整个查询**更难理解** +- 开发者**负责安全清理字符串内插值** +- 不将变量的值作为请求参数的一部分发送,**妨碍了服务器端可能的缓存** +- 它**阻止工具静态分析查询**(例如:Linter 或类型生成工具) -For this reason, it is recommended to always write queries as static strings: +因此,建议始终将查询写为静态字符串: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -153,18 +153,18 @@ const result = await execute(query, { }) ``` -Doing so brings **many advantages**: +这样做会带来**许多优点**: -- **Easy to read and maintain** queries -- The GraphQL **server handles variables sanitization** -- **Variables can be cached** at server-level -- **Queries can be statically analyzed by tools** (more on this in the following sections) +- 查询**易于阅读和维护** +- GraphQL **服务器处理变量净化** +- **变量可以在服务器级别缓存** +- **查询可以通过工具进行静态分析**(以下章节将详细介绍) -### How to include fields conditionally in static queries +### 如何在静态查询中有条件地包含字段 -You might want to include the `owner` field only on a particular condition.
+您可能只想在特定条件下包含 `owner` 字段。 -For this, you can leverage the `@include(if:...)` directive as follows: +为此,你可以按如下方式使用`@include(if:...)`指令: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -187,18 +187,18 @@ const result = await execute(query, { }) ``` -> Note: The opposite directive is `@skip(if: ...)`. +> 注意:相反的指令是 `@skip(if: ...)`。 -### Ask for what you want +### 问你所想 -GraphQL became famous for its "Ask for what you want" tagline. +GraphQL以其“问你所想”的口号而闻名。 -For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. +因此,在GraphQL中,必须单独列出字段才能获取它们,无法一次性获取所有可用字段。 - 在查询GraphQL API时,请始终考虑只查询实际使用的字段。 -- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- 请确保查询仅获取您实际需要数量的实体。 默认情况下,查询将在集合中获取100个实体,这通常比实际使用的要多得多,例如用于向用户显示的情况。这不仅适用于查询中的顶层集合,更适用于实体的嵌套集合。 -For example, in the following query: +例如,在以下查询中: ```graphql query listTokens { @@ -213,15 +213,15 @@ query listTokens { } ``` -The response could contain 100 transactions for each of the 100 tokens. +该响应可能为100个代币中的每一个包含100笔交易。 -If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field. +如果应用程序只需要10笔交易,查询应在交易字段中明确设置 `first: 10`。 -### Use a single query to request multiple records +### 使用单个查询请求多个记录 -By default, subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +默认情况下,子图对一条记录使用单数实体。对于多条记录,请使用复数实体和过滤器:`where: {id_in:[X,Y,Z]}`或者`where: {volume_gt:100000}`。 -Example of inefficient querying: +低效查询示例: ```graphql query SingleRecord { @@ -238,7 +238,7 @@ query SingleRecord { } ``` -Example of optimized querying: +优化查询示例: ```graphql query ManyRecords { @@ -249,9 +249,9 @@ query ManyRecords { } ``` -### Combine multiple queries in a single request +### 在单个请求中合并多个查询 -Your application might require querying multiple types of data as follows: +您的应用程序可能需要查询多种类型的数据,如下所示: ```graphql import { execute } from "your-favorite-graphql-client" @@ -281,9 +281,9 @@ const [tokens, counters] = Promise.all( ) ``` -While this implementation is totally valid, it will require two round trips with the GraphQL API. +虽然这个实现是完全有效的,但它需要与GraphQL API进行两次往返。 -Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows: +幸运的是,在同一GraphQL请求中发送多个查询也是有效的,如下所示: ```graphql import { execute } from "your-favorite-graphql-client" @@ -304,13 +304,13 @@ query GetTokensandCounters { const { result: { tokens, counters } } = execute(query) ``` -This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**. +这个方法通过减少花费在网络上的时间(为你节省一次到API的往返)来**改进整体性能**,并提供**更简洁的实现**。 ### 利用GraphQL片段 -A helpful feature to write GraphQL queries is GraphQL Fragment. +编写GraphQL查询的一个有用功能是GraphQL片段。 -Looking at the following query, you will notice that some fields are repeated across multiple Selection-Sets (`{ ... }`): +查看以下查询,您会注意到某些字段在多个选择集(`{ ... }`)中重复: ```graphql query { @@ -330,12 +330,12 @@ query { } ``` -Such repeated fields (`id`, `active`, `status`) bring many issues: +此类重复字段(`id`, `active`, `status`)会带来许多问题: -- More extensive queries become harder to read.
-When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- 更广泛的查询变得更难阅读。 +- 当使用根据查询生成 TypeScript 类型的工具时(_在最后一节有更多相关内容_),`newDelegate`和`oldDelegate`将产生两个不同的内联接口。 -A refactored version of the query would be the following: +查询的重构版本如下: ```graphql query { @@ -359,15 +359,15 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. +使用 GraphQL `fragment` 将提高可读性(尤其是在大规模使用时),并带来更好的 TypeScript 类型生成。 -When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). +当使用类型生成工具时,上面的查询将生成一个适当的`DelegateItemFragment`类型(_参见最后的“工具”一节_)。 ### GraphQL片段的注意事项 ### 片段必须是一种类型 -A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: +片段不能基于不适用的类型,简而言之,**不能基于没有字段的类型**: ```graphql fragment MyFragment on BigInt { @@ -375,11 +375,11 @@ fragment MyFragment on BigInt { } ``` -`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. +`BigInt` 是一个 **scalar**(原生“普通”类型),不能用作片段的基础。 #### 如何传播片段 -Fragments are defined on specific types and should be used accordingly in queries. +片段是在特定类型上定义的,应该在查询中相应地使用。 例子: @@ -402,20 +402,20 @@ fragment VoteItem on Vote { } ``` -`newDelegate` and `oldDelegate` are of type `Transcoder`. +`newDelegate`和`oldDelegate`的类型都是`Transcoder`。 -It is not possible to spread a fragment of type `Vote` here. +在这里无法展开`Vote`类型的片段。 #### 将片段定义为数据的原子业务单元。 -GraphQL `Fragment`s must be defined based on their usage. +必须根据它们的用途来定义GraphQL `Fragment`。 -For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+对于大多数用例,为每个类型定义一个片段(在重复使用字段或生成类型的情况下)就足够了。 -Here is a rule of thumb for using fragments: +以下是使用片段的经验法则: -- When fields of the same type are repeated in a query, group them in a `Fragment`. -- When similar but different fields are repeated, create multiple fragments, for instance: +- 当相同类型的字段在查询中重复出现时,将它们分组到一个`Fragment`中。 +- 当重复类似但不相同的字段时,创建多个片段,例如: ```graphql # base fragment (主要在上架中使用) @@ -438,51 +438,51 @@ fragment VoteWithPoll on Vote { --- -## The Essential Tools +## 重要工具 ### GraphQL基于web的浏览器 -Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries. +通过在应用程序中运行查询来反复迭代它们可能很繁琐。 因此,请放心使用 [Graph Explorer](https://thegraph.com/explorer) 在将查询添加到应用程序之前测试它们。 Graph Explorer将为您提供一个预配置的 GraphQL playground,以测试您的查询。 -If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql). +如果您正在寻找一种更灵活的方式来调试或测试您的查询,也可以使用其他类似的基于 web 的工具,如 [Altair](https://altairgraphql.dev/) 和 [GraphiQL](https://graphiql-online.com/graphiql)。 ### GraphQL Linting -In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools. +为了遵循上述最佳实践和语法规则,强烈建议使用以下工作流和IDE工具。 **GraphQL ESLint** -[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started)将帮助您毫不费力地遵循 GraphQL 最佳实践。 -[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as: +[设置 “operations-recommended”](https://the-guild.dev/graphql/eslint/docs/configs)配置将强制执行基本规则,例如: -- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type? -- `@graphql-eslint/no-unused variables`: should a given variable stay unused? +- `@graphql-eslint/fields-on-correct-type`: 字段是否用于适当类型? +- `@graphql-eslint/no-unused-variables`: 给定的变量是否未被使用? - 还有更多! -This will allow you to **catch errors without even testing queries** on the playground or running them in production! +这将使您**无需在 playground 中测试查询**或在生产中运行它们,就能捕获错误! ### IDE插件 -**VSCode and GraphQL** +**VSCode和GraphQL** -The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: +[GraphQL VSCode扩展](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql)是开发工作流程中一个很好的补充,可以获得: -- Syntax highlighting -- Autocomplete suggestions -- Validation against schema -- Snippets -- Go to definition for fragments and input types +- 语法高亮 +- 自动完成建议 +- 针对模式的验证 +- 代码片段 +- 转到片段和输入类型的定义 -If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+如果您使用 `graphql-eslint`,[ESLint VSCode 扩展](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint)是必不可少的,它能正确地以内联方式显示代码中的错误和警告。 -**WebStorm/Intellij and GraphQL** +**WebStorm/Intellij和GraphQL** -The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: +[JS GraphQL插件](https://plugins.jetbrains.com/plugin/8097-graphql/)将通过提供以下功能,大大改进您使用 GraphQL 的体验: -- Syntax highlighting -- Autocomplete suggestions -- Validation against schema -- Snippets +- 语法高亮 +- 自动完成建议 +- 针对模式的验证 +- 代码片段 -For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. +了解更多关于此主题的信息,请查阅[WebStorm 文章](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/),该文章展示了此插件的所有主要功能。 diff --git a/website/src/pages/zh/subgraphs/querying/distributed-systems.mdx b/website/src/pages/zh/subgraphs/querying/distributed-systems.mdx index 10acf15d56be..f8e10f9dfe6a 100644 --- a/website/src/pages/zh/subgraphs/querying/distributed-systems.mdx +++ b/website/src/pages/zh/subgraphs/querying/distributed-systems.mdx @@ -29,9 +29,9 @@ Graph 是分布式系统实现的协议。 ## 轮询更新的数据 -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced.
+The Graph 提供 `block: { number_gte: $minBlock }` API,确保响应是针对等于或高于`$minBlock`的单个区块。 如果向`graph-node`实例发出请求并且最小区块尚未同步,则`graph-node`将返回错误。 如果 `graph-node`已同步最小区块,它将返回最新区块的响应。 如果请求是发给 Edge & Node 网关的,网关将过滤掉任何尚未同步最小区块的索引人,并请求索引人已同步的最新区块。 -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: +我们可以使用`number_gte`,从而确保在循环中轮询数据时,时间不会倒流。 这是一个例子: ```javascript /// Updates the protocol.paused variable to the latest @@ -78,7 +78,7 @@ async function updateProtocolPaused() { 另一个用例是检索一个更大的集合,或者更一般地说,跨多个请求检索相关项目。 与轮询案例(所需的一致性是及时向前进行)不同,此用例所需的一致性是针对单个时间点的。 -Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. +在这里,我们将使用 `block: { hash: $blockHash }` 参数将我们所有的结果锚定到同一个区块。 ```javascript /// Gets a list of domain names from a single block using pagination diff --git a/website/src/pages/zh/subgraphs/querying/from-an-application.mdx b/website/src/pages/zh/subgraphs/querying/from-an-application.mdx index f9b6bf63e45f..35316d8dbbc3 100644 --- a/website/src/pages/zh/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/zh/subgraphs/querying/from-an-application.mdx @@ -1,53 +1,54 @@ --- title: 从应用程序中进行查询 +sidebarTitle: 从应用程序中进行查询 --- -Learn how to query The Graph from your application. +学习如何从您的应用程序查询The Graph。 -## Getting GraphQL Endpoints +## 获取GraphQL端点 -During the development process, you will receive a GraphQL API endpoint at two different stages: one for testing in Subgraph Studio, and another for making queries to The Graph Network in production.
+在开发过程中,您将在两个不同阶段收到 GraphQL API 端点:一个用于在 Subgraph Studio 中测试,另一个用于在生产中查询 The Graph 网络。 -### Subgraph Studio Endpoint +### Subgraph Studio 端点 -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +将Subgraph部署到[Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/)后,您将收到一个如下所示的端点: ``` https://api.studio.thegraph.com/query/// ``` -> This endpoint is intended for testing purposes **only** and is rate-limited. +> 此端点**仅**用于测试目的,并且有速率限制。 -### The Graph Network Endpoint +### The Graph网络端点 -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +在将您的子图发布到网络后,您将收到一个看起来像这样的端点: ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> 此端点用于在网络上正式使用。 它允许您使用各种GraphQL客户端库查询子图并使用索引数据填充您的应用程序。 -## Using Popular GraphQL Clients +## 使用热门GraphQL客户端 -### Graph Client +### Graph 客户端 -The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: +The Graph提供了自己的GraphQL客户端,`graph-client`支持以下独特功能: - 跨链子图处理: 在一个查询中从多个子图进行查询 -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- [自动区块跟踪](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [自动分页](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - 完全类型化的结果 -> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native.
As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph. +> 注意:`graph-client` 与其他受欢迎的GraphQL客户端(例如Apollo和URQL)集成,这些客户端与React、Angular、Node.js和React Native等环境兼容。 因此,使用 `graph-client` 将为你提供与The Graph交互的更佳体验。 -### Fetch Data with Graph Client +### 使用 Graph客户端获取数据 -Let's look at how to fetch data from a subgraph with `graph-client`: +让我们看看如何使用 `graph-client` 从子图获取数据: #### 步骤1 -Install The Graph Client CLI in your project: +在您的项目中安装The Graph客户端CLI: ```sh yarn add -D @graphprotocol/client-cli @@ -57,7 +58,7 @@ npm install --save-dev @graphprotocol/client-cli #### 步骤2 -Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file): +在`.graphql`文件中(或内嵌在`.js`或`.ts`文件中)定义您的查询: ```graphql query ExampleQuery { @@ -86,7 +87,7 @@ query ExampleQuery { #### 步骤3 -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +创建一个配置文件(名为 `.graphclientrc.yml`),并指向The Graph提供的GraphQL端点,例如: ```yaml # .graphclientrc.yml @@ -104,17 +105,17 @@ documents: - ./src/example-query.graphql ``` -#### Step 4 +#### 步骤4 -Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: +运行以下The Graph Client CLI命令,生成类型化且可直接使用的JavaScript代码: ```sh graphclient build ``` -#### Step 5 +#### 步骤5 -Update your `.ts` file to use the generated typed GraphQL documents: +更新您的 `.ts` 文件以使用生成的类型化 GraphQL 文档: ```tsx import React, { useEffect } from 'react' @@ -152,27 +153,27 @@ function App() { export default App ``` -> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples).
However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +> **重要注意:** `graph-client` 与其他GraphQL客户端(如Apollo客户端、URQL或React Query)完全集成; 您可以[在官方仓库中找到示例](https://github.com/graphprotocol/graph-client/tree/main/examples)。 然而,如果您选择使用其他客户端,请记住**您将无法使用跨链子图处理或自动分页,它们是查询The Graph的核心功能**。 -### Apollo Client +### Apollo 客户端 -[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android. +[Apollo 客户端](https://www.apollographql.com/docs/) 是前端生态系统常见的 GraphQL 客户端。它可用于React、Angular、Vue、Ember、iOS和Android。 -Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: +虽然它是最重的客户端,但它有许多功能可以在 GraphQL 之上构建高级用户界面: -- Advanced error handling +- 高级错误处理 - 分页 -- Data prefetching -- Optimistic UI -- Local state management +- 预获取数据 +- 乐观式UI(Optimistic UI) +- 本地状态管理 -### Fetch Data with Apollo Client +### 通过 Apollo 客户端获取数据 -Let's look at how to fetch data from a subgraph with Apollo client: +让我们看看如何用 Apollo 客户端从子图中获取数据: #### 步骤1 -Install `@apollo/client` and `graphql`: +安装 `@apollo/client` 和 `graphql`: ```sh npm install @apollo/client graphql @@ -180,7 +181,7 @@ npm install @apollo/client graphql #### 步骤2 -Query the API with the following code: +用以下代码查询API: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -215,7 +216,7 @@ client #### 步骤3 -To use variables, you can pass in a `variables` argument to the query: +要使用变量,你可以在查询中传递一个`variables`参数: ```javascript const tokensQuery = ` @@ -246,22 +247,22 @@ client }) ``` -### URQL Overview +### URQL 概述 -[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: +[URQL](https://formidable.com/open-source/urql/)可以在 Node.js, React/Preact, Vue 和 Svelte 环境中使用,具有更高级的功能: - 
灵活的缓存系统 - 可扩展设计(使在它上面添加新功能变得容易) - 轻量级捆绑包(比 Apollo Client 小约5倍) - 支持文件上传和离线模式 -### Fetch data with URQL +### 通过 URQL 获取数据 -Let's look at how to fetch data from a subgraph with URQL: +让我们看看如何使用 URQL 从子图获取数据: #### 步骤1 -Install `urql` and `graphql`: +安装 `urql` 和 `graphql`: ```sh npm install urql graphql @@ -269,7 +270,7 @@ npm install urql graphql #### 步骤2 -Query the API with the following code: +用以下代码查询API: ```javascript import { createClient } from 'urql' diff --git a/website/src/pages/zh/subgraphs/querying/graph-client/README.md b/website/src/pages/zh/subgraphs/querying/graph-client/README.md index 416cadc13c6f..7df9f2e3a90d 100644 --- a/website/src/pages/zh/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/zh/subgraphs/querying/graph-client/README.md @@ -1,44 +1,44 @@ -# The Graph Client Tools +# The Graph客户端工具 -This repo is the home for [The Graph](https://thegraph.com) consumer-side tools (for both browser and NodeJS environments). +这个仓库包含 [The Graph](https://thegraph.com) 的消费者端工具(适用于浏览器和 NodeJS 环境)。 -## Background +## 背景 -The tools provided in this repo are intended to enrich and extend the DX, and add the additional layer required for dApps in order to implement distributed applications. +本仓库提供的工具旨在丰富和扩展 DX,并添加 dApp 实现分布式应用所需的附加层。 -Developers who consume data from [The Graph](https://thegraph.com) GraphQL API often need peripherals for making data consumption easier, and also tools that allow using multiple indexers at the same time. +从 [The Graph](https://thegraph.com) GraphQL API 消费数据的开发者常常需要让数据消费更容易的周边工具,以及允许同时使用多个索引人的工具。 -## Features and Goals +## 特征和目标 -This library is intended to simplify the network aspect of data consumption for dApps. The tools provided within this repository are intended to run at build time, in order to make execution faster and performant at runtime. 
+这个库旨在简化dApp数据消耗的网络方面。 这个仓库中提供的工具是为了在构建时运行,以便在运行时执行得更快、更高效。 -> The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! +> 在这个仓库中提供的工具可以单独使用,但你也可以和任何现有的 GraphQL 客户端一起使用! -| Status | Feature | Notes | -| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| 状态 | 特征 | 注意 | +| :-: | ---------------------------------------------- | ----------------------------------------------------------------------------------------------------- | +| ✅ | 多个索引人 | 基于获取策略 | +| ✅ | 获取策略 | 超时、重试、回退、竞速、最高值 | +| ✅ | 构建时间验证和优化 | | +| ✅ | 客户端组合 | 改进执行规划程序(基于 GraphQL-Mesh) | +| ✅ | 跨链子图处理 | 使用相似子图作为单个源 | +| ✅ | 原始执行 (独立模式) | 没有包装GraphQL客户端 | +| ✅ | 本地(客户端) 突变 | | +| ✅ | [自动区块跟踪](../packages/block-tracking/README.md) | 跟踪区块编号 
[如这里描述的](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [自动分页](../packages/auto-pagination/README.md) | 在单次调用中执行多个请求以获取超过索引人限制的数据 | +| ✅ | 与 `@apollo/client` 集成 | | +| ✅ | 与 `urql` 集成 | | +| ✅ | TypeScript 支持 | 具有内置的 GraphQL Codegen 和 `TypedDocumentNode` | +| ✅ | [`@live` 查询](./live.md) | 基于轮询 | -> You can find an [extended architecture design here](./architecture.md) +> 您可以在这里找到一个[扩展架构设计](./architecture.md)。 -## Getting Started +## 开始 -You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: +您可以观看 [`graphql.wtf` 第45集](https://graphql.wtf/episodes/45-the-graph-client) 来了解更多关于Graph客户端的信息: [![GraphQL.wtf Episode 45](https://img.youtube.com/vi/ZsRAmyUtvwg/0.jpg)](https://graphql.wtf/episodes/45-the-graph-client) -To get started, make sure to install [The Graph Client CLI] in your project: +要开始使用,请确保在您的项目中安装 [The Graph 客户端 CLI]: ```sh yarn add -D @graphprotocol/client-cli @@ -46,9 +46,9 @@ yarn add -D @graphprotocol/client-cli npm install --save-dev @graphprotocol/client-cli ``` -> The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app! +> CLI 是作为dev 依赖安装的,因为我们正在使用它来产生优化的运行时工件,这些工件可以直接从您的应用中加载! -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +创建一个配置文件(名为 `.graphclientrc.yml`),指向由 The Graph 提供的 GraphQL 端点,例如: ```yml # .graphclientrc.yml @@ -59,15 +59,15 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 ``` -Now, create a runtime artifact by running The Graph Client CLI: +现在,通过运行 The Graph 客户端 CLI 创建运行时工件: ```sh graphclient build ``` -> Note: you need to run this with `yarn` prefix, or add that as a script in your `package.json`. 
+> 注意:您需要使用 `yarn` 前缀运行此操作,或者在您的 `package.json` 中添加一个脚本。 -This should produce a ready-to-use standalone `execute` function, that you can use for running your application GraphQL operations, you should have an output similar to the following: +这将产生一个可随时使用的独立 `execute` 函数,你可以用它来运行应用程序的 GraphQL 操作,并得到类似于以下的输出: ```sh GraphClient: Cleaning existing artifacts @@ -80,7 +80,7 @@ GraphClient: Reading the configuration 🕸️: Done! => .graphclient ``` -Now, the `.graphclient` artifact is generated for you, and you can import it directly from your code, and run your queries: +现在,`.graphclient` 工件已为你生成,你可以直接从你的代码中导入它,并运行你的查询: ```ts import { execute } from '../.graphclient' @@ -111,54 +111,54 @@ async function main() { main() ``` -### Using Vanilla JavaScript Instead of TypeScript +### 使用 Vanilla JavaScript 而不是 TypeScript -GraphClient CLI generates the client artifacts as TypeScript files by default, but you can configure CLI to generate JavaScript and JSON files together with additional TypeScript definition files by using `--fileType js` or `--fileType json`. +GraphClient CLI 默认以 TypeScript 文件生成客户端工件,但您可以使用 `--fileType js` 或 `--fileType json` 来配置 CLI,以生成 JavaScript 和 JSON 文件以及额外的 TypeScript 定义文件。 -`js` flag generates all files as JavaScript files with ESM Syntax and `json` flag generates source artifacts as JSON files while entrypoint JavaScript file with old CommonJS syntax because only CommonJS supports JSON files as modules. +`js` 标志将所有文件生成为使用 ESM 语法的 JavaScript 文件;`json` 标志将源工件生成为 JSON 文件,而入口 JavaScript 文件使用旧的 CommonJS 语法,因为只有 CommonJS 支持将 JSON 文件作为模块。 -Unless you use CommonJS(`require`) specifically, we'd recommend you to use `js` flag. 
+除非您明确使用 CommonJS(`require`),否则我们建议您使用 `js` 标志。 `graphclient --fileType js` -- [An example for JavaScript usage in CommonJS syntax with JSON files](../examples/javascript-cjs) -- [An example for JavaScript usage in ESM syntax](../examples/javascript-esm) +- [使用JSON文件在CommonJS语法中使用JavaScript的示例](../examples/javascript-cjs) +- [一个 JavaScript 在ESM 语法中的使用示例](../examples/javascript-esm) -#### The Graph Client DevTools +#### The Graph客户端开发工具 -The Graph Client CLI comes with a built-in GraphiQL, so you can experiment with queries in real-time. +The Graph客户端CLI 带有内置的 GraphiQL,因此您可以实时尝试查询。 -The GraphQL schema served in that environment, is the eventual schema based on all composed Subgraphs and transformations you applied. +在该环境中提供的 GraphQL 模式,是基于您所应用的全部组合子图和转换得到的最终模式。 -To start the DevTool GraphiQL, run the following command: +要启动DevTool GraphiQL,请运行以下命令: ```sh graphclient serve-dev ``` -And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 
🥳 +然后打开 http://localhost:4000/ 以使用 GraphiQL。您现在可以在本地试用您的 Graph 客户端 GraphQL 模式!🥳 -#### Examples +#### 例子 -You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: +您还可以参考[此仓库中的示例目录](../examples),了解更高级的示例和集成示例: -- [TypeScript & React example with raw `execute` and built-in GraphQL-Codegen](../examples/execute) -- [TS/JS NodeJS standalone mode](../examples/node) -- [Client-Side GraphQL Composition](../examples/composition) -- [Integration with Urql and React](../examples/urql) -- [Integration with NextJS and TypeScript](../examples/nextjs) -- [Integration with Apollo-Client and React](../examples/apollo) -- [Integration with React-Query](../examples/react-query) -- _Cross-chain merging (same Subgraph, different chains)_ -- - [Parallel SDK calls](../examples/cross-chain-sdk) -- - [Parallel internal calls with schema extensions](../examples/cross-chain-extension) -- [Customize execution with Transforms (auto-pagination and auto-block-tracking)](../examples/transforms) +- [TypeScript & React 示例,使用原始的 `execute` 和内置的 GraphQL-Codegen](../examples/execute) +- [TS/JS NodeJS独立模式](../examples/node) +- [客户端 GraphQL 组合](../examples/composition) +- [与Urql 和 React集成](../examples/urql) +- [与NextJS 和 TypeScript集成](../examples/nextjs) +- [与Apollo-Client 和 React集成](../examples/apollo) +- [与React-Query集成](../examples/react-query) +- _跨链合并 (相同的子图,不同的链)_ +- - [并行的 SDK 调用](../examples/cross-chain-sdk) +- - [具有模式扩展的并行内部调用](../examples/cross-chain-extension) +- [使用Transforms(自动分页和自动块跟踪)自定义执行](../examples/transforms) -### Advanced Examples/Features +### 高级示例/功能 -#### Customize Network Calls +#### 自定义网络调用 -You can customize the network execution (for example, to add authentication headers) by using `operationHeaders`: +您可以使用`operationHeaders`自定义网络执行 (例如,添加身份验证头): ```yaml sources: @@ -170,7 +170,7 @@ sources: Authorization: Bearer MY_TOKEN ``` -You can also use runtime variables if you wish, and specify it in a declarative way: 
+如果您愿意,您也可以使用运行时变量,并以声明方式指定: ```yaml sources: @@ -182,7 +182,7 @@ sources: Authorization: Bearer {context.config.apiToken} ``` -Then, you can specify that when you execute operations: +然后,您可以在执行操作时指定这些变量: ```ts execute(myQuery, myVariables, { @@ -192,11 +192,11 @@ execute(myQuery, myVariables, { }) ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> 您可以在这里找到 [`graphql` 处理程序的完整文档](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference)。 -#### Environment Variables Interpolation +#### 环境变量插值 -If you wish to use environment variables in your Graph Client configuration file, you can use interpolation with `env` helper: +如果你想要在你的Graph客户端配置文件中使用环境变量,你可以使用 `env` 助手的插值: ```yaml sources: @@ -208,9 +208,9 @@ sources: Authorization: Bearer {env.MY_API_TOKEN} # runtime ``` -Then, make sure to have `MY_API_TOKEN` defined when you run `process.env` at runtime. +然后,请确保在运行时通过 `process.env` 定义了 `MY_API_TOKEN`。 -You can also specify environment variables to be filled at build time (during `graphclient build` run) by using the env-var name directly: +您还可以直接使用环境变量名称,指定要在构建时(运行 `graphclient build` 期间)填充的环境变量: ```yaml sources: @@ -222,20 +222,20 @@ sources: Authorization: Bearer ${MY_API_TOKEN} # build time ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> 您可以在这里找到 [`graphql` 处理程序的完整文档](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference)。 -#### Fetch Strategies and Multiple Graph Indexers +#### 获取策略和多个 Graph 索引人 -It's a common practice to use more than one indexer in dApps, so to achieve the ideal experience with The Graph, you can specify several `fetch` strategies in order to make it more smooth and simple. +在 dApp 中使用多个索引人是常见的做法,因此为了获得理想的 The Graph 体验,您可以指定多种 `fetch` 策略,使其更顺畅、更简单。 -All `fetch` strategies can be combined to create the ultimate execution flow.
+所有的 `fetch` 策略都可以组合起来,创建最终的执行流程。
- `retry` + `retry` -The `retry` mechanism allow you to specify the retry attempts for a single GraphQL endpoint/source. +`retry` 机制允许您为单个 GraphQL 端点/源指定重试次数。 -The retry flow will execute in both conditions: a netword error, or due to a runtime error (indexing issue/inavailability of the indexer). +重试流程将在以下两种情况下执行:网络错误,或运行时错误(索引问题/索引人不可用)。 ```yaml sources: @@ -249,9 +249,9 @@ sources:
- `timeout` + `timeout` -The `timeout` mechanism allow you to specify the `timeout` for a given GraphQL endpoint. +`timeout` 机制允许您为给定的 GraphQL 端点指定超时时间(`timeout`)。 ```yaml sources: @@ -265,11 +265,11 @@ sources:
- `fallback` + `fallback` -The `fallback` mechanism allow you to specify use more than one GraphQL endpoint, for the same source. +`fallback` 机制允许您为同一来源指定多个 GraphQL 端点。 -This is useful if you want to use more than one indexer for the same Subgraph, and fallback when an error/timeout happens. You can also use this strategy in order to use a custom indexer, but allow it to fallback to [The Graph Hosted Service](https://thegraph.com/hosted-service). +如果您想要为同一个子图使用多个索引人,并在发生错误/超时时进行回退,这是有用的。 您也可以使用此策略来使用自定义索引人,但允许它回退到 [The Graph托管服务](https://thegraph.com/hosted-service)。 ```yaml sources: @@ -287,11 +287,11 @@ sources:
- `race` + `race` -The `race` mechanism allow you to specify use more than one GraphQL endpoint, for the same source, and race on every execution. +`race` 机制允许您为同一来源指定多个 GraphQL 端点,并在每次执行时进行竞速。 -This is useful if you want to use more than one indexer for the same Subgraph, and allow both sources to race and get the fastest response from all specified indexers. +如果你想要为同一个子图使用多个索引人,并让这些来源相互竞速,从所有指定的索引人中获得最快的响应,这是有用的。 ```yaml sources: @@ -307,11 +307,11 @@ sources:
- `highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + `highestValue` -This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. +此策略允许您向同一来源的不同端点发送并行请求,并选择最新的数据。 + +如果您想要在不同的索引人/源之间为同一子图选择同步程度最高的数据,这是有用的。 ```yaml sources: @@ -349,9 +349,9 @@ graph LR;
-#### Block Tracking +#### 区块跟踪 -The Graph Client can track block numbers and do the following queries by following [this pattern](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) with `blockTracking` transform; +Graph 客户端可以通过 `blockTracking` 转换,按照[此模式](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data)跟踪区块号并执行后续查询: ```yaml sources: @@ -369,11 +369,11 @@ sources: ignoreOperationNames: [NotFollowed] ``` -[You can try a working example here](../examples/transforms) +[您可以在此尝试一个工作示例](../examples/transforms)。 -#### Automatic Pagination +#### 自动分页 -With most subgraphs, the number of records you can fetch is limited. In this case, you have to send multiple requests with pagination. +对于大多数子图,您可以获取的记录数量是有限的。在这种情况下,您必须发送多个带分页的请求。 ```graphql query { @@ -385,7 +385,7 @@ query { } ``` -So you have to send the following operations one after the other: +所以您必须一个接一个地发送以下操作: ```graphql query { @@ -397,7 +397,7 @@ query { } ``` -Then after the first response: +然后在第一个响应之后: ```graphql query { @@ -409,9 +409,9 @@ query { } ``` -After the second response, you have to merge the results manually. But instead The Graph Client allows you to do the first one and automatically does those multiple requests for you under the hood. +在第二个响应后,您必须手动合并结果。 而 The Graph 客户端允许您只发出第一个请求,并自动在后台为您执行这些多个请求。 -All you have to do is: +您只需这样做: ```yaml sources: @@ -425,17 +425,17 @@ sources: validateSchema: true ``` -[You can try a working example here](../examples/transforms) +[您可以在此尝试一个工作示例](../examples/transforms)。 -#### Client-side Composition +#### 客户端组合 -The Graph Client has built-in support for client-side GraphQL Composition (powered by [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). 
+The Graph 客户端内置支持客户端 GraphQL 组合(由 [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas) 驱动)。 -You can leverage this feature in order to create a single GraphQL layer from multiple Subgraphs, deployed on multiple indexers. +您可以利用此功能,从部署在多个索引人上的多个子图创建单一的 GraphQL 层。 -> 💡 Tip: You can compose any GraphQL sources, and not only Subgraphs! +> 💡 提示:你可以组合任何 GraphQL 源,而不仅仅是子图! -Trivial composition can be done by adding more than one GraphQL source to your `.graphclientrc.yml` file, here's an example: +最简单的组合方式是将多个 GraphQL 源添加到您的 `.graphclientrc.yml` 文件,下面是一个示例: ```yaml sources: @@ -449,7 +449,7 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/graphprotocol/compound-v2 ``` -As long as there a no conflicts across the composed schemas, you can compose it, and then run a single query to both Subgraphs: +只要组合的模式之间没有冲突,您就可以进行组合,然后用单个查询同时查询两个子图: ```graphql query myQuery { @@ -470,23 +470,23 @@ query myQuery { } ``` -You can also resolve conflicts, rename parts of the schema, add custom GraphQL fields, and modify the entire execution phase. +您也可以解决冲突,重命名模式的一部分,添加自定义 GraphQL 字段,并修改整个执行阶段。 -For advanced use-cases with composition, please refer to the following resources: +关于组合的高级用例,请参考以下资源: - [Advanced Composition Example](../examples/composition) - [GraphQL-Mesh Schema transformations](https://graphql-mesh.com/docs/transforms/transforms-introduction) - [GraphQL-Tools Schema-Stitching documentation](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas) -#### TypeScript Support +#### TypeScript 支持 -If your project is written in TypeScript, you can leverage the power of [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) and have a fully-typed GraphQL client experience. 
+如果你的项目是用 TypeScript 编写的,你可以利用 [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) 的强大功能,获得完全类型化的 GraphQL 客户端体验。 The standalone mode of The GraphQL, and popular GraphQL client libraries like Apollo-Client and urql has built-in support for `TypedDocumentNode`! -The Graph Client CLI comes with a ready-to-use configuration for [GraphQL Code Generator](https://graphql-code-generator.com), and it can generate `TypedDocumentNode` based on your GraphQL operations. +The Graph 客户端 CLI 带有 [GraphQL 代码生成器](https://graphql-code-generator.com) 的现成配置,它可以根据您的 GraphQL 操作生成 `TypedDocumentNode`。 -To get started, define your GraphQL operations in your application code, and point to those files using the `documents` section of `.graphclientrc.yml`: +要开始使用,请在应用程序代码中定义您的 GraphQL 操作,并通过 `.graphclientrc.yml` 的 `documents` 部分指向这些文件: ```yaml sources: @@ -496,7 +496,7 @@ documents: - ./src/example-query.graphql ``` -You can also use Glob expressions, or even point to code files, and the CLI will find your GraphQL queries automatically: +您也可以使用 Glob 表达式,甚至指向代码文件,CLI 会自动找到您的 GraphQL 查询: ```yaml documents: @@ -504,11 +504,11 @@ documents: - './src/**/*.{ts,tsx,js,jsx}' ``` -Now, run the GraphQL CLI `build` command again, the CLI will generate a `TypedDocumentNode` object under `.graphclient` for every operation found. +现在,再次运行 GraphQL CLI `build` 命令,CLI 将在`.graphclient`下为找到的每个操作生成一个 `TypedDocumentNode` 对象。 -> Make sure to name your GraphQL operations, otherwise it will be ignored! +> 请务必命名您的 GraphQL 操作,否则将被忽略! -For example, a query called `query ExampleQuery` will have the corresponding `ExampleQueryDocument` generated in `.graphclient`. 
You can now import it and use that for your GraphQL calls, and you'll have a fully typed experience without writing or specifying any TypeScript manually: +例如,一个叫做`query ExampleQuery`的查询将在`.graphclient`中生成相应的`ExampleQueryDocument`。 您现在可以导入它并用于您的 GraphQL 调用,您将获得完全类型化的体验,无需手动编写或指定任何 TypeScript: ```ts import { ExampleQueryDocument, execute } from '../.graphclient' @@ -520,17 +520,17 @@ async function main() { } ``` -> You can find a [TypeScript project example here](../examples/urql). +> 你可以在这里找到一个[TypeScript项目示例](../examples/urql)。 -#### Client-Side Mutations +#### 客户端突变 -Due to the nature of Graph-Client setup, it is possible to add client-side schema, that you can later bridge to run any arbitrary code. +由于 Graph 客户端设置方式的特性,您可以添加客户端模式,之后可以桥接它来运行任意代码。 -This is helpful since you can implement custom code as part of your GraphQL schema, and have it as unified application schema that is easier to track and develop. +这很有帮助,因为您可以将自定义代码实现为 GraphQL 模式的一部分,并将其作为更易于跟踪和开发的统一应用模式。 -> This document explains how to add custom mutations, but in fact you can add any GraphQL operation (query/mutation/subscriptions). See [Extending the unified schema article](https://graphql-mesh.com/docs/guides/extending-unified-schema) for more information about this feature. 
+> 本文档解释了如何添加自定义突变,但事实上,您可以添加任何GraphQL操作(查询/突变/订阅)。请参阅[扩展统一模式文章](https://graphql-mesh.com/docs/guides/extending-unified-schema)获取有关此功能的更多信息。 -To get started, define a `additionalTypeDefs` section in your config file: +要开始使用,请在配置文件中定义一个 `additionalTypeDefs` 部分: ```yaml additionalTypeDefs: | @@ -548,14 +548,14 @@ additionalTypeDefs: | } ``` -Then, add a pointer to a custom GraphQL resolvers file: +然后,添加指向自定义 GraphQL 解析器文件的指针: ```yaml additionalResolvers: - './resolvers' ``` -Now, create `resolver.js` (or, `resolvers.ts`) in your project, and implement your custom mutation: +现在,在你的项目中创建 `resolver.js` (或`resolvers.ts`),并实现你的自定义突变: ```js module.exports = { @@ -570,7 +570,7 @@ module.exports = { } ``` -If you are using TypeScript, you can also get fully type-safe signature by doing: +如果您正在使用 TypeScript,您也可以通过以下操作获得完全类型安全的签名: ```ts import { Resolvers } from './.graphclient' @@ -590,7 +590,7 @@ const resolvers: Resolvers = { export default resolvers ``` -If you need to inject runtime variables into your GraphQL execution `context`, you can use the following snippet: +如果您需要将运行时变量注入到您的 GraphQL 执行`context`中,您可以使用以下代码: ```ts execute( @@ -602,10 +602,10 @@ execute( ) ``` -> [You can read more about client-side schema extensions here](https://graphql-mesh.com/docs/guides/extending-unified-schema) +> [您可以在这里阅读更多关于客户端模式扩展的信息](https://graphql-mesh.com/docs/guides/extending-unified-schema)。 -> [You can also delegate and call Query fields as part of your mutation](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources) +> [您也可以在突变中委托并调用查询字段](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources)。 -## License +## 许可协议 -Released under the [MIT license](../LICENSE). 
+在 [MIT license](../LICENSE)下发布。 diff --git a/website/src/pages/zh/subgraphs/querying/graph-client/architecture.md b/website/src/pages/zh/subgraphs/querying/graph-client/architecture.md index 99098cd77b95..54d5e9de5300 100644 --- a/website/src/pages/zh/subgraphs/querying/graph-client/architecture.md +++ b/website/src/pages/zh/subgraphs/querying/graph-client/architecture.md @@ -1,13 +1,13 @@ -# The Graph Client Architecture +# The Graph 客户端架构 -To address the need to support a distributed network, we plan to take several actions to ensure The Graph client provides everything app needs: +为了满足支持分布式网络的需要,我们计划采取若干行动,确保 The Graph 客户端提供应用所需的一切: -1. Compose multiple Subgraphs (on the client-side) -2. Fallback to multiple indexers/sources/hosted services -3. Automatic/Manual source picking strategy -4. Agnostic core, with the ability to run integrate with any GraphQL client +1. 组合多个子图(在客户端) +2. 回退到多个索引人/源/托管服务 +3. 自动/手动选取源策略 +4. 与客户端无关的核心,能够与任何 GraphQL 客户端集成 -## Standalone mode +## 独立模式 ```mermaid graph LR; @@ -17,7 +17,7 @@ graph LR; op-->sB[Subgraph B]; ``` -## With any GraphQL client +## 使用任意GraphQL客户端 ```mermaid graph LR; @@ -28,11 +28,11 @@ graph LR; op-->sB[Subgraph B]; ``` -## Subgraph Composition +## 子图组合 -To allow simple and efficient client-side composition, we'll use [`graphql-tools`](https://graphql-tools.com) to create a remote schema / Executor, then can be hooked into the GraphQL client. +为了实现简单高效的客户端组合,我们将使用[`graphql-tools`](https://graphql-tools.com)创建远程模式/执行器,然后可以挂接到GraphQL客户端。 -API could be either raw `graphql-tools` transformers, or using [GraphQL-Mesh declarative API](https://graphql-mesh.com/docs/transforms/transforms-introduction) for composing the schema. 
+API 可以是原始的 `graphql-tools` 变换器,也可以使用 [GraphQL-Mesh 声明式 API](https://graphql-mesh.com/docs/transforms/transforms-introduction) 来构造架构。 ```mermaid graph LR; @@ -42,9 +42,9 @@ graph LR; m-->s3[Subgraph C GraphQL schema]; ``` -## Subgraph Execution Strategies +## 子图执行策略 -Within every Subgraph defined as source, there will be a way to define it's source(s) indexer and the querying strategy, here are a few options: +在被定义为源的每一个子图中,都会有一种方法来定义其源索引人和查询策略,下面是几个选项: ```mermaid graph LR; @@ -85,9 +85,9 @@ graph LR; end ``` -> We can ship a several built-in strategies, along with a simple interfaces to allow developers to write their own. +> 我们可以提供几个内置策略,以及一个简单的接口,让开发者编写自己的策略。 -To take the concept of strategies to the extreme, we can even build a magical layer that does subscription-as-query, with any hook, and provide a smooth DX for dapps: +为了将策略概念发挥到极致,我们甚至可以构建一个以订阅即查询方式工作的神奇层,可搭配任何钩子,并为 dapp 提供流畅的 DX: ```mermaid graph LR; @@ -99,5 +99,5 @@ graph LR; sc[Smart Contract]-->|change event|op; ```
+使用此机制,开发者可以编写并执行 GraphQL `subscription`,但在后台,我们会向 The Graph 索引人执行 GraphQL `query`,并允许连接任何外部钩子/探针来重新运行操作。 +通过这种方式,我们可以监听智能合约本身的变更,而 GraphQL 客户端将填补从 The Graph 获取实时变化的需求缺口。 diff --git a/website/src/pages/zh/subgraphs/querying/graph-client/live.md b/website/src/pages/zh/subgraphs/querying/graph-client/live.md index e6f726cb4352..48451989529c 100644 --- a/website/src/pages/zh/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/zh/subgraphs/querying/graph-client/live.md @@ -1,10 +1,10 @@ -# `@live` queries in `graph-client` +# `graph-client` 中的 `@live` 查询 -Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. +Graph 客户端实现了一个自定义 `@live` 指令,可以让每个 GraphQL 查询都使用实时数据。 -## Getting Started +## 开始 -Start by adding the following configuration to your `.graphclientrc.yml` file: +首先将以下配置添加到您的`.graphclientrc.yml`文件中: ```yaml plugins: @@ -12,9 +12,9 @@ plugins: defaultInterval: 1000 ``` -## Usage +## 使用方法 -Set the default update interval you wish to use, and then you can apply the following GraphQL `@directive` over your GraphQL queries: +设置您想要使用的默认更新间隔,然后您可以在 GraphQL 查询中应用下面的 GraphQL `@directive` : ```graphql query ExampleQuery @live { @@ -26,7 +26,7 @@ query ExampleQuery @live { } ``` -Or, you can specify a per-query interval: +或者,您可以为每个查询单独指定间隔: ```graphql query ExampleQuery @live(interval: 5000) { @@ -36,8 +36,8 @@ query ExampleQuery @live(interval: 5000) { } ``` -## Integrations +## 集成 -Since the entire network layer (along with the `@live` mechanism) is implemented inside `graph-client` core, you can use Live queries with every GraphQL client (such as Urql or Apollo-Client), as long as it supports streame responses (`AsyncIterable`). +因为整个网络层(连同 `@live` 机制)都在 `graph-client` 核心内实现,您可以在任何 GraphQL 客户端(例如 Urql 或 Apollo-Client)中使用实时查询,只要它支持流式响应(`AsyncIterable`)。 -No additional setup is required for GraphQL clients cache updates. 
+GraphQL客户端缓存更新不需要额外设置。 diff --git a/website/src/pages/zh/subgraphs/querying/graphql-api.mdx b/website/src/pages/zh/subgraphs/querying/graphql-api.mdx index 450adf6248ff..0e408a66bc0d 100644 --- a/website/src/pages/zh/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/zh/subgraphs/querying/graphql-api.mdx @@ -2,23 +2,23 @@ title: GraphQL API --- -Learn about the GraphQL Query API used in The Graph. +了解在The Graph中使用的 GraphQL 查询 API。 -## What is GraphQL? +## 什么是GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) 是一种用于 API 的查询语言,也是使用您现有数据执行这些查询的运行时。The Graph 使用 GraphQL 查询子图。 -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +要理解 GraphQL 所起的更大作用,请查看[开发](/subgraphs/developing/introduction/)和[创建子图](/developing/creating-a-subgraph/)。 -## Queries with GraphQL +## 用GraphQL查询 -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +在你的子图模式中,定义了叫做`Entities`的类型。对于每个`Entity`类型,`entity` 和 `entities`字段将生成在顶级`Query`类型上。 -> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. +> 注意:在使用The Graph时,`query` 不需要包含在`graphql`查询的顶部。 ### 例子 -Query for a single `Token` entity defined in your schema: +查询在您的模式中定义的单个`Token`实体: ```graphql { @@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> Note: When querying for a single entity, the `id` field is required, and it must be written as a string. 
+> 注意:当查询单个实体时,需要填写`id`字段,它必须是一个字符串。 -Query all `Token` entities: +查询所有 `Token` 实体: ```graphql { @@ -44,10 +44,10 @@ Query all `Token` entities: ### 排序 -When querying a collection, you may: +查询集合时,您可以: -- Use the `orderBy` parameter to sort by a specific attribute. -- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. +- 使用 `orderBy` 参数按特定属性排序。 +- 使用 `orderDirection` 来指定排序方向, `asc` 用于升序或 `desc` 用于降序。 #### 示例 @@ -62,9 +62,9 @@ When querying a collection, you may: #### 嵌套实体筛选示例 -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +从 Graph 节点 [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) 开始,可以基于嵌套实体对实体进行排序。 -The following example shows tokens sorted by the name of their owner: +在以下示例中,我们根据代币所有者的名称对其进行排序: ```graphql { @@ -77,18 +77,18 @@ The following example shows tokens sorted by the name of their owner: } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> 目前,您可以在 `@entity` 和 `@derivedFrom` 字段按一级深度`String` 或 `ID`类型排序。 不幸的是,[按一级深度实体的接口排序](https://github.com/graphprotocol/graph-node/pull/4058),以及按数组和嵌套实体类型的字段排序,仍不受支持。 ### 分页 -When querying a collection, it's best to: +当查询集合时,最好: -- Use the `first` parameter to paginate from the beginning of the collection. - - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. -- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. -- Avoid using `skip` values in queries because they generally perform poorly. 
To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. +- 使用 `first` 参数从集合开头开始分页。 + - 默认排序顺序是按 `ID` 的字母数字升序,**不是**按创建时间。 +- 使用 `skip` 参数跳过实体和分页。例如,`first:100` 会显示前100个实体,`first:100, skip:100`会显示后100个实体。 +- 避免在查询中使用 `skip` 值,因为它们通常表现很差。 要检索大量条目,最好像上面的示例那样,基于某个属性对实体进行分页。 -#### Example using `first` +#### 使用 `first` 示例 查询前10 个代币: ```graphql { @@ -101,11 +101,11 @@ When querying a collection, it's best to: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +要查询集合中间的实体群组,`skip` 参数可以与 `first` 参数一起使用,以跳过从集合开头算起的指定数量的实体。 -#### Example using `first` and `skip` +#### 使用 `first` 和 `skip` 的示例 -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +查询 10 个 `Token` 实体,从集合开头偏移 10 个位置: ```graphql { @@ -116,9 +116,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example using `first` and `id_ge` +#### 使用 `first` 和 `id_ge` 的示例 -If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: +如果客户端需要检索大量实体,则基于属性进行查询和过滤会明显提高性能。 例如,客户端可以使用以下查询检索大量代币: ```graphql query manyTokens($lastID: String) { tokens(first: 1000, where: { id_gt: $lastID }) { @@ -129,16 +129,16 @@ query manyTokens($lastID: String) { tokens(first: 1000, where: { id_gt: $last } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. 
+第一次,它会用 `lastID = ""` 发送查询,对于后续请求,它会将 `lastID` 设置为上一次请求中最后一个实体的 `id` 属性。 这种方法的性能将大大优于使用递增的 `skip` 值。 ### 过滤 -- You can use the `where` parameter in your queries to filter for different properties. -- You can filter on multiple values within the `where` parameter. +- 您可以在查询中使用 `where` 参数来过滤不同的属性。 +- 您可以在 `where` 参数中筛选多个值。 -#### Example using `where` +#### 使用 `where` 的示例 -Query challenges with `failed` outcome: +查询结果为 `failed` 的挑战: ```graphql { @@ -152,7 +152,7 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +你可以使用 `_gt`、`_lte` 等后缀来进行值比较: #### 范围过滤示例 @@ -168,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### 区块过滤示例 -You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. +您也可以使用 `_change_block(number_gte: Int)` 筛选在指定区块中或之后更新的实体。 如果您只想获取已经更改的实体,例如自上次轮询以来改变的实体,那么这将非常有用。或者也可以调查或调试子图中实体的变化情况(如果与区块过滤器结合使用,则只能隔离在特定区块中发生变化的实体)。 @@ -184,7 +184,7 @@ You can also filter entities that were updated in or after a specified block wit #### 嵌套实体筛选示例 -Filtering on the basis of nested entities is possible in the fields with the `_` suffix. +在带有 `_` 后缀的字段中,可以根据嵌套实体进行过滤。 如果您希望只获取其子级实体满足条件的实体,那么这可能很有用。 @@ -202,11 +202,11 @@ Filtering on the basis of nested entities is possible in the fields with the `_` #### 逻辑运算符 -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria. +从Graph节点[`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0)起,您可以在同一个`where`参数中使用`and`或`or`运算符,根据多个标准过滤结果。 -##### `AND` Operator +##### `AND`运算符 -The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
+下面的示例过滤 `outcome` 为 `succeeded` 且 `number` 大于或等于 `100` 的挑战。 ```graphql { @@ -220,7 +220,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num } ``` -> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. +> **语法糖:** 您可以通过传递用逗号分隔的子表达式来省略 `and` 运算符,从而简化上述查询。 > > ```graphql > { @@ -234,9 +234,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num > } > ``` -##### `OR` Operator +##### `OR`运算符 -The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +下面的示例过滤 `outcome` 为 `succeeded` 或 `number` 大于或等于 `100` 的挑战。 ```graphql { @@ -250,7 +250,7 @@ The following example filters for challenges with `outcome` `succeeded` or `numb } ``` -> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries. +> **注意**:在构建查询时,重要的是要考虑使用`or`运算符的性能影响。 虽然`or`可以成为扩大搜索结果的一个有用工具,但它也可能带来很高的开销。 `or`的主要问题之一是它可能导致查询变慢。 这是因为`or`需要数据库扫描多个索引,这可能是一个耗时的过程。 为了避免这些问题,建议开发人员尽可能使用 `and` 运算符而不是 `or`。 这样可以实现更精确的过滤,并带来更快、更准确的查询。 #### 所有过滤器 @@ -279,9 +279,9 @@ _not_ends_with _not_ends_with_nocase ``` -> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types. 
+> 请注意,某些后缀只支持特定类型。 例如,`Boolean` 只支持 `_not`、`_in`和`_not_in`,但`_` 只适用于对象和接口类型。 -In addition, the following global filters are available as part of `where` argument: +此外,下列全局过滤器可以作为`where`参数的一部分: ```graphql _change_block(number_gte: Int) @@ -289,11 +289,11 @@ _change_block(number_gte: Int) ### 跨时间查询 -You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +您不仅可以查询实体在最新区块(默认)的状态,还可以查询它们在过去任意区块的状态。 可以通过在查询的顶级字段中包含一个 `block` 参数,以区块编号或区块哈希来指定查询所针对的区块。 -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +这种查询结果不会随着时间的推移而改变,即对过去某个区块的查询,无论何时执行,都将返回相同的结果。唯一的例外是,如果您在非常靠近链头的区块上进行查询,如果该区块**不**在主链上,并且链被重新组织,则结果可能会改变。 一旦一个区块被确认是最终的区块,那么查询的结果就不会改变。 -> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
+> 请注意,当前的实现仍然受到某些限制,这些限制可能会违反这些保证。该实现不能总是判断给定的区块哈希根本不在主链上,或者对于一个尚不能被认为是最终的区块,按区块哈希查询的结果可能会受到与查询同时运行的区块重组的影响。当区块是最终区块并且已知在主链上时,这些限制不会影响按区块哈希查询的结果。[这个问题](https://github.com/graphprotocol/graph-node/issues/1405)详细解释了这些限制。 #### 示例 @@ -309,7 +309,7 @@ The result of such a query will not change over time, i.e., querying at a certai } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000. +此查询将返回 `Challenge` 实体及其关联的 `Application` 实体,即它们在处理完第 8,000,000 号区块后所处的状态。 #### 示例 @@ -325,13 +325,13 @@ This query will return `Challenge` entities, and their associated `Application` } ``` -This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash. +此查询将返回 `Challenge` 实体及其关联的 `Application` 实体,即它们在处理完具有给定哈希值的区块后所处的状态。 ### 全文搜索查询 -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +全文搜索查询字段提供了一个表达性的文本搜索 API,可以添加到子图模式中并进行自定义。 请参阅[定义全文搜索字段](/developing/creating-a-subgraph/#defining-fulltext-search-fields)以将全文搜索添加到您的子图中。 -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +全文搜索查询有一个必填字段 `text`,用于提供搜索词。 在这个 `text` 搜索字段中可以使用几个特殊的全文运算符。 全文搜索运算符: @@ -344,7 +344,7 @@ Fulltext search queries have one required field, `text`, for supplying search te #### 例子 -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. 
+使用 `or` 运算符,此查询将筛选出全文字段中包含"anarchism"或"crumpet"变体的博客实体。 ```graphql { @@ -357,7 +357,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +`follow by` 运算符指定全文文档中相隔特定距离的单词。 以下查询将返回所有“decentralize”后跟着“philosophy”变体的博客。 ```graphql { @@ -385,25 +385,25 @@ The `follow by` operator specifies a words a specific distance apart in the full ### 验证 -Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more. +Graph节点使用 [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules)对其收到的 GraphQL 查询进行[基于规范](https://spec.graphql.org/October2021/#sec-Validation)的验证,该库基于 [graphql-js 参考实现](https://github.com/graphql/graphql-js/tree/main/src/validation)。 未通过验证规则的查询会返回标准错误,请访问 [GraphQL 规范](https://spec.graphql.org/October2021/#sec-Validation)来了解更多信息。 ## 模式 -The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +您的数据源的模式,即可用于查询的实体类型、值和关系,是通过 [GraphQL 接口定义语言(IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System)定义的。 -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. 
The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL模式通常定义 `queries`、`subscriptions` 和 `mutations` 的根类型。The Graph仅支持 `queries`。子图的根 `Query` 类型是从[子图清单](/developing/creating-a-subgraph/#components-of-a-subgraph)中包含的GraphQL模式自动生成的。 -> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> 注意:我们的 API 不提供对突变(mutation)的支持,因为开发人员会从他们的应用程序中直接针对底层区块链发出交易。 ### 实体 -All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field. +模式定义中所有带有 `@entity` 指令的 GraphQL 类型都将被视为实体,并且必须具有 `ID` 字段。 -> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported. +> **注意:** 目前,您的模式中的所有类型都必须有一个 `@entity` 指令。 今后,我们将把没有`@entity`指令的类型视为值对象,但这还不被支持。 ### 子图元数据 -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +所有子图都有一个自动生成的`_Meta_`对象,它提供对子图元数据的访问。可按如下方式查询: ```graphQL { @@ -421,12 +421,12 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s 如果提供了区块,则元数据为该区块的元数据;否则使用最新的索引区块。如果提供,则区块必须在子图的起始区块之后,并且小于或等于最近索引的区块。 -`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. 
+`deployment` 是一个唯一的ID,与 `subgraph.yaml` 文件的 IPFS CID 相对应。 -`block` provides information about the latest block (taking into account any block constraints passed to `_meta`): +`block` 提供了关于最新区块的信息(同时考虑到传递给`_meta`的任何区块约束): - hash:区块的哈希 - number:区块编号 -- timestamp:区块的时间戳(如果可用)(当前仅适用于索引EVM网络的子图) +- timestamp:区块的时间戳,如果可用的话(当前仅适用于索引EVM网络的子图) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors`是一个布尔值,用于标识子图在过去的某个区块中是否遇到索引错误。 diff --git a/website/src/pages/zh/subgraphs/querying/introduction.mdx b/website/src/pages/zh/subgraphs/querying/introduction.mdx index 5a87ac290adf..a20ae70f39bf 100644 --- a/website/src/pages/zh/subgraphs/querying/introduction.mdx +++ b/website/src/pages/zh/subgraphs/querying/introduction.mdx @@ -1,32 +1,32 @@ --- -title: 查询Graph +title: 查询The Graph sidebarTitle: 介绍 --- -To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer). +要立即开始查询,请访问[The Graph Explorer](https://thegraph.com/explorer)。 ## 概述 -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +当子图发布到The Graph网络时,您可以访问Graph Explorer上的子图详细信息页面,并使用“查询”选项卡来探索每个子图的已部署GraphQL API。 -## Specifics +## 详情 -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +发布到The Graph网络上的每个子图在Graph Explorer中都有一个唯一的查询URL,可以进行直接查询。您可以通过导航到子图详细信息页面并单击右上角的“查询”按钮来找到它。 ![Query Subgraph Button](/img/query-button-screenshot.png) ![Query Subgraph URL](/img/query-url-screenshot.png) -You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. 
Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/). +您会注意到,这个查询 URL 必须使用一个唯一的 API 密钥。你可以在[Subgraph Studio](https://thegraph.com/studio)的 "API 密钥" 部分创建和管理你的 API 密钥。在[这里](/deploying/subgraph-studio/)了解更多如何使用Subgraph Studio的信息。 -Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). +Subgraph Studio用户从免费计划开始,每月可以进行100,000次查询。增长计划为额外的查询提供了基于使用量的定价,可通过信用卡支付,或通过Arbitrum上的GRT支付。您可以在[此处](/subgraphs/billing/)了解更多关于计费的信息。 -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> 有关如何查询子图实体的完整参考,请参见[Query API](/subgraphs/querying/graphql-api/) 。 > -> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. +> 注意:如果对Graph Explorer URL的GET请求遇到405错误,请切换到POST请求。 ### 其他资源 -- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/). -- To query from an application, click [here](/subgraphs/querying/from-an-application/). -- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main). +- 使用[GraphQL查询最佳实践](/subgraphs/querying/best-practices/)。 +- 要从应用程序查询,请单击[此处](/subgraphs/querying/from-an-application/)。 +- 查看[查询示例](https://github.com/graphprotocol/query-examples/tree/main)。 diff --git a/website/src/pages/zh/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/zh/subgraphs/querying/managing-api-keys.mdx index 381ed2a67447..536298f71c62 100644 --- a/website/src/pages/zh/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/zh/subgraphs/querying/managing-api-keys.mdx @@ -1,34 +1,34 @@ --- -title: 管理您的 API 密钥 +title: 管理 API 密钥 --- ## 概述 -API keys are needed to query subgraphs. 
They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +查询子图需要API密钥。它们确保应用程序服务之间的连接有效且经过授权,包括对最终用户和使用应用程序的设备进行身份验证。 -### Create and Manage API Keys +### 创建和管理API密钥 -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +转到[Subgraph Studio](https://thegraph.com/studio/)并单击**API密钥**选项卡,为特定子图创建和管理API密钥。 -The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. +“API密钥”表列出了现有的API密钥,并允许您管理或删除它们。对于每个密钥,您可以看到它的状态、当前期间的成本、当前期间的支出限额和查询总数。 -You can click the "three dots" menu to the right of a given API key to: +您可以单击给定API密钥右侧的“三点”菜单来: -- Rename API key -- Regenerate API key -- Delete API key -- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +- 重命名API密钥 +- 重新生成 API 密钥 +- 删除API密钥 +- 管理支出限额:这是一个可选的每月支出限额,用于给定的API密钥,单位为美元。此限额按计费期(日历月)计算。 -### API Key Details +### API密钥详细信息 -You can click on an individual API key to view the Details page: +您可以单击单个API密钥来查看其详细信息页面: -1. Under the **Overview** section, you can: +1. 在**概述**部分,您可以: - 编辑您的密钥名称 - 重新生成 API 密钥 - 使用统计信息查看 API 密钥的当前使用情况: - 查询数 - 花费的 GRT 金额 -2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: +2. 
在**安全**部分,您可以根据想要的控制级别选择进入安全设置。具体来说,您可以: - 查看和管理授权使用您的 API 密钥的域名 - 分配可以使用您的 API 密钥查询的子图 diff --git a/website/src/pages/zh/subgraphs/querying/python.mdx b/website/src/pages/zh/subgraphs/querying/python.mdx index a1372fbf300d..5b56ab2a4fb9 100644 --- a/website/src/pages/zh/subgraphs/querying/python.mdx +++ b/website/src/pages/zh/subgraphs/querying/python.mdx @@ -1,15 +1,15 @@ --- -title: Query The Graph with Python and Subgrounds +title: 使用 Python 和 Subgrounds 查询The Graph sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds是一个用于查询子图的直观的 Python 库,由 [Playgrounds](https://playgrounds.network/)构建。 它允许您直接将子图数据连接到 Python 数据环境,让您使用像 [pandas](https://pandas.pydata.org/)这样的库来进行数据分析! -Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. +Subgrounds提供了一个简单的 Pythonic API,用于构建GraphQL 查询,将分页等繁琐的工作流自动化,并通过受控模式转换增强高级用户的能力。 ## 开始 -Subgrounds requires Python 3.10 or higher and is available on [pypi](https://pypi.org/project/subgrounds/). +Subgrounds需要 Python 3.10或更高版本,可在 [pypi](https://pypi.org/project/subgrounds/)上获取。 ```bash pip install --upgrade subgrounds @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). 
+安装完毕后,您可以通过以下查询测试Subgrounds。 下面的示例获取 Aave v2 协议的子图,查询按 TVL(总锁仓价值)排序的前 5 个市场,选择它们的名称和 TVL(以美元计),并将数据以 pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) 的形式返回。 ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") @@ -41,17 +41,17 @@ sg.query_df([ ]) ``` -## Documentation +## 文档 -Subgrounds is built and maintained by the [Playgrounds](https://playgrounds.network/) team and can be accessed on the [Playgrounds docs](https://docs.playgrounds.network/subgrounds). +Subgrounds是由 [Playgrounds](https://playgrounds.network/)团队构建和维护的,可以在 [Playgrounds文档](https://docs.playgrounds.network/subgrounds)上访问。 -Since subgrounds has a large feature set to explore, here are some helpful starting places: +由于Subgrounds有大量功能可供探索,这里列出一些有用的入门资源: -- [Getting Started with Querying](https://docs.playgrounds.network/subgrounds/getting_started/basics/) - - A good first step for how to build queries with subgrounds. -- [Building Synthetic Fields](https://docs.playgrounds.network/subgrounds/getting_started/synthetic_fields/) - - A gentle introduction to defining synthetic fields that transform data defined from the schema. - [Concurrent Queries](https://docs.playgrounds.network/subgrounds/getting_started/async/) - - Learn how to level up your queries by parallelizing them. - [Exporting Data to CSVs](https://docs.playgrounds.network/subgrounds/faq/exporting/) - - A quick article on how to seamlessly save your data as CSVs for further analysis. 
+- [开始查询](https://docs.playgrounds.network/subgrounds/getting_started/basics/) + - 学习如何用Subgrounds构建查询的良好第一步。 +- [构建合成字段](https://docs.playgrounds.network/subgrounds/getting_started/synthetic_fields/) + - 简要介绍如何定义合成字段来转换从模式中获取的数据。 +- [并发查询](https://docs.playgrounds.network/subgrounds/getting_started/async/) + - 学习如何通过并行化来提升您的查询。 +- [导出数据到CSV](https://docs.playgrounds.network/subgrounds/faq/exporting/) + - 一篇关于如何将数据无缝保存为 CSV 以便进一步分析的短文。 diff --git a/website/src/pages/zh/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/zh/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..c715990d980a 100644 --- a/website/src/pages/zh/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/zh/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -1,27 +1,27 @@ --- -title: Subgraph ID vs Deployment ID +title: 子图 ID vs 部署 ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +子图由子图ID标识,子图的每个版本由部署ID标识。 -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +查询子图时,两种ID都可以使用,但通常建议使用部署ID,因为它能够指定子图的特定版本。 -Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) +以下是这两个ID之间的一些关键区别: ![](/img/subgraph-id-vs-deployment-id.png) -## Deployment ID +## 部署 ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. 
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +部署ID是编译后的清单文件的 IPFS 哈希值,该文件引用IPFS上的其他文件,而不是计算机上的相对URL。 例如,编译后的清单可以通过以下地址访问:`https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`。 要更改部署ID,只需更新清单文件,例如按照[子图清单文档](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api)中所述修改描述字段。 -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +当使用子图的部署ID进行查询时,我们指定了要查询的子图版本。 使用部署ID查询特定的子图版本可以获得更精细、更稳健的设置,因为您可以完全控制所查询的子图版本。 然而,这导致每次发布新版子图时,都需要手动更新查询代码。 -Example endpoint that uses Deployment ID: +使用部署ID的示例端点: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB` -## Subgraph ID +## 子图 ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +子图ID是子图的唯一标识符,它在子图的所有版本中保持不变。 建议使用子图ID查询子图的最新版本,但有一些注意事项。 -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. 
+请注意,由于新版本需要时间同步,使用子图ID查询可能会导致由旧版本的子图响应查询。 此外,新版本可能会引入破坏性的模式更改。 -Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` +使用子图ID的示例端点: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/zh/subgraphs/quick-start.mdx b/website/src/pages/zh/subgraphs/quick-start.mdx index 3ad430005cff..01a0c5cbcfc5 100644 --- a/website/src/pages/zh/subgraphs/quick-start.mdx +++ b/website/src/pages/zh/subgraphs/quick-start.mdx @@ -2,30 +2,30 @@ title: 快速开始 --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +学习如何轻松地构建、发布和查询The Graph上的 [子图](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph)。 -## Prerequisites +## 先决条件 - 一个加密钱包 -- A smart contract address on a [supported network](/supported-networks/) -- [Node.js](https://nodejs.org/) installed -- A package manager of your choice (`npm`, `yarn` or `pnpm`) +- 一个[支持网络](/supported-networks/)上的智能合约地址 +- 已安装 [Node.js](https://nodejs.org/) +- 您选择的软件包管理器 (`npm`, `yarn` 或 `pnpm`) -## How to Build a Subgraph +## 如何构建子图 -### 1. Create a subgraph in Subgraph Studio +### 1. 在子图工作室中创建子图 -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +进入[Subgraph Studio](https://thegraph.com/studio/)并连接你的钱包。 -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +子图工作室可以让您创建、管理、部署和发布子图,以及创建和管理 API 密钥。 -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +点击“创建子图”。建议使用标题大小写(Title Case)为子图命名:“Subgraph Name Chain Name”。 ### 2. 
安装 Graph CLI 在本地计算机上,运行以下命令之一: -Using [npm](https://www.npmjs.com/): +使用[npm](https://www.npmjs.com/): ```sh npm install -g @graphprotocol/graph-cli@latest @@ -37,54 +37,54 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. 初始化子图 > 您可以在[Subgraph Studio](https://thegraph.com/studio/)的子图页面找到针对您特定子图的命令。 -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +`graph init`命令将根据你的合约事件自动创建一个子图脚手架。 -The following command initializes your subgraph from an existing contract: +以下命令从现有合约初始化你的子图: ```sh graph init ``` -If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. +如果您的合约已在其部署所在的相应区块浏览器(例如 [Etherscan](https://etherscan.io/))上进行了验证,那么 ABI 将自动在 CLI 中创建。 -When you initialize your subgraph, the CLI will ask you for the following information: +初始化子图时,CLI工具会要求您提供以下信息: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. -- **Contract address**: Locate the smart contract address you’d like to query data from. -- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. -- **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. 
-- **Add another contract** (optional): You can add another contract. +- **协议**:选择子图将要索引数据的协议。 +- **子图slug**: 为你的子图创建一个名称。子图slug是你的子图的标识符。 +- **目录**:选择一个目录来创建你的子图。 +- **以太坊网络**(可选):您可能需要指定您的子图将从哪个EVM兼容网络索引数据。 +- **合约地址**:找到要查询数据的智能合约地址。 +- **ABI**:如果ABI没有自动填充,您需要以JSON文件的形式手动输入。 +- **起始区块**:您应该输入起始区块以优化区块链数据的子图索引。 通过找到您的合约部署所在的区块来定位起始区块。 +- **合约名称**:输入合约名称。 +- **将合约事件作为实体索引**:建议您将其设置为 true,因为它会自动为每个发出的事件向你的子图添加映射。 +- **添加其他合约**(可选):您可以添加其他合约。 请参阅下面的屏幕截图,以获取初始化子图时所需的示例: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. 编辑子图 -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +上一步的 `init` 命令创建了一个脚手架子图,你可以将其用作构建子图的起点。 -When making changes to the subgraph, you will mainly work with three files: +在对子图进行修改时,你将主要与三个文件一起工作: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. +- 清单(`subgraph.yaml`)--该清单定义了你的子图将索引哪些数据源。 +- 模式(`schema.graphql`)--定义你希望从子图中检索到的数据。 +- AssemblyScript 映射(`mapping.ts`)--将数据源中的数据转换为模式中定义的实体的代码。 -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +想了解更多如何编写子图的信息,请参阅[创建子图](/developing/creating-a-subgraph/)。 -### 5. Deploy your subgraph +### 5. 部署子图 -> Remember, deploying is not the same as publishing. +> 记住,部署与发布不同。 -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. 
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +当你**部署**一个子图时,你将它推送到[Subgraph Studio](https://thegraph.com/studio/),在那里你可以测试、暂存和审查它。 已部署的子图的索引由[升级索引人](https://thegraph.com/blog/upgrade-indexer/)执行,这是一个由Edge & Node拥有和运营的单一索引人,而不是由The Graph网络中众多去中心化的索引人执行。 **已部署**的子图可以免费使用,但有速率限制,不对公众可见,仅用于开发、暂存和测试。 一旦您的子图被编写好,请运行以下命令: @@ -94,9 +94,9 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +认证并部署子图。部署密钥可以在子图工作室的子图页面上找到。 -![Deploy key](/img/subgraph-studio-deploy-key.jpg) +![部署密钥](/img/subgraph-studio-deploy-key.jpg) ```` ```sh @@ -107,43 +107,43 @@ graph deploy ``` ```` -The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. +CLI 将要求输入一个版本标签。强烈建议使用 [语义版本](https://semver.org/),例如`0.0.1`。 -### 6. Review your subgraph +### 6. 审查子图 -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +如果你想在发布之前测试你的子图,你可以使用 [Subgraph Studio](https://thegraph.com/studio/) 来执行以下操作: -- Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- 运行一个示例查询。 +- 在仪表盘中分析您的子图以检查信息。 +- 检查仪表盘上的日志,以查看您的子图是否有任何错误。 正常运行的子图的日志如下所示: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. 将你的子图发布到The Graph网络 -When your subgraph is ready for a production environment, you can publish it to the decentralized network. 
Publishing is an onchain action that does the following: +当你的子图准备好用于生产环境时,你可以将它发布到去中心化网络。 发布是一种链上操作,它会执行以下工作: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- 使您的子图可以被The Graph网络上去中心化的[索引人](/indexing/overview/)索引。 +- 取消速率限制,使你的子图可以公开搜索并可以在 [Graph Explorer](https://thegraph.com/explorer/) 中查询。 +- 使您的子图可供 [策展人](/resources/roles/curating/)进行策展。 -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> 您和其他人在您的子图上策展的GRT数量越多,索引人被激励来索引您的子图的动力就越大,从而提高服务质量,降低延迟,并增强子图的网络冗余。 -#### Publishing with Subgraph Studio +#### 使用子图工作室发布 -To publish your subgraph, click the Publish button in the dashboard. +要发布您的子图,请单击仪表盘中的发布按钮。 -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +选择您想要发布子图的网络。 -#### Publishing from the CLI +#### 从 CLI 发布 -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +从版本 0.73.0 起,您也可以使用 Graph CLI 发布您的子图。 -Open the `graph-cli`. +打开 `graph-cli`。 -Use the following commands: +使用以下命令: ```` ```sh @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. 一个窗口将打开,允许您连接您的钱包,添加元数据,并将您的最终子图部署到您选择的网络。 ![cli-ui](/img/cli-ui.png) -To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). 
+要自定义您的部署,请查阅[发布子图](/subgraphs/developing/publishing/publishing-a-subgraph/)。 -#### Adding signal to your subgraph +#### 将信号添加到您的子图 -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. 为了吸引索引人查询您的子图,您应该添加 GRT 策展信号。 - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - 此操作可以提高服务质量,减少延迟,提高网络冗余性和子图的可用性。 -2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. +2. 如果符合索引奖励资格,索引人将根据信号金额获得GRT奖励。 - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - 建议至少策展 3,000 GRT 来吸引 3 个索引人。请根据子图功能的使用情况和支持的网络检查奖励资格。 -To learn more about curation, read [Curating](/resources/roles/curating/). +要了解更多关于策展的信息,请阅读[策展](/resources/roles/curating/)。 -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +为了节省燃气费,您可以选择此选项,在发布子图的同一笔交易中进行策展: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. 查询子图 -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +您现在每月可以对您在 The Graph 网络上的子图进行 100,000 次免费查询! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +您可以通过将GraphQL查询发送到子图的查询URL来查询子图;单击查询按钮即可找到该URL。 -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
+有关从您的子图查询数据的更多信息,请阅读[查询 The Graph](/subgraphs/querying/introduction/)。 diff --git a/website/src/pages/zh/substreams/_meta-titles.json b/website/src/pages/zh/substreams/_meta-titles.json index 6262ad528c3a..91b4b56a38e6 100644 --- a/website/src/pages/zh/substreams/_meta-titles.json +++ b/website/src/pages/zh/substreams/_meta-titles.json @@ -1,3 +1,3 @@ { - "developing": "Developing" + "developing": "开发" } diff --git a/website/src/pages/zh/substreams/developing/_meta-titles.json b/website/src/pages/zh/substreams/developing/_meta-titles.json index 882ee9fc7c9c..cb35e8b3529e 100644 --- a/website/src/pages/zh/substreams/developing/_meta-titles.json +++ b/website/src/pages/zh/substreams/developing/_meta-titles.json @@ -1,4 +1,4 @@ { "solana": "Solana", - "sinks": "Sink your Substreams" + "sinks": "为您的子流设置汇" } diff --git a/website/src/pages/zh/substreams/developing/dev-container.mdx b/website/src/pages/zh/substreams/developing/dev-container.mdx index bd4acf16eec7..92c9e28fa7bc 100644 --- a/website/src/pages/zh/substreams/developing/dev-container.mdx +++ b/website/src/pages/zh/substreams/developing/dev-container.mdx @@ -1,48 +1,48 @@ --- -title: Substreams Dev Container -sidebarTitle: Dev Container +title: 子流开发容器 +sidebarTitle: 开发容器 --- -Develop your first project with Substreams Dev Container. +用子流开发容器开发您的第一个项目。 -## What is a Dev Container? +## 什么是开发容器? -It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). +这是一个帮助您构建第一个项目的工具。 您可以通过 GitHub Codespaces 远程运行它,也可以在本地克隆 [substreams starter 仓库](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file) 来运行。 -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling.
+在开发容器中,`substreams init` 命令会设置一个代码生成的子流项目,允许您为数据处理轻松构建子图或基于 SQL 的解决方案。 -## Prerequisites +## 先决条件 -- Ensure Docker and VS Code are up-to-date. +- 确保 Docker 和 VS Code 是最新版本。 -## Navigating the Dev Container +## 导航开发容器 -In the Dev Container, you can either build or import your own `substreams.yaml` and associate modules within the minimal path or opt for the automatically generated Substreams paths. Then, when you run the `Substreams Build` it will generate the Protobuf files. +在开发容器中,您可以在最小路径内构建或导入您自己的 `substreams.yaml` 及关联模块,或选择自动生成的子流路径。 然后,当你运行 `Substreams Build` 时,它将生成 Protobuf 文件。 -### Options +### 选项 -- **Minimal**: Starts you with the raw block `.proto` and requires development. This path is intended for experienced users. -- **Non-Minimal**: Extracts filtered data using network-specific caches and Protobufs taken from corresponding foundational modules (maintained by the StreamingFast team). This path generates a working Substreams out of the box. +- **Minimal**:从原始区块 `.proto` 开始,需要自行开发。此路径是为有经验的用户设计的。 +- **Non-Minimal**:使用网络特定的缓存以及取自相应基础模块(由 StreamingFast 团队维护)的 Protobuf 来提取经过过滤的数据。 此路径可开箱即用地生成一个可运行的子流。 -To share your work with the broader community, publish your `.spkg` to [Substreams registry](https://substreams.dev/) using: +要与更广泛的社区分享您的工作,请使用以下命令将您的 `.spkg` 发布到 [Substreams registry](https://substreams.dev/): -- `substreams registry login` -- `substreams registry publish` +- `substreams registry login` +- `substreams registry publish` -> Note: If you run into any problems within the Dev Container, use the `help` command to access trouble shooting tools. +> 注意:如果你在开发容器中遇到任何问题,请使用 `help` 命令访问故障排除工具。 -## Building a Sink for Your Project +## 为您的项目构建一个汇 -You can configure your project to query data either through a Subgraph or directly from an SQL database: +您可以配置您的项目,通过子图或直接从 SQL 数据库查询数据: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams.
For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). -- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +- **Subgraph**: 运行 `substreams codegen subgraph`。这将生成一个带有基本 `schema.graphql` 和 `mappings.ts` 文件的项目。 您可以自定义这些文件,根据子流提取的数据来定义实体。关于更多配置,请参阅[Subgraph sink 文档](https://docs.substreams.dev/how-to-guides/sinks/subgraph)。 +- **SQL**: 运行 `substreams codegen sql` 以进行基于 SQL 的查询。欲了解更多配置 SQL sink 的信息,请参阅[SQL 文档](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)。 -## Deployment Options +## 部署选项 -To deploy a Subgraph, you can either run the `graph-node` locally using the `deploy-local` command or deploy to Subgraph Studio by using the `deploy` command found in the `package.json` file. +要部署子图,您可以使用 `deploy-local` 命令在本地运行 `graph-node`,也可以使用在 `package.json` 文件中找到的 `deploy` 命令部署到 Subgraph Studio。 -## Common Errors +## 常见错误 -- When running locally, make sure to verify that all Docker containers are healthy by running the `dev-status` command. -- If you put the wrong start-block while generating your project, navigate to the `substreams.yaml` to change the block number, then re-run `substreams build`. +- 在本地运行时,请确保通过运行 `dev-status` 命令来验证所有 Docker 容器是否健康。 +- 如果你在生成项目时设置了错误的起始区块,请导航到 `substreams.yaml` 更改区块号,然后重新运行 `substreams build`。 diff --git a/website/src/pages/zh/substreams/developing/sinks.mdx b/website/src/pages/zh/substreams/developing/sinks.mdx index edab5713fb0b..fcbc54d84b27 100644 --- a/website/src/pages/zh/substreams/developing/sinks.mdx +++ b/website/src/pages/zh/substreams/developing/sinks.mdx @@ -1,32 +1,32 @@ --- -title: Official Sinks +title: 官方汇 --- -Choose a sink that meets your project's needs. +选择一个满足您项目需要的汇。 ## 概述 -Once you find a package that fits your needs, you can choose how you want to consume the data.
+一旦找到符合您需要的包,您可以选择如何使用数据。 -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +汇是一种集成,允许您将提取的数据发送到不同的目的地,例如 SQL 数据库、文件或子图。 -## Sinks +## 汇 -> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed. +> 注:部分汇由 StreamingFast 核心开发团队官方支持(即提供积极支持),但其他汇由社区驱动,无法保证支持。 -- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. -- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. -- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. -- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks. +- [SQL 数据库](https://docs.substreams.dev/how-to-guides/sinks/sql-sink):发送数据到数据库。 +- [Subgraph](/sps/introduction/):配置一个 API 以满足您的数据需求,并将其托管在 The Graph 网络上。 +- [直接流](https://docs.substreams.dev/how-to-guides/sinks/stream):直接从您的应用程序流式传输数据。 +- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub):将数据发送到一个 PubSub 主题中。 +- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks):探索优质的社区维护的汇。 -> Important: If you’d like your sink (e.g., SQL or PubSub) hosted for you, reach out to the StreamingFast team [here](mailto:sales@streamingfast.io).
+> 重要:如果您希望由他人为您托管您的汇(例如 SQL 或 PubSub),请通过[此处](mailto:sales@streamingfast.io)联系 StreamingFast 团队。 -## Navigating Sink Repos +## 浏览汇仓库 -### Official +### 官方 -| Name | Support | Maintainer | Source Code | +| 名称 | 支持 | 维护人员 | 源代码 | | --- | --- | --- | --- | | SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | | Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | @@ -38,14 +38,14 @@ Sinks are integrations that allow you to send the extracted data to different de | CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | | PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | -### Community +### 社区 -| Name | Support | Maintainer | Source Code | +| 名称 | 支持 | 维护人员 | 源代码 | | --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| MongoDB | C | 社区 | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | 社区 | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | 社区 | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | 社区 | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -- O = Official Support (by one of the main Substreams providers) -- C = Community Support +- O = 官方支持(由主要的子流提供商之一提供) +- C = 社区支持 diff --git a/website/src/pages/zh/substreams/developing/solana/account-changes.mdx
b/website/src/pages/zh/substreams/developing/solana/account-changes.mdx index 05e0a3b5d659..cc0f9ce1bade 100644 --- a/website/src/pages/zh/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/zh/substreams/developing/solana/account-changes.mdx @@ -1,57 +1,57 @@ --- -title: Solana Account Changes -sidebarTitle: Account Changes +title: Solana帐户更改 +sidebarTitle: 帐户更改 --- -Learn how to consume Solana account change data using Substreams. +了解如何使用 Substreams 消费 Solana 帐户更改数据。 ## 介绍 -This guide walks you through the process of setting up your environment, configuring your first Substreams stream, and consuming account changes efficiently. By the end of this guide, you will have a working Substreams feed that allows you to track real-time account changes on the Solana blockchain, as well as historical account change data. +本指南将引导您完成设置环境、配置第一个子流和高效使用帐户更改的过程。在本指南结束时,您将有一个可用的Substreams提要,允许您跟踪Solana区块链上的实时帐户更改以及历史帐户更改数据。 -> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. +> 注意:Solana 帐户更改的历史数据始于 2025 年,区块 310629601。 -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+对于每个Substreams Solana帐户块,只记录每个帐户的最新更新,请参阅[Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto)。如果删除了帐户,则会提供`deleted == True`的有效载荷。此外,忽略了不太重要的事件,例如具有特殊所有者“Vote11111111…”帐户的事件或不影响帐户数据的更改(例如:lamport更改)。 -> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. +> 注意:要测试Solana帐户的Substreams延迟(以块头漂移衡量),请安装[Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli),并运行 `substreams run solana-common blocks_without_votes -s -1 -o clock`。 ## 开始 -### Prerequisites +### 先决条件 -Before you begin, ensure that you have the following: +在开始之前,请确保您拥有以下内容: -1. [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installed. -2. A [Substreams key](https://docs.substreams.dev/reference-material/substreams-cli/authentication) for access to the Solana Account Change data. -3. Basic knowledge of [how to use](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) the command line interface (CLI). +1. 已安装[Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) 。 +2. 用于访问Solana帐户更改数据的[Substreams密钥](https://docs.substreams.dev/reference-material/substreams-cli/authentication)。 +3. 了解[如何使用](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface)命令行界面(CLI)。 -### Step 1: Set Up a Connection to Solana Account Change Substreams +### 步骤1:设置与Solana帐户更改子流的连接 -Now that you have Substreams CLI installed, you can set up a connection to the Solana Account Change Substreams feed.
+现在您已经安装了Substreams CLI,可以设置与Solana Account Change Substreams提要的连接。 -- Using the [Solana Accounts Foundational Module](https://substreams.dev/packages/solana-accounts-foundational/latest), you can choose to stream data directly or use the GUI for a more visual experience. The following `gui` example filters for Honey Token account data. +- 使用[Solana Accounts Foundational 模块](https://substreams.dev/packages/solana-accounts-foundational/latest),您可以选择直接流式传输数据或使用GUI以获得更直观的体验。以下`gui`示例过滤Honey Token帐户数据。 ```bash substreams gui solana-accounts-foundational filtered_accounts -t +10 -p filtered_accounts="owner:TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA || account:4vMsoUT2BWatFweudnQM1xedRLfJgJ7hswhcpz4xgBTy" ``` -- This command will stream account changes directly to your terminal. +- 此命令将直接将帐户更改流式传输到您的终端。 ```bash substreams run solana-accounts-foundational filtered_accounts -s -1 -o clock ``` -The Foundational Module has support for filtering on specific accounts and/or owners. You can adjust the query based on your needs. +基础模块支持对特定帐户和/或所有者进行筛选。您可以根据需要调整查询。 -### Step 2: Sink the Substreams +### 步骤2:汇出子流数据 -Consume the account stream [directly in your application](https://docs.substreams.dev/how-to-guides/sinks/stream) using a callback or make it queryable by using the [SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +使用回调[直接在应用程序](https://docs.substreams.dev/how-to-guides/sinks/stream) 中使用帐户流,或者使用[SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)使其可查询。 -### Step 3: Setting up a Reconnection Policy +### 步骤3:设置重新连接策略 -[Cursor Management](https://docs.substreams.dev/reference-material/reliability-guarantees) ensures seamless continuity and retraceability by allowing you to resume from the last consumed block if the connection is interrupted. This functionality prevents data loss and maintains a persistent stream.
+[游标管理](https://docs.substreams.dev/reference-material/reliability-guarantees)允许您在连接中断时从最后一个消耗的块恢复,从而确保无缝的连续性和可追溯性。此功能可防止数据丢失并保持持久流。 -When creating or using a sink, the user's primary responsibility is to provide implementations of BlockScopedDataHandler and a BlockUndoSignalHandler implementation(s) which has the following interface: +在创建或使用接收器时,用户的主要责任是提供 BlockScopedDataHandler 和 BlockUndoSignalHandler 的实现,这些实现具有以下接口: ```go import ( diff --git a/website/src/pages/zh/substreams/developing/solana/transactions.mdx b/website/src/pages/zh/substreams/developing/solana/transactions.mdx index e3992e32eb99..ee1e0cef0915 100644 --- a/website/src/pages/zh/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/zh/substreams/developing/solana/transactions.mdx @@ -1,61 +1,61 @@ --- -title: Solana Transactions -sidebarTitle: Transactions +title: Solana交易 +sidebarTitle: 交易 --- -Learn how to initialize a Solana-based Substreams project within the Dev Container. +了解如何在开发容器中初始化基于Solana的Substreams项目。 -> Note: This guide excludes [Account Changes](/substreams/developing/solana/account-changes/). +> 注意:本指南不包括[帐户更改](/substreams/developing/solana/account-changes/)。 -## Options +## 选项 -If you prefer to begin locally within your terminal rather than through the Dev Container (VS Code required), refer to the [Substreams CLI installation guide](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). +如果您更喜欢在终端内本地开始,而不是通过开发容器(需要 VS Code),请参阅[Substreams CLI安装指南](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli)。 -## Step 1: Initialize Your Solana Substreams Project +## 第 1 步:初始化您的 Solana 子流项目 -1. Open the [Dev Container](https://github.com/streamingfast/substreams-starter) and follow the on-screen steps to initialize your project. +1. 打开 [Dev容器](https://github.com/streamingfast/substreams-starter) 并按屏幕步骤初始化您的项目。 -2. Running `substreams init` will give you the option to choose between two Solana project options.
Select the best option for your project: - - **sol-minimal**: This creates a simple Substreams that extracts raw Solana block data and generates corresponding Rust code. This path will start you with the full raw block, and you can navigate to the `substreams.yaml` (the manifest) to modify the input. - - **sol-transactions**: This creates a Substreams that filters Solana transactions based on one or more Program IDs and/or Account IDs, using the cached [Solana Foundational Module](https://substreams.dev/streamingfast/solana-common/v0.3.0). - - **sol-anchor-beta**: This creates a Substreams that decodes instructions and events with an Anchor IDL. If an IDL isn’t available (reference [Anchor CLI](https://www.anchor-lang.com/docs/cli)), then you’ll need to provide it yourself. +2. 运行 `substreams init` 可以让您在多个 Solana 项目选项之间进行选择。选择最适合您项目的选项: + - **sol-minimal**:这将创建一个简单的子流,用于提取原始Solana区块数据并生成相应的 Rust 代码。 此路径将以完整的原始区块启动,您可以导航到 `substreams.yaml` (manifest) 来修改输入。 + - **sol-transactions**:这将创建一个Substreams,使用缓存的[Solana基础模块](https://substreams.dev/streamingfast/solana-common/v0.3.0)根据一个或多个程序ID和/或帐户ID过滤Solana交易。 + - **sol-anchor-beta**:这将创建一个用Anchor IDL解码指令和事件的子流。 如果没有可用的 IDL(参考 [Anchor CLI](https://www.anchor-lang.com/docs/cli)),那么你将需要自己提供。 -The modules within Solana Common do not include voting transactions. To gain a 75% reduction in data processing size and costs, delay your stream by over 1000 blocks from the head. This can be done using the [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) function in Rust. +Solana Common中的模块不包括投票交易。为了将数据处理大小和成本降低75%,请将数据流从链头延迟1000多个区块。这可以使用 Rust 中的 [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) 函数来完成。 -To access voting transactions, use the full Solana block, `sf.solana.type.v1.Block`, as input. +要访问投票交易,请使用完整的 Solana 区块 `sf.solana.type.v1.Block` 作为输入。 -## Step 2: Visualize the Data +## 第 2 步:可视化数据 -1.
Run `substreams auth` to create your [account](https://thegraph.market/) and generate an authentication token (JWT), then pass this token back as input. +1. 运行 `substreams auth` 以创建您的[帐户](https://thegraph.market/)并生成一个身份验证代币(JWT),然后将该代币作为输入传递回去。 -2. Now you can freely use the `substreams gui` to visualize and iterate on your extracted data. +2. 现在你可以自由使用 `substreams gui` 来在你已提取的数据上进行可视化和迭代。 -## Step 2.5: (Optionally) Transform the Data +## 步骤2.5:(可选) 转换数据 -Within the generated directories, modify your Substreams modules to include additional filters, aggregations, and transformations, then update the manifest accordingly. +在生成的目录中,修改您的子流模块以包含额外的过滤、聚合和转换,然后相应地更新清单。 -## Step 3: Load the Data +## 第 3 步:加载数据 -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +要使您的子流可以查询(相对于[直接流](https://docs.substreams.dev/how-to-guides/sinks/stream)),您可以自动生成[子流驱动子图](/sps/introduction/) 或 SQL-DB 接收器。 ### 子图 -1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. -3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. +1. 运行 `substreams codegen subgraph` 以初始化接收器,生成必要的文件和函数定义。 +2. 在 `mappings.ts` 中创建你的 [Subgraph 映射](/sps/triggers/) 以及`schema.graphql` 中的相关实体。 +3. 通过运行 `deploy-studio`,在本地构建和部署,或部署到 [Subgraph Studio](https://thegraph.com/studio-pricing/)。 ### SQL -1. Run `substreams codegen sql` and choose from either ClickHouse or Postgres to initialize the sink, producing the necessary files. -2. Run `substreams build` build the [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) sink. -3. Run `substreams-sink-sql` to sink the data into your selected SQL DB.
+1. 运行 `substreams codegen sql` 并从ClickHouse或 Postgres 中选择以初始化接收器,生成必要的文件。 +2. 运行 `substreams build` 生成[Substreams SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) 接收器。 +3. 运行 `substreams-sink-sql` 将数据存储进您选中的 SQL DB。 -> Note: Run `help` to better navigate the development environment and check the health of containers. +> 注意:运行 `help` 以更好地导航开发环境并检查容器的健康状况。 ## 其他资源 -You may find these additional resources helpful for developing your first Solana application. +您可能会发现这些额外资源有助于开发您的第一个Solana应用程序。 -- The [Dev Container Reference](/substreams/developing/dev-container/) helps you navigate the container and its common errors. -- The [CLI reference](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) lets you explore all the tools available in the Substreams CLI. -- The [Components Reference](https://docs.substreams.dev/reference-material/substreams-components/packages) dives deeper into navigating the `substreams.yaml`. +- [开发容器参考](/substreams/developing/dev-container/)帮助您导航容器及其常见错误。 +- [CLI 参考](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface)允许您探索 Substreams CLI 中所有可用的工具。 +- [组件参考](https://docs.substreams.dev/reference-material/substreams-components/packages) 深入介绍如何使用 `substreams.yaml`。 diff --git a/website/src/pages/zh/substreams/introduction.mdx b/website/src/pages/zh/substreams/introduction.mdx index 8aecb413c049..c0933e723bd9 100644 --- a/website/src/pages/zh/substreams/introduction.mdx +++ b/website/src/pages/zh/substreams/introduction.mdx @@ -1,26 +1,26 @@ --- -title: Introduction to Substreams +title: 子流介绍 sidebarTitle: 介绍 --- ![Substreams Logo](/img/substreams-logo.png) -To start coding right away, check out the [Substreams Quick Start](/substreams/quick-start/). +要立即开始编码,请查看[子流快速入门](/substreams/quick-start/)。 ## 概述 -Substreams is a powerful parallel blockchain indexing technology designed to enhance performance and scalability within The Graph Network.
+子流是一种强大的并行区块链索引技术,旨在提高 The Graph 网络内的性能和可扩展性。 -## Substreams Benefits +## 子流的优势 -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. -- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. -- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. -- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. +- **加速索引**:通过并行化引擎缩短子图索引时间,实现更快的数据检索和处理。 +- **多链支持**:将索引能力扩展到基于EVM的链之外,支持Solana、 Injective、 Starknet和Vara等生态系统。 +- **增强数据模型**:访问全面数据,包括EVM上的 `trace` 级数据或Solana 上的帐户变动,同时有效处理分叉/断连。 +- **多汇支持:** 用于Subgraph、Postgres数据库、Clickhouse和Mongo数据库。 ## 子流的工作原理分为四个步骤 -1. You write a Rust program, which defines the transformations that you want to apply to the blockchain data. For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash). +1. 您编写一个Rust程序,定义要应用于区块链数据的转换操作。例如,以下的Rust函数从以太坊区块中提取相关信息(区块号、哈希和父哈希)。 ```rust fn get_my_block(blk: Block) -> Result { @@ -34,12 +34,12 @@ fn get_my_block(blk: Block) -> Result { } ``` -2. You wrap up your Rust program into a WASM module just by running a single CLI command. +2. 您只需运行一个CLI命令,就可以将您的Rust程序打包成一个WASM模块。 -3. The WASM container is sent to a Substreams endpoint for execution. The Substreams provider feeds the WASM container with the blockchain data and the transformations are applied. +3. WASM容器被发送到Substreams端点执行。 Substreams提供商将区块链数据传送给WASM容器,然后执行转换操作。 -4. You select a [sink](https://docs.substreams.dev/how-to-guides/sinks), a place where you want to send the transformed data (such as a SQL database or a Subgraph). +4.
您选择一个 [sink](https://docs.substreams.dev/how-to-guides/sinks),即您想要将转换后的数据发送到的地方(如 SQL 数据库或子图)。 ## 其他资源 -All Substreams developer documentation is maintained by the StreamingFast core development team on the [Substreams registry](https://docs.substreams.dev). +所有Substreams开发人员文档均由StreamingFast核心开发团队在[Substreams 注册表](https://docs.substreams.dev)上维护。 diff --git a/website/src/pages/zh/substreams/publishing.mdx b/website/src/pages/zh/substreams/publishing.mdx index 4ca12786aeb5..e36540115bc5 100644 --- a/website/src/pages/zh/substreams/publishing.mdx +++ b/website/src/pages/zh/substreams/publishing.mdx @@ -1,53 +1,53 @@ --- -title: Publishing a Substreams Package -sidebarTitle: Publishing +title: 发布子流包 +sidebarTitle: 发布 --- -Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). +学习如何将子流包发布到[子流注册表](https://substreams.dev)。 ## 概述 -### What is a package? +### 什么是包? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +子流包是一个预编译的二进制文件,定义你想要从区块链中提取的特定数据,类似于传统子图中的`mapping.ts`文件。 -## Publish a Package +## 发布包 -### Prerequisites +### 先决条件 -- You must have the Substreams CLI installed. -- You must have a Substreams package (`.spkg`) that you want to publish. +- 您必须安装子流 CLI 。 +- 您必须有一个您想要发布的子流包(`.spkg`)。 -### Step 1: Run the `substreams publish` Command +### 第 1 步:运行 `substreams publish` 命令 -1. In a command-line terminal, run `substreams publish .spkg`. +1. 在命令行终端中,运行 `substreams publish .spkg`。 -2. If you do not have a token set in your computer, navigate to `https://substreams.dev/me`. +2. 如果您的电脑没有设置代币,请导航到 `https://substreams.dev/me` 。 ![get token](/img/1_get-token.png) -### Step 2: Get a Token in the Substreams Registry +### 第 2 步:获取子流注册表中的代币 -1. In the Substreams Registry, log in with your GitHub account. +1. 在子流注册表中,使用您的 GitHub 帐户登录。 -2. Create a new token and copy it in a safe location.
创建一个新的代币,并将其复制到安全的位置。 ![new token](/img/2_new_token.png) -### Step 3: Authenticate in the Substreams CLI +### 第 3 步:在子流 CLI 中进行身份验证 -1. Back in the Substreams CLI, paste the previously generated token. +1. 回到子流 CLI 中,粘贴之前生成的代币。 ![paste token](/img/3_paste_token.png) -2. Lastly, confirm that you want to publish the package. +2. 最后,确认您想要发布这个包。 ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +就是这样!您已经成功地在子流注册表中发布了一个包。 ![success](/img/5_success.png) ## 其他资源 -Visit [Substreams](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. +访问 [Substreams](https://substreams.dev/) 来探索各个区块链网络上越来越多的现成子流软件包。 diff --git a/website/src/pages/zh/substreams/quick-start.mdx b/website/src/pages/zh/substreams/quick-start.mdx index 76f738c2752e..be4fc19302f3 100644 --- a/website/src/pages/zh/substreams/quick-start.mdx +++ b/website/src/pages/zh/substreams/quick-start.mdx @@ -1,30 +1,30 @@ --- -title: Substreams Quick Start +title: 子流快速入门 sidebarTitle: 快速开始 --- -Discover how to utilize ready-to-use substream packages or develop your own. +探索如何使用现成的子流软件包或开发您自己的软件包。 ## 概述 -Integrating Substreams can be quick and easy. They are permissionless, and you can [obtain a key here](https://thegraph.market/) without providing personal information to start streaming on-chain data. +集成子流可以快速而轻松。 子流是无需许可的,您可以[在这里获取密钥](https://thegraph.market/),无需提供个人信息即可开始流式传输链上数据。 -## Start Building +## 开始构建 -### Use Substreams Packages +### 使用子流包 -There are many ready-to-use Substreams packages available. You can explore these packages by visiting the [Substreams Registry](https://substreams.dev) and [sinking them](/substreams/developing/sinks/). The registry lets you search for and find any package that meets your needs.
+有许多现成的子流包可供使用。您可以通过访问 [子流注册表](https://substreams.dev) 并 [为其设置汇](/substreams/developing/sinks/)来探索这些软件包。 注册表允许您搜索并找到满足您需要的任何包。 -Once you find a package that fits your needs, you can choose how you want to consume the data: +一旦找到符合您需要的包,您可以选择如何使用数据: -- **[Subgraph](/sps/introduction/)**: Configure an API to meet your data needs and host it on The Graph Network. -- **[SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Send the data to a database. -- **[Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Stream data directly to your application. -- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Send data to a PubSub topic. +- **[Subgraph](/sps/introduction/)**: 配置一个 API 以满足您的数据需要并将其托管在The Graph网络。 +- **[SQL 数据库](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**:发送数据到数据库。 +- **[直接流](https://docs.substreams.dev/how-to-guides/sinks/stream)**:将数据直接流式传输到您的应用程序。 +- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**:将数据发送到一个PubSub主题中。 -### Develop Your Own +### 开发您自己的包 -If you can't find a Substreams package that meets your specific needs, you can develop your own. Substreams are built with Rust, so you'll write functions that extract and filter the data you need from the blockchain. To get started, check out the following tutorials: +如果您找不到满足您特定需要的子流包,您可以开发自己的包。 子流是用 Rust 构建的,因此您将编写函数来从区块链中提取和过滤您需要的数据。 要开始,请参阅以下教程: - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Solana](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-solana) @@ -32,11 +32,11 @@ If you can't find a Substreams package that meets your specific needs, you can d - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) -To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/).
+若要从零构建和优化您的子流,请使用 [Dev容器](/substreams/developing/dev-container/)中的最小路径。 -> Note: Substreams guarantees that you'll [never miss data](https://docs.substreams.dev/reference-material/reliability-guarantees) with a simple reconnection policy. +> 注意:通过简单的重新连接策略,Substreams 保证您[永远不会错过数据](https://docs.substreams.dev/reference-material/reliability-guarantees)。 ## 其他资源 -- For additional guidance, reference the [Tutorials](https://docs.substreams.dev/tutorials/intro-to-tutorials) and follow the [How-To Guides](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) on Streaming Fast docs. -- For a deeper understanding of how Substreams works, explore the [architectural overview](https://docs.substreams.dev/reference-material/architecture) of the data service. +- 如需更多指导,请参考 Streaming Fast 文档中的[教程](https://docs.substreams.dev/tutorials/intro-to-tutorials),并遵循[操作指南](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams)。 +- 要更深入地了解子流的工作方式,请探索数据服务的[架构概述](https://docs.substreams.dev/reference-material/architecture)。 diff --git a/website/src/pages/zh/supported-networks.mdx index 986d59ce75b3..e45f7dfbe97c 100644 --- a/website/src/pages/zh/supported-networks.mdx +++ b/website/src/pages/zh/supported-networks.mdx @@ -16,13 +16,13 @@ export const getStaticProps = getSupportedNetworksStaticProps -- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. -- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- Subgraph Studio依赖于底层技术的稳定性和可靠性,例如JSON-RPC、Firehose和Substreams端点。 +- 现在可以使用`gnosis`网络标识符部署索引Gnosis链的子图。 +- 如果一个子图是通过CLI发布并由索引人获取的,那么从技术上讲,即使没有支持,也可以对其进行查询,并且正在努力进一步简化新网络的集成。 +- 有关去中心化网络支持哪些功能的完整列表,请参阅此[页面](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md)。 -## Running Graph Node locally +## 在本地运行Graph 节点 -If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. +如果您想使用的网络不受The Graph的去中心化网络支持,您可以运行自己的[Graph节点](https://github.com/graphprotocol/graph-node) 来索引任何与以太坊虚拟机(EVM)兼容的网络。确保您使用的[版本](https://github.com/graphprotocol/graph-node/releases) 支持该网络,并且您设置好了所需的配置。 -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. 
+Graph Node 还可以通过 Firehose 集成对其他协议进行索引。目前已为 NEAR、Arweave 和基于 Cosmos 的网络创建了 Firehose 集成。此外,Graph Node 可以在任何支持子流的网络上支持由子流驱动的子图。 diff --git a/website/src/pages/zh/token-api/_meta-titles.json b/website/src/pages/zh/token-api/_meta-titles.json index 692cec84bd58..b14132282d69 100644 --- a/website/src/pages/zh/token-api/_meta-titles.json +++ b/website/src/pages/zh/token-api/_meta-titles.json @@ -1,5 +1,6 @@ { "mcp": "MCP", - "evm": "EVM Endpoints", - "monitoring": "Monitoring Endpoints" + "evm": "EVM端点", + "monitoring": "监控端点", + "faq": "常见问题" } diff --git a/website/src/pages/zh/token-api/_meta.js b/website/src/pages/zh/token-api/_meta.js index 09aa7ffc2649..0e526f673a66 100644 --- a/website/src/pages/zh/token-api/_meta.js +++ b/website/src/pages/zh/token-api/_meta.js @@ -5,4 +5,5 @@ export default { mcp: titles.mcp, evm: titles.evm, monitoring: titles.monitoring, + faq: '', } diff --git a/website/src/pages/zh/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/zh/token-api/evm/get-balances-evm-by-address.mdx index 3386fd078059..799a52e67504 100644 --- a/website/src/pages/zh/token-api/evm/get-balances-evm-by-address.mdx +++ b/website/src/pages/zh/token-api/evm/get-balances-evm-by-address.mdx @@ -1,9 +1,9 @@ --- -title: Token Balances by Wallet Address +title: 按钱包地址的代币余额 template: type: openApi apiId: tokenApi operationId: getBalancesEvmByAddress --- -The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
+EVM 余额端点提供账户当前代币持有情况的快照。该端点返回指定钱包地址在与以太坊兼容的区块链上持有的原生代币和 ERC-20 代币的当前余额。 diff --git a/website/src/pages/zh/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/zh/token-api/evm/get-holders-evm-by-contract.mdx index 0bb79e41ed54..8c3776959410 100644 --- a/website/src/pages/zh/token-api/evm/get-holders-evm-by-contract.mdx +++ b/website/src/pages/zh/token-api/evm/get-holders-evm-by-contract.mdx @@ -1,9 +1,9 @@ --- -title: Token Holders by Contract Address +title: 按合约地址分类的代币持有人 template: type: openApi apiId: tokenApi operationId: getHoldersEvmByContract --- -The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. +EVM 持有人端点提供关于持有特定代币的地址的信息,包括每个持有人的余额。这有助于分析特定合约的代币分布情况。 diff --git a/website/src/pages/zh/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/zh/token-api/evm/get-ohlc-prices-evm-by-contract.mdx index d1558ddd6e78..086b1d8a5dc3 100644 --- a/website/src/pages/zh/token-api/evm/get-ohlc-prices-evm-by-contract.mdx +++ b/website/src/pages/zh/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -1,9 +1,9 @@ --- -title: Token OHLCV prices by Contract Address +title: 按合约地址列出的代币OHLCV价格 template: type: openApi apiId: tokenApi operationId: getOhlcPricesEvmByContract --- -The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHCLV) format.
+EVM 价格端点以 Open/High/Low/Close/Volume(OHLCV)格式提供定价数据。 diff --git a/website/src/pages/zh/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/zh/token-api/evm/get-tokens-evm-by-contract.mdx index b6fab8011fc2..bb65c736edef 100644 --- a/website/src/pages/zh/token-api/evm/get-tokens-evm-by-contract.mdx +++ b/website/src/pages/zh/token-api/evm/get-tokens-evm-by-contract.mdx @@ -1,9 +1,9 @@ --- -title: Token Holders and Supply by Contract Address +title: 按合约地址的代币持有人和供应量 template: type: openApi apiId: tokenApi operationId: getTokensEvmByContract --- -The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more. +代币端点提供来自受支持 EVM 区块链上特定 ERC-20 代币合约的合约元数据。元数据包括名称、符号、持有者人数、流通供应量、小数位数等。 diff --git a/website/src/pages/zh/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/zh/token-api/evm/get-transfers-evm-by-address.mdx index 604c185588ea..4e9111fb77ec 100644 --- a/website/src/pages/zh/token-api/evm/get-transfers-evm-by-address.mdx +++ b/website/src/pages/zh/token-api/evm/get-transfers-evm-by-address.mdx @@ -1,9 +1,9 @@ --- -title: Token Transfers by Wallet Address +title: 按钱包地址的代币转账 template: type: openApi apiId: tokenApi operationId: getTransfersEvmByAddress --- -The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. +EVM 转账端点提供对指定地址历史代币转账事件的访问。此端点是追踪交易历史记录和分析代币随时间流动的理想选择。 diff --git a/website/src/pages/zh/token-api/faq.mdx b/website/src/pages/zh/token-api/faq.mdx new file mode 100644 index 000000000000..99e1466c9952 --- /dev/null +++ b/website/src/pages/zh/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API.
+ +## 通用 + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. 
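The authentication rules above (use the access token generated on The Graph Market, not the API key, and include the literal `Bearer ` prefix) can be sketched as a small JavaScript helper. `authHeaders` is a hypothetical name defined here for illustration, not part of any official SDK:

```javascript
// Hypothetical helper (not from an official SDK): builds the header set
// expected by the Token API from a Graph Market access token.
function authHeaders(accessToken) {
  if (typeof accessToken !== 'string' || accessToken.length === 0) {
    throw new Error('missing access token: generate one on The Graph Market')
  }
  if (accessToken.startsWith('Bearer ')) {
    // A common mistake is double-prefixing; the prefix is added here exactly once.
    throw new Error('pass the raw access token, without the "Bearer " prefix')
  }
  return { Authorization: `Bearer ${accessToken}`, Accept: 'application/json' }
}

// Usage with fetch (network call, shown for illustration only):
// fetch(`https://token-api.thegraph.com/balances/evm/${address}`, { headers: authHeaders(token) })
```

Centralizing the header construction this way makes the 401/403 failure modes described later (missing prefix, wrong token) fail fast in one place.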
+ +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer <jwt-token>` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed).
For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? 
+ +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer <jwt-token>`. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
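The pagination and string-amount answers above can be sketched in plain JavaScript. `pagedUrl` and `formatAmount` are illustrative helper names defined here, not part of the API or any SDK:

```javascript
// Build a paginated Token API URL; `limit` may be up to 500, `page` is 1-indexed.
function pagedUrl(base, path, { limit = 10, page = 1 } = {}) {
  const url = new URL(path, base)
  url.searchParams.set('limit', String(limit))
  url.searchParams.set('page', String(page))
  return url.toString()
}

// Convert a string-encoded token amount to a human-readable decimal string,
// using BigInt so values beyond Number.MAX_SAFE_INTEGER keep full precision.
function formatAmount(raw, decimals) {
  const value = BigInt(raw)
  const base = 10n ** BigInt(decimals)
  const whole = value / base
  const frac = (value % base).toString().padStart(decimals, '0').replace(/0+$/, '')
  return frac ? `${whole}.${frac}` : whole.toString()
}

// Page 2 of up to 50 balances, then format an 18-decimal amount from `data`:
pagedUrl('https://token-api.thegraph.com', '/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208', { limit: 50, page: 2 })
formatAmount('1500000000000000000', 18) // "1.5"
```

Keeping the conversion in BigInt until the final string avoids the silent rounding that `Number('1500000000000000000')` would introduce.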
diff --git a/website/src/pages/zh/token-api/mcp/claude.mdx b/website/src/pages/zh/token-api/mcp/claude.mdx index 0da8f2be031d..1287a7754a1c 100644 --- a/website/src/pages/zh/token-api/mcp/claude.mdx +++ b/website/src/pages/zh/token-api/mcp/claude.mdx @@ -1,22 +1,22 @@ --- -title: Using Claude Desktop to Access the Token API via MCP -sidebarTitle: Claude Desktop +title: 使用 Claude Desktop 通过 MCP 访问 Token API +sidebarTitle: Claude Desktop --- -## Prerequisites +## 先决条件 -- [Claude Desktop](https://claude.ai/download) installed. -- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). -- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. -- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. +- [Claude Desktop](https://claude.ai/download) 已安装。 +- 一个来自 [The Graph 市场](https://thegraph.market/) 的 [JWT 代币](/token-api/quick-start)。 +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) 或 [`bunx`](https://bun.sh/) 已安装并在您的路径中可用。 +- `@pinax/mcp` 软件包需要 Node 18+,因为它依赖于内置的 `fetch()` / `Headers`,这些功能在 Node 17 或更早版本中不可用。您可能需要指定最新 Node 版本的确切路径,或卸载旧版本的 Node 以确保 `@pinax/mcp` 使用正确的版本。 -![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) +![Claude Desktop的设置面板显示MCP服务器配置选项。](/img/claude-preview-token-api.png) -## Configuration +## 配置 -Create or edit your `claude_desktop_config.json` file.
+创建或编辑您的 `claude_desktop_config.json` 文件。 -> **Settings** > **Developer** > **Edit Config** +> **设置** > **开发者** > **编辑配置** - OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` - Windows: `%APPDATA%\Claude\claude_desktop_config.json` @@ -25,34 +25,34 @@ Create or edit your `claude_desktop_config.json` file. ```json label="claude_desktop_config.json" { "mcpServers": { - "mcp-pinax": { + "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { - "ACCESS_TOKEN": "" + "ACCESS_TOKEN": "" } } } } ``` -## Troubleshooting +## 故障排除 -To enable logs for the MCP, use the `--verbose true` option. +要启用 MCP 日志,请使用 `--verbose true` 选项。 ### ENOENT -![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) +![Claude Desktop中的错误对话框显示“ENOENT”系统错误,表示在系统路径中找不到npx/bunx命令。](/img/claude-ENOENT.png) -Try to use the full path of the command instead: +请尝试使用命令的完整路径: -- Run `which npx` or `which bunx` to get the path of the command. -- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). +- 运行 `which npx` 或 `which bunx` 来获取命令的路径。 +- 将配置文件中的`npx`或`bunx`替换为完整路径(例如`/home/user/bin/bunx`)。 -### Server disconnected +### 与服务器连接已断开 -![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) +![Claude 桌面中显示“服务器断开连接”消息的连接错误通知。](/img/claude-server-disconnect.png) -Double-check your API key otherwise look in your navigator if `https://token-api.thegraph.com/sse` is reachable. +请仔细检查您的 API 密钥,否则请在浏览器中查看 `https://token-api.thegraph.com/sse` 是否可访问。
+> 您总是可以查看`Claude/logs/mcp.log`和`Claude/logs/mcp-server-pinax.log`下的完整日志以了解更多详情。 diff --git a/website/src/pages/zh/token-api/mcp/cline.mdx b/website/src/pages/zh/token-api/mcp/cline.mdx index ab54c0c8f6f0..43c2552b9291 100644 --- a/website/src/pages/zh/token-api/mcp/cline.mdx +++ b/website/src/pages/zh/token-api/mcp/cline.mdx @@ -1,22 +1,22 @@ --- -title: Using Cline to Access the Token API via MCP +title: 通过 MCP 使用 Cline 访问代币 API sidebarTitle: Cline --- -## Prerequisites +## 先决条件 -- [Cline](https://cline.bot/) installed. -- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). -- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. -- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. +- [Cline](https://cline.bot/) 已安装。 +- 一个来自 [The Graph 市场](https://thegraph.market/) 的 [JWT 代币](/token-api/quick-start)。 +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) 或 [`bunx`](https://bun.sh/) 已安装并在您的路径中可用。 +- `@pinax/mcp` 软件包需要 Node 18+,因为它依赖于内置的 `fetch()` / `Headers`,这些功能在 Node 17 或更早版本中不可用。您可能需要指定最新 Node 版本的确切路径,或卸载旧版本的 Node 以确保 `@pinax/mcp` 使用正确的版本。 -![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible](/img/cline-preview-token-api.png) +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) -## Configuration +## 配置 -Create or edit your `cline_mcp_settings.json` file.
+创建或编辑您的 `cline_mcp_settings.json` 文件。 -> **MCP Servers** > **Installed** > **Configure MCP Servers** +> **MCP 服务器** > **安装** > **配置 MCP 服务器** ```json label="cline_mcp_settings.json" { @@ -32,21 +32,21 @@ Create or edit your `cline_mcp_settings.json` file. } ``` -## Troubleshooting +## 故障排除 -To enable logs for the MCP, use the `--verbose true` option. +要启用 MCP 日志,请使用 `--verbose true` 选项。 ### ENOENT ![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) -Try to use the full path of the command instead: +请尝试使用命令的完整路径: -- Run `which npx` or `which bunx` to get the path of the command. -- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). +- 运行 `which npx` 或 `which bunx` 来获取命令的路径。 +- 将配置文件中的`npx`或`bunx`替换为完整路径(例如`/home/user/bin/bunx`)。 -### Server disconnected +### 与服务器连接已断开 -![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) +![Cline 连接错误通知显示服务器断开连接警告。](/img/cline-missing-variables.png) -Double-check your API key otherwise look in your navigator if `https://token-api.thegraph.com/sse` is reachable. +请仔细检查您的 API 密钥,否则请在浏览器中查看 `https://token-api.thegraph.com/sse` 是否可访问。 diff --git a/website/src/pages/zh/token-api/mcp/cursor.mdx b/website/src/pages/zh/token-api/mcp/cursor.mdx index 658108d1337b..b9295d7cb315 100644 --- a/website/src/pages/zh/token-api/mcp/cursor.mdx +++ b/website/src/pages/zh/token-api/mcp/cursor.mdx @@ -3,16 +3,16 @@ title: Using Cursor to Access the Token API via MCP sidebarTitle: Cursor --- -## Prerequisites +## 先决条件 - [Cursor](https://www.cursor.com/) installed. -- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). -- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. -- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older.
You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. +- 一个来自 [The Graph 市场](https://thegraph.market/) 的 [JWT 代币](/token-api/quick-start)。 +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) 或 [`bunx`](https://bun.sh/) 已安装并在您的路径中可用。 +- `@pinax/mcp` 软件包需要 Node 18+,因为它依赖于内置的 `fetch()` / `Headers`,这些功能在 Node 17 或更早版本中不可用。您可能需要指定最新 Node 版本的确切路径,或卸载旧版本的 Node 以确保 `@pinax/mcp` 使用正确的版本。 ![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) -## Configuration +## 配置 Create or edit your `~/.cursor/mcp.json` file. @@ -32,19 +32,19 @@ Create or edit your `~/.cursor/mcp.json` file. } ``` -## Troubleshooting +## 故障排除 ![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) -To enable logs for the MCP, use the `--verbose true` option. +要启用 MCP 日志,请使用 `--verbose true` 选项。 ### ENOENT -Try to use the full path of the command instead: +请尝试使用命令的完整路径: -- Run `which npx` or `which bunx` to get the path of the command. -- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). +- 运行 `which npx` 或 `which bunx` 来获取命令的路径。 +- 将配置文件中的`npx`或`bunx`替换为完整路径(例如`/home/user/bin/bunx`)。 -### Server disconnected +### 与服务器连接已断开 -Double-check your API key otherwise look in your navigator if `https://token-api.thegraph.com/sse` is reachable.
+请仔细检查您的 API 密钥,否则请在浏览器中查看 `https://token-api.thegraph.com/sse` 是否可访问。 diff --git a/website/src/pages/zh/token-api/monitoring/get-health.mdx b/website/src/pages/zh/token-api/monitoring/get-health.mdx index 57a827b3343b..ca7c078746e0 100644 --- a/website/src/pages/zh/token-api/monitoring/get-health.mdx +++ b/website/src/pages/zh/token-api/monitoring/get-health.mdx @@ -1,5 +1,5 @@ --- -title: Get health status of the API +title: 获取 API 的健康状况 template: type: openApi apiId: tokenApi diff --git a/website/src/pages/zh/token-api/monitoring/get-networks.mdx b/website/src/pages/zh/token-api/monitoring/get-networks.mdx index 0ea3c485ddb9..b54ba746cf5d 100644 --- a/website/src/pages/zh/token-api/monitoring/get-networks.mdx +++ b/website/src/pages/zh/token-api/monitoring/get-networks.mdx @@ -1,5 +1,5 @@ --- -title: Get supported networks of the API +title: 获取 API 支持的网络 template: type: openApi apiId: tokenApi diff --git a/website/src/pages/zh/token-api/monitoring/get-version.mdx b/website/src/pages/zh/token-api/monitoring/get-version.mdx index 0be6b7e92d04..3f7d769d7135 100644 --- a/website/src/pages/zh/token-api/monitoring/get-version.mdx +++ b/website/src/pages/zh/token-api/monitoring/get-version.mdx @@ -1,5 +1,5 @@ --- -title: Get the version of the API +title: 获取 API 版本 template: type: openApi apiId: tokenApi diff --git a/website/src/pages/zh/token-api/quick-start.mdx b/website/src/pages/zh/token-api/quick-start.mdx index 4653c3d41ac6..f55201103dc4 100644 --- a/website/src/pages/zh/token-api/quick-start.mdx +++ b/website/src/pages/zh/token-api/quick-start.mdx @@ -1,23 +1,23 @@ --- -title: Token API Quick Start -sidebarTitle: Quick Start +title: 代币 API 快速入门 +sidebarTitle: 快速开始 --- -![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) +![The Graph 代币 API 快速入门横幅](/img/token-api-quickstart-banner.jpg) -> [!CAUTION] This product is currently in Beta and under active development.
If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). +> [!CAUTION] 此产品目前处于测试阶段,正在积极开发中。如果您有任何反馈,请通过[Discord](https://discord.gg/graphprotocol)联系我们。 -The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. +The Graph 的代币 API 允许您通过 GET 请求访问区块链代币信息。本指南旨在帮助您快速将代币 API 集成到您的应用程序中。 -The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. +代币 API 提供对链上代币数据的访问,包括余额、持有人、详细的代币元数据和历史转账。此 API 还使用模型上下文协议(MCP),借助 Claude 等 AI 工具为原始区块链数据补充上下文洞察。 -## Prerequisites +## 先决条件 -Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. +在开始之前,请通过注册 [The Graph 市场](https://thegraph.market/) 获取一个 JWT 代币。您可以使用下拉菜单为您的每个 API 密钥生成一个 JWT 代币。 -## Authentication +## 认证 -All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. +所有 API 端点都使用以 `Authorization: Bearer <jwt-token>` 形式插入请求头的 JWT 代币进行身份验证。 ```json { @@ -27,9 +27,9 @@ All API endpoints are authenticated using a JWT token inserted in the header as } ``` -## Using JavaScript +## 使用 JavaScript -Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: +使用 **JavaScript** 发出 API 请求:添加请求参数,然后从相关端点获取数据。例如: ```js label="index.js" const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' @@ -47,11 +47,11 @@ fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) .catch((err) => console.error(err)) ``` -Make sure to replace `` with the JWT Token generated from your API key.
+请务必用您的 API 密钥生成的 JWT 代币替换 `<jwt-token>`。 -## Using cURL (Command Line) +## 使用 cURL (命令行) -To make an API request using **cURL**, open your command line and run the following command. +若要使用 **cURL** 发出 API 请求,请打开命令行并运行以下命令。 ```curl curl --request GET \ @@ -60,13 +60,13 @@ curl --request GET \ --header 'Authorization: Bearer <jwt-token>' ``` -Make sure to replace `` with the JWT Token generated from your API key. +请务必用您的 API 密钥生成的 JWT 代币替换 `<jwt-token>`。 -> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. +> 大多数类 Unix 系统都已预装 cURL。对于 Windows,您可能需要安装 cURL。 -## Troubleshooting +## 故障排除 -If the API call fails, try printing out the full response object for additional error details. For example: +如果 API 调用失败,请尝试打印完整的响应对象以获取额外的错误详细信息。例如: ```js label="index.js" fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options)